1751 - 1800 of 1879
  • 1751.
    Wiberg, Viktor
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för fysik.
    Terrain machine learning: A predictive method for estimating terrain model parameters using simulated sensors, vehicle and terrain (2018). Independent thesis Advanced level (degree of Master), 20 credits / 30 HE credits. Student thesis (Degree project).
    Abstract [en]

    Predicting terrain trafficability of deformable terrain is a difficult task with applications in, e.g., forestry, agriculture, and exploratory missions. The currently used techniques are neither practical, efficient, nor sufficiently accurate, and they are inadequate for certain soil types. An online method which predicts terrain trafficability is of interest for any vehicle that aims to reduce ground damage, improve steering and increase mobility. This thesis presents a novel approach for predicting the model parameters used in modelling a virtual terrain. The model parameters include particle stiffness, tangential friction, rolling resistance and two parameters related to particle plasticity and adhesion. Using multi-body dynamics, both vehicle and terrain can be simulated, which allows for an efficient exploration of a great variety of terrains. A vehicle with access to certain sensors can frequently gather sensor data providing information regarding vehicle-terrain interaction. The proposed method develops a statistical model which uses the sensor data in predicting the terrain model parameters. However, these parameters are specified at model particle level and do not directly explain bulk properties measurable on a real terrain. Simulations were carried out with a single tracked bogie constrained to move in one direction when traversing flat, homogeneous terrains. The statistical model with the best prediction accuracy was ridge regression using polynomial features and interaction terms of second degree. The model proved capable of predicting particle stiffness, tangential friction and particle plasticity with moderate accuracy. However, it was deduced that the current predictors and training scenarios were insufficient for estimating particle adhesion and rolling resistance. Nevertheless, this thesis indicates that it should be possible to develop a method which successfully predicts terrain model properties.

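    The statistical model named in the abstract above (entry 1751), ridge regression on second-degree polynomial and interaction features, can be sketched in a few lines of scikit-learn. This is a hedged illustration only: the feature and target arrays below are random placeholders, not data or code from the thesis.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler, PolynomialFeatures
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 6))   # placeholder per-traversal sensor features
    y = rng.normal(size=(500, 3))   # placeholder targets: stiffness, friction, plasticity

    model = make_pipeline(
        StandardScaler(),
        PolynomialFeatures(degree=2, include_bias=False),  # squares + interaction terms
        Ridge(alpha=1.0),                                   # L2-regularised regression
    )
    model.fit(X, y)
    print(model.predict(X[:2]))
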
  • 1752.
    Widebäck West, Nikolaus
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    Multiple Session 3D Reconstruction using RGB-D Cameras (2014). Independent thesis Advanced level (degree of Master), 20 credits / 30 HE credits. Student thesis (Degree project).
    Abstract [en]

    In this thesis we study the problem of multi-session dense RGB-D SLAM for 3D reconstruction. Multi-session reconstruction can allow users to capture parts of an object that could not easily be captured in one session, due for instance to poor accessibility or user mistakes. We first present a thorough overview of single-session dense RGB-D SLAM and describe the multi-session problem as a loosening of the incremental camera movement and static scene assumptions commonly held in the single-session case. We then implement and evaluate several variations on a system for doing two-session reconstruction as an extension to a single-session dense RGB-D SLAM system.

    The extension from one to several sessions is divided into registering separate sessions into a single reference frame, re-optimizing the camera trajectories, and fusing together the data to generate a final 3D model. Registration is done by matching reconstructed models from the separate sessions using one of two adaptations of a 3D object detection pipeline. The registration pipelines are evaluated with many different sub-steps on a challenging dataset, and it is found that robust registration can be achieved using the proposed methods on scenes without degenerate shape symmetry. In particular, we find that using plane matches between two sessions as constraints for as much as possible of the registration pipeline improves results.

    Several different strategies for re-optimizing camera trajectories using data from both sessions are implemented and evaluated. The re-optimization strategies are based on re-tracking the camera poses from all sessions together, and then optionally optimizing over the full problem as represented on a pose graph. The camera tracking is done by incrementally building and tracking against a TSDF volume, from which a final 3D mesh model is extracted. The whole system is qualitatively evaluated against a realistic dataset for multi-session reconstruction. It is concluded that the overall approach is successful in reconstructing objects from several sessions, but that other fine-grained registration methods would be required in order to achieve multi-session reconstructions that are indistinguishable from single-session results in terms of reconstruction quality.

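    The data fusion step mentioned in the abstract above relies on a truncated signed distance function (TSDF) volume. The sketch below shows only the standard weighted-average TSDF update, not the thesis implementation; the volume size and the synthetic observation are made-up placeholders.

    import numpy as np

    def fuse_tsdf(tsdf, weight, sdf_obs, trunc=0.05, max_weight=100.0):
        """Fuse one frame's signed-distance observations into the voxel grid."""
        d = np.clip(sdf_obs / trunc, -1.0, 1.0)    # truncate and normalise distances
        valid = sdf_obs > -trunc                   # ignore voxels far behind the surface
        w_obs = valid.astype(float)
        new_weight = weight + w_obs
        tsdf = np.where(valid, (tsdf * weight + d * w_obs) / np.maximum(new_weight, 1e-9), tsdf)
        return tsdf, np.minimum(new_weight, max_weight)

    tsdf = np.ones((64, 64, 64))          # hypothetical volume, initialised to "far"
    weight = np.zeros_like(tsdf)
    sdf_obs = np.random.default_rng(0).normal(scale=0.1, size=tsdf.shape)  # fake observation
    tsdf, weight = fuse_tsdf(tsdf, weight, sdf_obs)
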
  • 1753.
    Wiedemann, Thomas
    et al.
    German Aerospace Center, Oberpfaffenhofen, Germany.
    Shutin, Dmitriy
    German Aerospace Center, Oberpfaffenhofen, Germany.
    Lilienthal, Achim
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Model-based gas source localization strategy for a cooperative multi-robot system - A probabilistic approach and experimental validation incorporating physical knowledge and model uncertainties (2019). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 118, pp. 66-79. Article in journal (Refereed).
    Abstract [en]

    Sampling gas distributions by robotic platforms in order to find gas sources is an appealing approach to alleviate threats for a human operator. Different sampling strategies for robotic gas exploration exist. In this paper we investigate the benefit that could be obtained by incorporating physical knowledge about the gas dispersion, by exploring a gas diffusion process using a multi-robot system. The physical behavior of the diffusion process is modeled using a Partial Differential Equation (PDE) which is integrated into the exploration strategy. It is assumed that the diffusion process is driven by only a few spatial sources at unknown locations with unknown intensity. The objective of the exploration strategy is to guide the robots to informative measurement locations and, by means of concentration measurements, estimate the source parameters, in particular their number, locations and magnitudes. To this end we propose a probabilistic approach towards PDE identification under sparsity constraints using factor graphs and a message passing algorithm. Moreover, message passing schemes permit efficient distributed implementation of the algorithm, which makes it suitable for a multi-robot system. We designed an experimental setup that allows us to evaluate the performance of the exploration strategy in hardware-in-the-loop experiments as well as in experiments with real ethanol gas under laboratory conditions. The results indicate that the proposed exploration approach accelerates the identification of the source parameters and outperforms systematic sampling.

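    The forward model assumed in the entry above, a diffusion PDE driven by a few point sources, can be illustrated with a minimal explicit finite-difference simulation. This is only an illustrative forward model with assumed constants; the paper's actual contribution, the factor-graph/message-passing source estimator, is not shown, and the source positions below are hypothetical.

    import numpy as np

    def diffusion_step(c, sources, D=0.1, dt=0.1, dx=1.0):
        """One explicit Euler step of dc/dt = D * laplacian(c) + sources."""
        lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
               np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4.0 * c) / dx**2
        return c + dt * (D * lap + sources)

    c = np.zeros((64, 64))                        # concentration field
    sources = np.zeros_like(c)
    sources[20, 30], sources[45, 10] = 1.0, 0.5   # two hypothetical point sources
    for _ in range(200):
        c = diffusion_step(c, sources)
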
  • 1754.
    Wikander, Gustav
    Linköpings universitet, Institutionen för systemteknik.
    Three dimensional object recognition for robot conveyor picking (2009). Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Shape-based matching (SBM) is a method for matching objects in greyscale images. It extracts edges from search images and matches them to a model using a similarity measure. In this thesis we extend SBM to find the tilt and height position of the object in addition to the z-plane rotation and x-y-position. The search is conducted using a scale pyramid to improve the search speed. A 3D matching can be done for small tilt angles by using SBM on height data and extending it with additional steps to calculate the tilt of the object. The full pose is useful for picking objects with an industrial robot.

    The tilt of the object is calculated using a RANSAC plane estimator. After the 2D search the differences in height between all corresponding points of the model and the live image are calculated. By estimating a plane to this difference the tilt of the object can be calculated. Using the tilt the model edges are tilted in order to improve the matching at the next scale level.

    The problems that arise with occlusion and missing data have been studied. Missing data and erroneous data have been thresholded manually after conducting tests where automatic filling of missing data did not noticeably improve the matching. The automatic filling could introduce new false edges and remove true ones, thus lowering the score.

    Experiments have been conducted where objects have been placed at increasing tilt angles. The results show that the matching algorithm is object dependent and correct matches are almost always found for tilt angles less than 10 degrees. This is very similar to the original 2D SBM because the model edges do not change much for such small angles. For tilt angles up to about 25 degrees most objects can be matched, and for well-suited objects correct matches can be made at tilt angles of up to 40 degrees.

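    The tilt estimation step described above, fitting a plane to the model-image height differences with RANSAC, can be sketched as follows. This is a generic RANSAC plane fit on synthetic points with assumed thresholds and noise levels, not the thesis code.

    import numpy as np

    def ransac_plane(points, n_iter=200, thresh=0.5, seed=0):
        """Fit z = a*x + b*y + c to (x, y, z) points, robust to outliers."""
        rng = np.random.default_rng(seed)
        best = (0, None)
        for _ in range(n_iter):
            p = points[rng.choice(len(points), 3, replace=False)]
            A = np.c_[p[:, :2], np.ones(3)]
            try:
                a, b, c = np.linalg.solve(A, p[:, 2])   # plane through the 3 samples
            except np.linalg.LinAlgError:
                continue
            resid = np.abs(points[:, 0] * a + points[:, 1] * b + c - points[:, 2])
            n_in = int(np.count_nonzero(resid < thresh))
            if n_in > best[0]:
                best = (n_in, (a, b, c))
        return best[1]

    rng = np.random.default_rng(1)
    xy = rng.uniform(-50, 50, size=(400, 2))
    z = 0.1 * xy[:, 0] - 0.05 * xy[:, 1] + rng.normal(scale=0.2, size=400)  # tilted plane + noise
    a, b, c = ransac_plane(np.c_[xy, z])
    tilt_deg = np.degrees(np.arctan(np.hypot(a, b)))  # angle of the plane normal from vertical
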
  • 1755.
    Wilkinson, Tomas
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Learning based Word Search and Visualisation for Historical Manuscript Images (2019). Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    Today, work with historical manuscripts is nearly exclusively done manually, by researchers in the humanities as well as laypeople mapping out their personal genealogy. This is a highly time-consuming endeavour, as it is not uncommon to spend months with the same volume of a few hundred pages. The last few decades have seen an ongoing effort to digitise manuscripts, both for preservation purposes and to increase accessibility. This has the added effect of enabling the use of methods and algorithms from Image Analysis and Machine Learning that have great potential, both in making existing work more efficient and in creating new methodologies for manuscript-based research.

    The first part of this thesis focuses on Word Spotting, the task of searching for a given text query in a manuscript collection. This can be broken down into two tasks, detecting where the words are located on the page, and then ranking the words according to their similarity to a search query. We propose Deep Learning models to do both, separately and then simultaneously, and successfully search through a large manuscript collection consisting of over a hundred thousand pages.

    A limiting factor in applying learning-based methods to historical manuscript images is the cost, and therefore the lack, of annotated data needed to train machine learning models. We propose several ways to mitigate this problem, including generating synthetic data, augmenting existing data to get better value from it, and learning from pre-existing, partially annotated data that was previously unusable.

    In the second part, a method for visualising manuscript collections called the Image-based Word Cloud is proposed. Much like its text-based counterpart, it arranges the most representative words in a collection into a cloud, where the size of each word is proportional to its frequency of occurrence. This grants a user a single-image overview of a manuscript collection, regardless of its size. We further propose a way to estimate a manuscript's production date. This can grant historians context that is crucial for correctly interpreting the contents of a manuscript.

  • 1756.
    Wilkinson, Tomas
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Brun, Anders
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    A novel word segmentation method based on object detection and deep learning (2015). In: Advances in Visual Computing: 11th International Symposium, ISVC 2015, Las Vegas, NV, USA, December 14-16, 2015, Proceedings, Part I / [ed] Bebis, G; Boyle, R; Parvin, B; Koracin, D; Pavlidis, I; Feris, R; McGraw, T; Elendt, M; Kopper, R; Ragan, E; Ye, Z; Weber, G, Springer, 2015, pp. 231-240. Conference paper (Refereed).
    Abstract [en]

    The segmentation of individual words is a crucial step in several data mining methods for historical handwritten documents. Examples of applications include visual searching for query words (word spotting) and character-by-character text recognition. In this paper, we present a novel method for word segmentation that is adapted from recent advances in computer vision, deep learning and generic object detection. Our method has unique capabilities and it has found practical use in our current research project. It can easily be trained for different kinds of historical documents, uses full grayscale information, and requires neither binarization as pre-processing nor prior segmentation of individual text lines. We evaluate its performance using established error metrics, previously used in competitions for word segmentation, and demonstrate its usefulness for a 15th century handwritten document.

  • 1757.
    Wilkinson, Tomas
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Brun, Anders
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Semantic and Verbatim Word Spotting using Deep Neural Networks (2016). In: Proceedings of the 2016 15th International Conference on Frontiers in Handwriting Recognition (ICFHR), 2016, pp. 307-312. Conference paper (Refereed).
    Abstract [en]

    In the last few years, deep convolutional neural networks have become ubiquitous in computer vision, achieving state-of-the-art results on problems like object detection, semantic segmentation, and image captioning. However, they have not yet been widely investigated in the document analysis community. In this paper, we present a word spotting system based on convolutional neural networks. We train a network to extract a powerful image representation, which we then embed into a word embedding space. This allows us to perform word spotting using both query-by-string and query-by-example in a variety of word embedding spaces, both learned and handcrafted, for verbatim as well as semantic word spotting. Our novel approach is versatile, and the evaluation shows that it outperforms the previous state of the art for word spotting on standard datasets.

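    The retrieval step implied by the abstract above, ranking word-image embeddings against an embedded query, reduces to a cosine-similarity search. The sketch below uses random placeholder embeddings; in the paper the embeddings come from the trained network and a word-embedding function, neither of which is reproduced here.

    import numpy as np

    rng = np.random.default_rng(0)
    word_embeddings = rng.normal(size=(10000, 128))   # one row per word image (placeholder)
    query_embedding = rng.normal(size=128)            # embedded query string (placeholder)

    def rank_by_cosine(E, q):
        E = E / np.linalg.norm(E, axis=1, keepdims=True)
        q = q / np.linalg.norm(q)
        scores = E @ q
        return np.argsort(-scores), scores

    order, scores = rank_by_cosine(word_embeddings, query_embedding)
    print(order[:10])   # indices of the ten best-matching word images
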
  • 1758.
    Wilkinson, Tomas
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Brun, Anders
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Visualizing document image collections using image-based word clouds (2015). In: Advances in Visual Computing: 11th International Symposium, ISVC 2015, Las Vegas, NV, USA, December 14-16, 2015, Proceedings, Part I / [ed] Bebis, G; Boyle, R; Parvin, B; Koracin, D; Pavlidis, I; Feris, R; McGraw, T; Elendt, M; Kopper, R; Ragan, E; Ye, Z; Weber, G, Springer, 2015, pp. 297-306. Conference paper (Refereed).
  • 1759.
    Wilkinson, Tomas
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Lindström, Jonas
    Uppsala universitet, Humanistisk-samhällsvetenskapliga vetenskapsområdet, Historisk-filosofiska fakulteten, Historiska institutionen.
    Brun, Anders
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Neural Ctrl-F: Segmentation-free Query-by-String Word Spotting in Handwritten Manuscript Collections (2017). Conference paper (Other academic).
  • 1760.
    Wilkinson, Tomas
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Lindström, Jonas
    Uppsala universitet, Humanistisk-samhällsvetenskapliga vetenskapsområdet, Historisk-filosofiska fakulteten, Historiska institutionen.
    Brun, Anders
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Neural Ctrl-F: Segmentation-free query-by-string word spotting in handwritten manuscript collections (2017). In: 2017 IEEE International Conference on Computer Vision (ICCV), IEEE, 2017, pp. 4443-4452. Conference paper (Refereed).
    Abstract [en]

    In this paper, we approach the problem of segmentation-free query-by-string word spotting for handwritten documents. In other words, we use methods inspired from computer vision and machine learning to search for words in large collections of digitized manuscripts. In particular, we are interested in historical handwritten texts, which are often far more challenging than modern printed documents. This task is important, as it provides people with a way to quickly find what they are looking for in large collections that are tedious and difficult to read manually. To this end, we introduce an end-to-end trainable model based on deep neural networks that we call Ctrl-F-Net. Given a full manuscript page, the model simultaneously generates region proposals, and embeds these into a distributed word embedding space, where searches are performed. We evaluate the model on common benchmarks for handwritten word spotting, outperforming the previous state-of-the-art segmentation-free approaches by a large margin, and in some cases even segmentation-based approaches. One interesting real-life application of our approach is to help historians to find and count specific words in court records that are related to women's sustenance activities and division of labor. We provide promising preliminary experiments that validate our method on this task.

  • 1761.
    Wilkinson, Tomas
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion.
    Lindström, Jonas
    Uppsala universitet, Humanistisk-samhällsvetenskapliga vetenskapsområdet, Historisk-filosofiska fakulteten, Historiska institutionen.
    Brun, Anders
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion.
    Neural Word Search in Historical Manuscript Collections. Manuscript (preprint) (Other academic).
    Abstract [en]

    We address the problem of segmenting and retrieving word images in collections of historical manuscripts given a text query. This is commonly referred to as "word spotting". To this end, we first propose an end-to-end trainable model based on deep neural networks that we dub Ctrl-F-Net. The model simultaneously generates region proposals and embeds them into a word embedding space, wherein a search is performed. We further introduce a simplified version called Ctrl-F-Mini. It is faster with similar performance, though it is limited to more easily segmented manuscripts. We evaluate both models on common benchmark datasets and surpass the previous state of the art. Finally, in collaboration with historians, we employ Ctrl-F-Net to search within a large manuscript collection of over 100 thousand pages, written across two centuries. With only 11 training pages, we enable large-scale data collection in manuscript-based historical research, speeding up data collection and increasing the number of manuscripts processed by orders of magnitude. Given the time-consuming manual work required to study old manuscripts in the humanities, quick and robust tools for word spotting have the potential to revolutionise domains like history, religion and language.

  • 1762.
    Wilkinson, Tomas
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion.
    Nettelblad, Carl
    Bootstrapping Weakly Supervised Segmentation-free Word Spotting through HMM-based Alignment. Manuscript (preprint) (Other academic).
    Abstract [en]

    Recent work in word spotting in handwritten documents has yielded impressive results. Yet this progress has largely been made by supervised learning systems which are dependent on manually annotated data, making deployment to new collections a significant effort. In this paper we propose an approach utilising transcriptions without bounding box annotations to train segmentation-free word spotting models, given a model partially trained with full annotations. This is done through an alignment procedure based on hidden Markov models. This model can create a tentative mapping between word region proposals and the transcriptions to automatically create additional weakly annotated training data. Using as little as 1% and 10% of the fully annotated training sets for partial convergence, we automatically annotate the remaining training data and successfully train using it. Across all datasets, our approach comes within a few mAP% of the performance of a model trained with only full ground truth. We believe that this will be a significant advance towards a more general use of word spotting, since digital transcription data will already exist for parts of many collections of interest.

  • 1763. Wiltschi, Klaus
    et al.
    Pinz, Axel
    Lindeberg, Tony
    KTH, Tidigare Institutioner (före 2005), Numerisk analys och datalogi, NADA.
    Classification of Carbide Distributions using Scale Selection and Directional Distributions (1997). In: Proc. 4th International Conference on Image Processing: ICIP'97, 1997, Vol. II, pp. 122-125. Conference paper (Refereed).
    Abstract [en]

    This paper presents an automatic system for steel quality assessment, by measuring textural properties of carbide distributions. In current steel inspection, specially etched and polished steel specimen surfaces are classified manually under a light microscope, by comparisons with a standard chart. This procedure is basically two-dimensional, reflecting the size of the carbide agglomerations and their directional distribution. To capture these textural properties in terms of image features, we first apply a rich set of image-processing operations, including mathematical morphology, multi-channel Gabor filtering, and the computation of texture measures with automatic scale selection in linear scale-space. Then, a feature selector is applied to a 40-dimensional feature space, and a classification scheme is defined, which on a sample set of more than 400 images has classification performance values comparable to those of human metallographers. Finally, a fully automatic inspection system is designed, which actively selects the most salient carbide structure on the specimen surface for subsequent classification. The feasibility of the overall approach for future use in the production process is demonstrated by a prototype system. It is also shown how the presented classification scheme allows for the definition of a new reference chart in terms of quantitative measures.

  • 1764.
    Winkler Pettersson, Lars
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Människa-datorinteraktion.
    Kjellin, Andreas
    Uppsala universitet, Humanistisk-samhällsvetenskapliga vetenskapsområdet, Samhällsvetenskapliga fakulteten, Institutionen för informationsvetenskap, Människa-datorinteraktion.
    Lind, Mats
    Uppsala universitet, Humanistisk-samhällsvetenskapliga vetenskapsområdet, Samhällsvetenskapliga fakulteten, Institutionen för informationsvetenskap, Människa-datorinteraktion.
    Seipel, Stefan
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Centrum för bildanalys. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Evaluating Collaborative Visualization of Spatial Data in Multi-Viewer Displays (2008). Manuscript (preprint) (Other academic).
  • 1765.
    Winkler Pettersson, Lars
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Människa-datorinteraktion.
    Kjellin, Andreas
    Uppsala universitet, Humanistisk-samhällsvetenskapliga vetenskapsområdet, Samhällsvetenskapliga fakulteten, Institutionen för informatik och media, Människa-datorinteraktion.
    Lind, Mats
    Uppsala universitet, Humanistisk-samhällsvetenskapliga vetenskapsområdet, Samhällsvetenskapliga fakulteten, Institutionen för informatik och media, Människa-datorinteraktion.
    Seipel, Stefan
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Centrum för bildanalys. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    On the role of visual references in collaborative visualization (2010). In: Information Visualization, ISSN 1473-8716, E-ISSN 1473-8724, Vol. 9, no. 2, pp. 98-114. Article in journal (Refereed).
  • 1766.
    Winkler Pettersson, Lars
    et al.
    Informationsteknologi, Uppsala universitet.
    Kjellin, Andreas
    Informationsvetenskap, Uppsala universitet.
    Lind, Mats
    Informationsvetenskap, Uppsala universitet.
    Seipel, Stefan
    Informationsvetenskap, Uppsala universitet.
    On the role of visual references in collaborative visualization (2009). In: Information Visualization, ISSN 1473-8716, E-ISSN 1473-8724, Vol. 9, no. 2, pp. 98-114. Article in journal (Refereed).
    Abstract [en]

    Multi-Viewer Display Environments (MVDE) provide unique opportunities to present personalized information to several users concurrently in the same physical display space. MVDEs can support correct 3D visualizations to multiple users, present correctly oriented text and symbols to all viewers and allow individually chosen subsets of information in a shared context. MVDEs aim at supporting collaborative visual analysis, and when used to visualize disjoint information in partitioned visualizations they even necessitate collaboration. When solving visual tasks collaboratively in a MVDE, overall performance is affected not only by the inherent effects of the graphical presentation but also by the interaction between the collaborating users.

    We present results from an empirical study where we compared views lacking shared visual references, with information split into disjoint sets, to views with mutually shared information. Potential benefits of 2D and 3D visualizations in a collaborative task were investigated, as were the effects of partitioning visualizations, in terms of task performance, interaction behavior and clutter reduction. In our study of a collaborative task that required only a minimum of information to be shared, we found that partitioned views with a lack of shared visual references were significantly less efficient than integrated views. However, the study showed that subjects were equally capable of solving the task at low error levels in partitioned and integrated views. An explorative analysis revealed that the amount of visual clutter was heavily reduced in partitioned visualizations, whereas verbal and deictic communication between subjects increased. It also showed that the type of visualization (2D/3D) strongly affects interaction behavior. An interesting result is that collaboration on complex geo-time visualizations is actually as efficient in 2D as in 3D.

  • 1767.
    Winkler Pettersson, Lars
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Människa-datorinteraktion.
    Seipel, Stefan
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Centrum för bildanalys. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Collaborative Pixel-Accurate Interaction with PixelActiveSurface (2007). Conference paper (Other academic).
  • 1768.
    Wood, John
    Linköpings universitet, Institutionen för systemteknik, Bildbehandling. Linköpings universitet, Tekniska högskolan.
    Statistical Background Models with Shadow Detection for Video Based Tracking (2007). Independent thesis Basic level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    A common problem when using background models to segment moving objects from video sequences is that the shadows cast by the objects usually differ significantly from the background and therefore get detected as foreground. This causes several problems when extracting and labeling objects, such as object shape distortion and several objects merging together. The purpose of this thesis is to explore various possibilities to handle this problem.

    Three methods for statistical background modeling are reviewed. All methods work on a per pixel basis, the first is based on approximating the median, the next on using Gaussian mixture models, and the last one is based on channel representation. It is concluded that all methods detect cast shadows as foreground.

    A study of existing methods to handle cast shadows has been carried out in order to gain knowledge on the subject and get ideas. A common approach is to transform the RGB-color representation into a representation that separates color into intensity and chromatic components in order to determine whether or not newly sampled pixel-values are related to the background. The color spaces HSV, IHSL, CIELAB, YCbCr, and a color model proposed in the literature (Horprasert et al.) are discussed and compared for the purpose of shadow detection. It is concluded that Horprasert's color model is the most suitable for this purpose.

    The thesis ends with a proposal of a method to combine background modeling using Gaussian mixture models with shadow detection using Horprasert's color model. It is concluded that, while not perfect, such a combination can be very helpful in segmenting objects and detecting their cast shadow.

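    The combination proposed in the thesis above, a per-pixel Gaussian mixture background model plus a shadow test, has a rough off-the-shelf analogue in OpenCV's MOG2 subtractor, whose built-in shadow detection is a related but different test than Horprasert's color model. The sketch below shows only that generic analogue; the video filename is a hypothetical placeholder.

    import cv2

    cap = cv2.VideoCapture("input.avi")   # hypothetical input video
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)    # per-pixel GMM foreground mask
        foreground = mask == 255          # confident foreground
        shadow = mask == 127              # pixels flagged as cast shadow
    cap.release()
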
  • 1769.
    Wretstam, Oskar
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion.
    Infrared image-based modeling and rendering (2017). Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis (Degree project).
    Abstract [sv]

    Image-based modeling with visual images has undergone major development during the early parts of the 21st century. Given a sequence of ordinary two-dimensional images of a scene from different perspectives, the goal is to reconstruct a three-dimensional model. In this thesis, a system for automated uncalibrated scene reconstruction from infrared images is implemented and tested. Uncalibrated reconstruction refers to the fact that camera parameters, such as focal length and focus, are unknown, and only images are used as input to the system. A major application area for thermal cameras is inspection. Temperature differences in an image can indicate, for example, poor insulation or high friction. If an automated system can create a three-dimensional model of a scene, it can help simplify inspection and provide a better overview. Thermal images generally have lower resolution, less contrast and less high-frequency content than visual images. These properties of infrared images complicate the extraction and matching of points in the images, which are important steps in the reconstruction. To address this, the images are preprocessed before reconstruction, and a selection of preprocessing methods has been tested. Reconstruction from thermal images also places additional demands on the reconstruction, since it is important to preserve the thermal accuracy of the images in the model. Three main results are obtained in this thesis. First, it is possible to compute camera calibration and pose, as well as a sparse reconstruction, from an infrared image sequence using the implementation proposed here. Second, the correlation of the temperature measurements in the images used for the reconstruction is presented and analyzed. Finally, the tested preprocessing does not show an improvement of the reconstruction proportional to the increased computational complexity.

  • 1770. Wyatt, Jeremy L.
    et al.
    Aydemir, Alper
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Brenner, Michael
    Hanheide, Marc
    Hawes, Nick
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kristan, Matej
    Kruijff, Geert-Jan M.
    Lison, Pierre
    Pronobis, Andrzej
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Sjöö, Kristoffer
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Vrecko, Alen
    Zender, Hendrik
    Zillich, Michael
    Skocaj, Danijel
    Self-Understanding and Self-Extension: A Systems and Representational Approach (2010). In: IEEE Transactions on Autonomous Mental Development, ISSN 1943-0604, Vol. 2, no. 4, pp. 282-303. Article in journal (Refereed).
    Abstract [en]

    There are many different approaches to building a system that can engage in autonomous mental development. In this paper, we present an approach based on what we term self-understanding, by which we mean the explicit representation of and reasoning about what a system does and does not know, and how that knowledge changes under action. We present an architecture and a set of representations used in two robot systems that exhibit a limited degree of autonomous mental development, which we term self-extension. The contributions include: representations of gaps and uncertainty for specific kinds of knowledge, and a goal management and planning system for setting and achieving learning goals.

  • 1771.
    Wzorek, Mariusz
    Linköpings universitet, Institutionen för datavetenskap, UASTECH – Teknologier för autonoma obemannade flygande farkoster. Linköpings universitet, Tekniska högskolan.
    Selected Aspects of Navigation and Path Planning in Unmanned Aircraft Systems (2011). Licentiate thesis, monograph (Other academic).
    Abstract [en]

    Unmanned aircraft systems (UASs) are an important future technology with early generations already being used in many areas of application encompassing both military and civilian domains. This thesis proposes a number of integration techniques for combining control-based navigation with more abstract path planning functionality for UASs. These techniques are empirically tested and validated using an RMAX helicopter platform used in the UASTechLab at Linköping University. Although the thesis focuses on helicopter platforms, the techniques are generic in nature and can be used in other robotic systems.

    At the control level, a navigation task is executed by a set of control modes. A framework based on the abstraction of hierarchical concurrent state machines for the design and development of hybrid control systems is presented. The framework is used to specify reactive behaviors and to sequence control modes. Selected examples of control systems deployed on UASs are presented. Collision-free paths executed at the control level are generated by path planning algorithms. We propose a path replanning framework that extends the existing path planners to allow dynamic repair of flight paths when new obstacles or no-fly zones obstructing the current flight path are detected. Additionally, a novel approach to selecting the best path repair strategy based on a machine learning technique is presented. A prerequisite for safe navigation in a real-world environment is an accurate geometrical model. As a step towards building accurate 3D models onboard UASs, initial work on the integration of a laser range finder with a helicopter platform is also presented.

    Combination of the techniques presented provides another step towards building comprehensive and robust navigation systems for future UASs.

  • 1772.
    Wählby, Carolina
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Centrum för bildanalys.
    Algorithms for Applied Digital Image Cytometry (2003). Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    Image analysis can provide genetic as well as protein-level information from fluorescence-stained fixed or living cells without losing tissue morphology. Analysis of spatial, spectral, and temporal distribution of fluorescence can reveal important information on the single-cell level. This is in contrast to most other methods for cell analysis, which do not account for inter-cellular variation. Flow cytometry enables single-cell analysis, but tissue morphology is lost in the process, and temporal events cannot be observed.

    The need for reproducibility, speed and accuracy calls for computerized methods for cell image analysis, i.e., digital image cytometry, which is the topic of this thesis.

    Algorithms for cell-based screening are presented and applied to evaluate the effect of insulin on translocation events in single cells. This type of algorithms could be the basis for high-throughput drug screening systems, and have been developed in close cooperation with biomedical industry.

    Image based studies of cell cycle proteins in cultured cells and tissue sections show that cyclin A has a well preserved expression pattern while the expression pattern of cyclin E is disturbed in tumors. The results indicate that analysis of cyclin E expression provides additional valuable information for cancer prognosis, not visible by standard tumor grading techniques.

    Complex chains of events and interactions can be visualized by simultaneous staining of different proteins involved in a process. A combination of image analysis and staining procedures that allow sequential staining and visualization of large numbers of different antigens in single cells is presented. Preliminary results show that at least six different antigens can be stained in the same set of cells.

    All image cytometry requires robust segmentation techniques. Clustered objects, background variation, as well as internal intensity variations complicate the segmentation of cells in tissue. Algorithms for segmentation of 2D and 3D images of cell nuclei in tissue by combining intensity, shape, and gradient information are presented.

    The algorithms and applications presented show that fast, robust, and automatic digital image cytometry can increase the throughput and power of image based single cell analysis.

  • 1773.
    Wählby, Carolina
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Science for Life Laboratory, SciLifeLab. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    High throughput phenotyping of model organisms (2012). In: BioImage Informatics 2012 / [ed] Fuhui Long, Ivo F. Sbalzarini, Pavel Tomancak and Michael Unser, Dresden, Germany, 2012, pp. 45-45. Conference paper (Refereed).
    Abstract [en]

    Microscopy has emerged as one of the most powerful and informative ways to analyze cell-based high-throughput screening samples in experiments designed to uncover novel drugs and drug targets. However, many diseases and biological pathways can be better studied in whole animals – particularly diseases that involve organ systems and multi-cellular interactions, such as metabolism, infection, vascularization, and development. Two model organisms compatible with high-throughput phenotyping are the 1 mm long roundworm C. elegans and the transparent embryo of zebrafish (Danio rerio). C. elegans is tractable as it can be handled using similar robotics, multi-well plates, and flow-sorting systems as are used for high-throughput screening of cells. The worm is also transparent throughout its lifecycle and is attractive as a model for genetic functions as its genes can be turned off by RNA interference. Zebrafish embryos have also proved to be a vital model organism in many fields of research, including organismal development, cancer, and neurobiology. Zebrafish, being vertebrates, exhibit features common to phylogenetically higher organisms such as a true vasculature and central nervous system.

    Basically any phenotypic change that can be visually observed (in untreated or stained worms and fish) can also be imaged. However, visual assessment of phenotypic variation is tedious and prone to error as well as observer bias. Screening in high throughput limits image resolution and time-lapse information. Still, the images are typically rich in information, and the number of images for a standard screen often exceeds 100 000, ruling out visual inspection. Generation of automated image analysis platforms will increase the throughput of data analysis, improve the robustness of phenotype scoring, and allow for reliable application of statistical metrics for evaluating assay performance and identifying active compounds.

    We have developed a platform for automated analysis of C. elegans assays, and are currently developing tools for analysis of zebrafish embryos. Our worm analysis tools, collected in the WormToolbox, can identify individual worms even as they cross and overlap, and quantify a large number of features, including mapping of reporter protein expression patterns to the worm anatomy. We have evaluated the tools on screens for novel treatments of infectious disease and genetic perturbations affecting fat metabolism. The WormToolbox is part of the free and open source CellProfiler software, which also includes methods for image assay quality control and feature selection by machine learning.

  • 1774.
    Wählby, Carolina
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Centrum för bildanalys. Uppsala universitet, Medicinska och farmaceutiska vetenskapsområdet, Medicinska fakulteten, Institutionen för genetik och patologi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Image analysis in fluorescence microscopy: the human eye is not enough (2008). In: Medicinteknikdagar 2008: Mötesplats för aktörer inom forskning, sjukvård och industri, Proceedings, Medicinteknikdagarna 2008: Nils Löfgren, Högskolan i Borås, 2008. Conference paper (Other (popular science, discussion, etc.)).
  • 1775.
    Wählby, Carolina
    et al.
    Uppsala universitet, Fakultetsövergripande enheter, Centrum för bildanalys. Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Bengtsson, Ewert
    Uppsala universitet, Fakultetsövergripande enheter, Centrum för bildanalys. Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Segmentation of cell nuclei in tissue by combining seeded watersheds with gradient information (2003). In: Proceedings of SCIA-03: Scandinavian Conference on Image Analysis, 2003, pp. 408-414. Conference paper (Refereed).
    Abstract [en]

    This paper deals with the segmentation of cell nuclei in tissue. We present a region-based segmentation method where seeds representing object- and background-pixels are created by morphological filtering. The seeds are then used as a starting point for watershed segmentation of the gradient magnitude of the original image. Over-segmented objects are thereafter merged based on the gradient magnitude between the adjacent objects. The method was tested on a total of 726 cell nuclei in 7 images, and 95% correct segmentation was achieved.

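    The core of the method summarised above, a seeded watershed on the gradient magnitude, can be sketched with scikit-image. This is a generic illustration under assumed seeding choices (Otsu threshold, fixed erosion/dilation radii, a built-in sample image), not the authors' code, and the merging of over-segmented objects is not shown.

    from scipy import ndimage as ndi
    from skimage import data, filters, morphology, segmentation

    image = data.coins()                                # placeholder grayscale image
    gradient = filters.sobel(image)                     # gradient magnitude of the image

    # Object seeds from a conservative (eroded) threshold; background seed around them.
    objects = image > filters.threshold_otsu(image)
    objects = morphology.erosion(objects, morphology.disk(5))
    markers, _ = ndi.label(objects)
    background = ~morphology.dilation(objects, morphology.disk(15))
    markers[background] = markers.max() + 1             # single label for the background seed

    labels = segmentation.watershed(gradient, markers)  # seeded watershed segmentation
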
  • 1776.
    Wählby, Carolina
    et al.
    Uppsala universitet, Fakultetsövergripande enheter, Centrum för bildanalys. Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Bengtsson, Ewert
    Uppsala universitet, Fakultetsövergripande enheter, Centrum för bildanalys. Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Watershed techniques for segmentation in image cytometry (2003). In: Proceedings of the 1st International Cytomics Conference: Newport, Wales, United Kingdom, 2003. Conference paper (Other academic).
  • 1777.
    Wählby, Carolina
    et al.
    Uppsala universitet, Fakultetsövergripande enheter, Centrum för bildanalys. Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Erlandsson, Fredrik
    Bengtsson, Ewert
    Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Zetterberg, Anders
    Sequential immunofluorescence staining and image analysis for detection of large numbers of antigens in individual cell nuclei (2002). In: Cytometry, ISSN 0196-4763, Vol. 47, no. 1, pp. 32-41. Article in journal (Refereed).
    Abstract [en]

    Background

    Visualization of more than one antigen by multicolor immunostaining is often desirable or even necessary to explore spatial and temporal relationships of functional significance. Previously presented staining protocols have been limited to the visualization of three or four antigens.

    Methods

    Immunofluorescence staining was performed both on slices of formalin-fixed tissue and on cultured cells, and the stained samples were imaged by fluorescence microscopy. The primary and secondary antibodies, as well as the fluorophores, were thereafter removed using a combination of denaturation and elution techniques. After removal of the fluorescence stain, a new immunofluorescence staining was performed, visualizing a new set of antigens. The procedure was repeated up to three times. A method for image registration combined with segmentation, extraction of data, and cell classification was developed for efficient and objective analysis of the image data.

    Results

    The results show that immunofluorescence stains in many cases can be repeatedly removed without major effects on the antigenicity of the sample.

    Conclusions

    The concentration of at least six different antigens in each cell can thus be measured semiquantitatively using sequential immunofluorescence staining and the described image analysis techniques. The number of antigens that can be visualized in a single sample is considerably increased by the presented protocol.

  • 1778.
    Wählby, Carolina
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Centrum för bildanalys. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Erlandsson, Fredrik
    Lindblad, Joakim
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Centrum för bildanalys. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Zetterberg, Anders
    Bengtsson, Ewert
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Centrum för bildanalys. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Analysis of cells using image data from sequential immunofluorescence staining experiments (2001). In: 5th Korea-Germany Joint Workshop on Advanced Medical Image Processing, Seoul, Korea, 2001. Conference paper (Other academic).
  • 1779.
    Wählby, Carolina
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Centrum för bildanalys. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Erlandsson, Fredrik
    Nyberg, Karl
    Lindblad, Joakim
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Centrum för bildanalys. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Zetterberg, Anders
    Bengtsson, Ewert
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Centrum för bildanalys. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Multiple tissue antigen analysis by sequential immunofluorescence staining and multi-dimensional image analysis (2001). In: Proceedings of SCIA-01 (Scandinavian Conference on Image Analysis), 2001, pp. 25-31. Conference paper (Refereed).
    Abstract [en]

    This paper presents a novel method for sequential immunofluorescence staining, which, in combination with 3D image registration and segmentation, can be used to increase the number of antigens that can be observed simultaneously in single cells in tissue sections. Visualization of more than one antigen by multicolor immunostaining is often desirable or even necessary, both for quantitative studies and to explore spatial relationships of functional significance. Sequential staining, meaning repeated application and removal of fluorescence markers, greatly increases the number of different antigens that can be visualized and quantified in single cells using digital imaging fluorescence microscopy. Quantification and efficient objective analysis of the image data requires digital image analysis. A method for 3D image registration combined with 2D and 3D segmentation and 4D extraction of data is described.

  • 1780.
    Wählby, Carolina
    et al.
    Uppsala universitet, Fakultetsövergripande enheter, Centrum för bildanalys. Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Erlandsson, Fredrik
    Zetterberg, Anders
    Bengtsson, Ewert
    Uppsala universitet, Fakultetsövergripande enheter, Centrum för bildanalys. Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Multi-dimensional image analysis of sequential immunofluorescence staining (2001). In: 7th European Society for Analytical Cellular Pathology Congress (ESACP 2001), Caen, France, 2001, pp. 61-. Conference paper (Other academic)
  • 1781.
    Wählby, Carolina
    et al.
    Uppsala universitet, Medicinska och farmaceutiska vetenskapsområdet, Medicinska fakulteten, Institutionen för genetik och patologi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Centrum för bildanalys.
    Karlsson, Patrick
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Centrum för bildanalys. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Henriksson, Sara
    Uppsala universitet, Medicinska och farmaceutiska vetenskapsområdet, Medicinska fakulteten, Institutionen för genetik och patologi.
    Larsson, Chatarina
    Uppsala universitet, Medicinska och farmaceutiska vetenskapsområdet, Medicinska fakulteten, Institutionen för genetik och patologi.
    Nilsson, Mats
    Uppsala universitet, Medicinska och farmaceutiska vetenskapsområdet, Medicinska fakulteten, Institutionen för genetik och patologi.
    Bengtsson, Ewert
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Centrum för bildanalys. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Finding cells, finding molecules, finding patterns (2008). In: International Journal of Signal and Imaging Systems Engineering, ISSN 1748-0698, Vol. 1, no. 1, pp. 11-17. Journal article (Refereed)
    Abstract [en]

    Many modern molecular labelling techniques result in bright point signals. Signals from molecules that are detected directly inside a cell can be captured by fluorescence microscopy. Signals representing different types of molecules may be randomly distributed in the cells or show systematic patterns, indicating that the corresponding molecules have specific, non-random localisations and functions in the cell. Assessing this information requires high speed robust image segmentation followed by signal detection, and finally, pattern analysis. We present and discuss these types of methods and show an example of how the distribution of different variants of mitochondrial DNA can be analysed.
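
    A minimal sketch of the point-signal detection step, assuming 2D images, an Otsu-threshold cell mask and scikit-image's Laplacian-of-Gaussian blob detector; thresholds and sigmas are illustrative and this is not the authors' pipeline.

```python
# Hedged sketch: detect bright point-like signals and assign them to segmented cells.
import numpy as np
from skimage.feature import blob_log
from skimage.filters import threshold_otsu
from skimage.measure import label

def detect_signals(cell_channel: np.ndarray, signal_channel: np.ndarray):
    # Crude cell segmentation by global thresholding (stand-in for a robust method).
    cells = label(cell_channel > threshold_otsu(cell_channel))
    # Point signals as small bright blobs; each row of 'blobs' is (y, x, sigma).
    blobs = blob_log(signal_channel, min_sigma=1, max_sigma=3, threshold=0.05)
    coords = blobs[:, :2].astype(int)
    inside = cells[coords[:, 0], coords[:, 1]] > 0          # keep signals inside a cell
    kept = coords[inside]
    return kept, cells[kept[:, 0], kept[:, 1]]              # coordinates and cell labels
```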

  • 1782.
    Wählby, Carolina
    et al.
    Uppsala universitet, Fakultetsövergripande enheter, Centrum för bildanalys. Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Karlsson, Patrick
    Uppsala universitet, Fakultetsövergripande enheter, Centrum för bildanalys. Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Thorlin, Thorleif
    Althoff, Karin
    Degerman, Johan
    Bengtsson, Ewert
    Uppsala universitet, Fakultetsövergripande enheter, Centrum för bildanalys. Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Gustavsson, Tomas
    Time-lapse microscopy and image analysis for tracking stem cell migration (2004). In: Proceedings of the Swedish Symposium on Image Analysis SSBA 2004, 2004, pp. 118-121. Conference paper (Other academic)
  • 1783.
    Wählby, Carolina
    et al.
    Uppsala universitet, Fakultetsövergripande enheter, Centrum för bildanalys. Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Nyström, Ingela
    Uppsala universitet, Fakultetsövergripande enheter, Centrum för bildanalys. Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Bengtsson, Ewert
    Uppsala universitet, Fakultetsövergripande enheter, Centrum för bildanalys. Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Robust methods for image segmentation and measurements (2003). In: Proceedings for Modern Methods for Quantitative Metallography, 2003. Conference paper (Refereed)
  • 1784.
    Wählby, Carolina
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Centrum för bildanalys. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Sintorn, Ida-Maria
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Centrum för bildanalys. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Erlandsson, Fredrik
    Borgefors, Gunilla
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Centrum för bildanalys. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Bengtsson, Ewert
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Centrum för bildanalys. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Combining intensity, edge, and shape information for 2D and 3D segmentation of cell nuclei in tissue sections (2004). In: Journal of Microscopy, ISSN 0022-2720, E-ISSN 1365-2818, Vol. 215, no. 1, pp. 67-76. Journal article (Refereed)
    Abstract [en]

    We present a region-based segmentation method in which seeds representing both object and background pixels are created by combining morphological filtering of both the original image and the gradient magnitude of the image. The seeds are then used as starting points for watershed segmentation of the gradient magnitude image. The fully automatic seeding is done in a generous fashion, so that at least one seed will be set in each foreground object. If more than one seed is placed in a single object, the watershed segmentation will lead to an initial over-segmentation, i.e. a boundary is created where there is no strong edge. Thus, the result of the initial segmentation is further refined by merging based on the gradient magnitude along the boundary separating neighbouring objects. This step also makes it easy to remove objects with poor contrast. As a final step, clusters of nuclei are separated, based on the shape of the cluster. The number of input parameters to the full segmentation procedure is only five. These parameters can be set manually using a test image and thereafter be used on a large number of images created under similar imaging conditions. This automated system was verified by comparison with manual counts from the same image fields. About 90% correct segmentation was achieved for two- as well as three-dimensional images.
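
    A minimal sketch of seeded watershed segmentation in the same spirit, assuming a 2D fluorescence image and scikit-image: foreground seeds from h-maxima of the smoothed image, a dark-background seed, and watershed on the gradient magnitude. The merging and cluster-splitting refinements described above are omitted, and parameter values are illustrative rather than those of the paper.

```python
# Hedged sketch: seeded watershed on the gradient magnitude of a nuclei image.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gaussian, sobel, threshold_otsu
from skimage.morphology import h_maxima
from skimage.segmentation import watershed

def segment_nuclei(image: np.ndarray, h: float = 0.1, sigma: float = 2.0) -> np.ndarray:
    smoothed = gaussian(image, sigma=sigma)
    gradient = sobel(smoothed)                          # edge strength
    fg_seeds, n_fg = ndi.label(h_maxima(smoothed, h))   # one seed region per bright peak
    background = smoothed < 0.5 * threshold_otsu(smoothed)
    markers = fg_seeds.copy()
    markers[background] = n_fg + 1                      # single background seed label
    labels = watershed(gradient, markers)
    labels[labels == n_fg + 1] = 0                      # drop the background region
    return labels
```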

  • 1785.
    Wählby (née Linnman), Carolina
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Centrum för bildanalys. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Lindblad, Joakim
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Centrum för bildanalys. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Vondrus, Mikael
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Centrum för bildanalys. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Jarkrans, Torsten
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Centrum för bildanalys. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Bengtsson, Ewert
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Centrum för bildanalys. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Björkesten, Lennart
    Automatic cytoplasm segmentation of fluorescence labelled cells (2000). In: Symposium on Image Analysis - SSAB 2000, 2000, pp. 29-32. Conference paper (Other academic)
  • 1786.
    Wälivaara, Marcus
    Linköpings universitet, Institutionen för systemteknik, Datorseende.
    General Object Detection Using Superpixel Preprocessing (2017). Independent thesis, Advanced level (Master's degree), 20 credits / 30 HE credits. Student thesis (Degree project)
    Abstract [en]

    The objective of this master’s thesis work is to evaluate the potential benefit of a superpixel preprocessing step for general object detection in a traffic environment. The various effects of different superpixel parameters on object detection performance, as well as the benefit of including depth information when generating the superpixels are investigated.

    In this work, three superpixel algorithms are implemented and compared, including a proposal for an improved version of the popular Simple Linear Iterative Clustering (SLIC) superpixel algorithm. The proposed improved algorithm utilises a coarse-to-fine approach which outperforms the original SLIC for high-resolution images. An object detection algorithm is also implemented and evaluated. The algorithm makes use of depth information obtained by a stereo camera to extract superpixels corresponding to foreground objects in the image. Hierarchical clustering is then applied, with the segments formed by the clustered superpixels indicating potential objects in the input image.

    The object detection algorithm managed to detect on average 58% of the objects present in the chosen dataset. It performed especially well for detecting pedestrians or other objects close to the car. Altering the density distribution of the superpixels in the image yielded an increase in detection rate, and could be achieved either with or without utilising depth information. It was also shown that the use of superpixels greatly reduces the amount of computation needed for the algorithm, indicating that a real-time implementation is feasible.
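
    A minimal sketch of the superpixel-plus-clustering idea, assuming an RGB image, a dense disparity map from the stereo pair, scikit-image SLIC and SciPy hierarchical clustering. The features, parameters and cut threshold are illustrative, and the thesis' coarse-to-fine SLIC variant is not reproduced.

```python
# Hedged sketch: SLIC superpixels, then hierarchical clustering of per-superpixel
# features (appearance, disparity, position) into object candidates.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from skimage.segmentation import slic

def object_candidates(image: np.ndarray, disparity: np.ndarray,
                      n_segments: int = 800, cut: float = 2.0) -> np.ndarray:
    sp = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    feats = []
    for s in range(sp.max() + 1):
        mask = sp == s
        ys, xs = np.nonzero(mask)
        feats.append([image[mask].mean(),           # crude appearance proxy
                      disparity[mask].mean(),       # depth cue from the stereo pair
                      ys.mean() / image.shape[0],   # normalised centroid row
                      xs.mean() / image.shape[1]])  # normalised centroid column
    # Agglomerative clustering of superpixels; each cluster is an object candidate.
    labels = fcluster(linkage(np.asarray(feats), method="ward"),
                      t=cut, criterion="distance")
    return labels[sp]                               # map cluster ids back to pixels
```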

  • 1787.
    Xiao, Yi
    et al.
    School of Engineering and Information Technology, The University of New South Wales, Canberra, ACT, Australia.
    Pham, Tuan D
    School of Engineering and Information Technology, The University of New South Wales, Canberra, ACT, Australia.
    Jia, Xiuping
    School of Engineering and Information Technology, The University of New South Wales, Canberra, ACT, Australia.
    Zhou, Xiaobo
    Centre for Biotechnology and Informatics, The Methodist Hospital Research Institute & Cornell University, Houston, TX, USA.
    Yan, Hong
    Department of Electronic Engineering, City University of Hong Kong, Hong Kong.
    Correlation-based cluster-space transform for major adverse cardiac event prediction (2010). In: IEEE International Conference on Systems, Man and Cybernetics (SMC), Institute of Electrical and Electronics Engineers (IEEE), 2010, pp. 2003-2007. Conference paper (Refereed)
    Abstract [en]

    This paper investigates the effect of pattern variation in protein profiles on the identification of disease-specific biomarkers. A correlation-based cluster-space transform is applied to mass spectral data for predicting major adverse cardiac events (MACE). Training and testing data are each transformed into cluster spaces by correlation-distance-based clustering. Data in a testing cluster that falls into a pair of training clusters is classified by a supervised classifier. Experimental results show that proteomic spectra of MACE which vary with certain patterns can be separated by the correlation-based clustering. The cluster-space transform yields better classification accuracy than the single-cluster-per-class method for separating diseased and healthy samples.
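
    A minimal sketch of one reading of the cluster-space transform, assuming spectra stored row-wise in NumPy arrays: training spectra are grouped by correlation-distance clustering, and every spectrum is then represented by its correlation to the cluster centroids before being passed to a standard classifier. The function and parameters are illustrative, not the authors' exact procedure.

```python
# Hedged sketch: correlation-distance clustering plus a cluster-space representation.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import cdist

def cluster_space(train_X: np.ndarray, test_X: np.ndarray, n_clusters: int = 4):
    # Hierarchical clustering of training spectra using correlation distance.
    Z = linkage(train_X, method="average", metric="correlation")
    labels = fcluster(Z, t=n_clusters, criterion="maxclust")
    centroids = np.vstack([train_X[labels == c].mean(axis=0)
                           for c in np.unique(labels)])
    # Cluster-space coordinates: correlation (1 - correlation distance) to each centroid.
    train_cs = 1.0 - cdist(train_X, centroids, metric="correlation")
    test_cs = 1.0 - cdist(test_X, centroids, metric="correlation")
    return train_cs, test_cs        # feed these into any supervised classifier
```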

  • 1788.
    Yan, Jeff
    et al.
    Linköpings universitet, Institutionen för datavetenskap, Databas och informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Bourquard, Aurelien
    MIT, MA 02139 USA.
    POSTER: Who was Behind the Camera? - Towards Some New Forensics (2017). In: CCS ’17: Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, Association for Computing Machinery, 2017, pp. 2595-2597. Conference paper (Refereed)
    Abstract [en]

    We motivate a new line of image forensics, and propose a novel approach to photographer identification, a rarely explored authorship attribution problem. A preliminary proof-of-concept study shows the feasibility of our method. Our contribution is a forensic method for photographer de-anonymisation, and the method also poses a novel privacy threat.

  • 1789. Yan, Xiaoyong
    et al.
    Minnhagen, Petter
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för fysik.
    Maximum Entropy, Word-Frequency, Chinese Characters, and Multiple Meanings (2015). In: PLoS ONE, ISSN 1932-6203, E-ISSN 1932-6203, Vol. 10, no. 5, article id e0125592. Journal article (Refereed)
    Abstract [en]

    The word-frequency distribution of a text written by an author is well accounted for by a maximum entropy distribution, the RGF (random group formation) prediction. The RGF distribution is completely determined by the a priori values of the total number of words in the text (M), the number of distinct words (N) and the number of repetitions of the most common word (k_max). It is here shown that this maximum entropy prediction also describes a text written in Chinese characters. In particular, it is shown that although the same Chinese text written in words and in Chinese characters has quite differently shaped distributions, both are nevertheless well predicted by their respective three a priori characteristic values. It is pointed out that this is analogous to the change in the shape of the distribution when translating a given text to another language. Another consequence of the RGF prediction is that taking a part of a long text will change the input parameters (M, N, k_max) and consequently also the shape of the frequency distribution. This is explicitly confirmed for texts written in Chinese characters. Since the RGF prediction has no system-specific information beyond the three a priori values (M, N, k_max), any specific language characteristic has to be sought in systematic deviations between the RGF prediction and the measured frequencies. One such systematic deviation is identified and, through a statistical information-theoretical argument and an extended RGF model, it is proposed that this deviation is caused by multiple meanings of Chinese characters. The effect is stronger for Chinese characters than for Chinese words. The relation between Zipf's law, the Simon model for texts and the present results is discussed.
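
    A minimal sketch of extracting the three a priori quantities (M, N, k_max) and the empirical frequency-of-frequencies from a tokenised text; the RGF formula itself is not reproduced here.

```python
# Hedged sketch: compute M, N, k_max and the empirical distribution P(k) from tokens.
from collections import Counter

def rgf_inputs(tokens):
    counts = Counter(tokens)                    # word -> number of occurrences k
    M = sum(counts.values())                    # total number of words in the text
    N = len(counts)                             # number of distinct words
    k_max = max(counts.values())                # repetitions of the most common word
    freq_of_freq = Counter(counts.values())     # k -> number of words occurring k times
    P = {k: n / N for k, n in sorted(freq_of_freq.items())}
    return M, N, k_max, P
```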

  • 1790. Yang, N.
    et al.
    Wang, W. -X
    KTH.
    Wang, F. -P
    Xue, B. -Y
    Wang, K.
    KTH.
    Road information change detection based on fractional integral and neighborhood FCM (2018). In: Chang'an Daxue Xuebao (Ziran Kexue Ban) / Journal of Chang'an University (Natural Science Edition), ISSN 1671-8879, Vol. 38, no. 2, pp. 103-111. Journal article (Refereed)
    Abstract [en]

    To improve the accuracy of road information change detection, a new detection method based on fractional integrals and a spatial-neighborhood fuzzy C-means (FCM) algorithm is presented. First, a difference image is generated by computing the gray-level difference of the two-date remote sensing images after registration and geometric correction. Then, a small fractional integral order is used to construct a denoising mask over eight directions (up, down, left, right and the four diagonals), and the fractional integral is applied to the difference image, which improves the signal-to-noise ratio (SNR) while preserving the edge and texture details of the image. Finally, FCM clustering combined with neighborhood spatial information is applied to the denoised difference image. The highest and lowest gray values of the difference image are selected as the initial cluster centers. Euclidean distances within the neighborhood are used to assign different weights, characterizing the degree of influence of neighboring pixels on the central pixel and eliminating invalid isolated points. The detection probability, false-alarm rate and missed-alarm rate of the algorithm were evaluated experimentally. The results show that the FCM road information change detection method based on fractional integrals and neighborhood spatial information can effectively extract road change information. With a fractional integral order of 0.2 and an FCM smoothing parameter of 2.5, the detection probability is 18% to 46% higher than that of the comparison algorithms, the false-alarm rate is 15% to 38% lower, and the missed-alarm rate is 3% to 7% lower. The algorithm achieves better results in suppressing noise and enhancing texture details. In particular, when the central pixel is noise, the introduction of neighborhood information means it is influenced by its normal neighbors, so the method can avoid misclassification by adjusting the membership automatically; it effectively suppresses the influence of neighborhood noise points on the classification of normal pixels and reduces the false-alarm rate.
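
    A minimal sketch of the clustering stage only: plain fuzzy C-means on the gray values of a difference image, with the two centers initialised at the minimum and maximum gray value as described above. The fractional-integral denoising and the neighborhood weighting are omitted, and parameters are illustrative.

```python
# Hedged sketch: two-class fuzzy C-means on a grey-level difference image.
import numpy as np

def fcm_difference(img_a: np.ndarray, img_b: np.ndarray, m: float = 2.0,
                   iters: int = 100, eps: float = 1e-5) -> np.ndarray:
    diff = np.abs(img_a.astype(float) - img_b.astype(float)).ravel()
    centres = np.array([diff.min(), diff.max()])        # "unchanged" and "changed"
    for _ in range(iters):
        d = np.abs(diff[:, None] - centres[None, :]) + 1e-12     # (n_pixels, 2)
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)                        # fuzzy memberships
        new_centres = (u ** m).T @ diff / (u ** m).sum(axis=0)   # weighted means
        converged = np.max(np.abs(new_centres - centres)) < eps
        centres = new_centres
        if converged:
            break
    changed = int(np.argmax(centres))                   # cluster with the larger centre
    return (u.argmax(axis=1) == changed).reshape(img_a.shape)
```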

  • 1791. Yeh, T.
    et al.
    Tollmar, Konrad
    MIT CSAIL, Cambridge.
    Darrell, T.
    Searching the Web with mobile images for location recognition (2004). In: Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Vol. 2, 2004, pp. 76-81. Conference paper (Refereed)
    Abstract [en]

    We describe an approach to recognizing location from mobile devices using image-based Web search. We demonstrate the usefulness of common image search metrics applied on images captured with a camera-equipped mobile device to find matching images on the World Wide Web or other general-purpose databases. Searching the entire Web can be computationally overwhelming, so we devise a hybrid image-and-keyword searching technique. First, image-search is performed over images and links to their source Web pages in a database that indexes only a small fraction of the Web. Then, relevant keywords on these Web pages are automatically identified and submitted to an existing text-based search engine (e.g. Google) that indexes a much larger portion of the Web. Finally, the resulting image set is filtered to retain images close to the original query. It is thus possible to efficiently search hundreds of millions of images that are not only textually related but also visually relevant. We demonstrate our approach on an application allowing users to browse Web pages matching the image of a nearby location.

  • 1792. Yeh, Tom
    et al.
    Grauman, Kristen
    Tollmar, Konrad
    Darrell, Trevor
    A picture is worth a thousand keywords: image-based object search on a mobile platform (2005). In: CHI ’05 extended abstracts on Human factors in computing systems, 2005, pp. 2025-2028. Conference paper (Refereed)
    Abstract [en]

    Finding information based on an object’s visual appearance is useful when specific keywords for the object are not known. We have developed a mobile image-based search system that takes images of objects as queries and finds relevant web pages by matching them to similar images on the web. Image-based search works well when matching full scenes, such as images of buildings or landmarks, and for matching objects when the boundary of the object in the image is available. We demonstrate the effectiveness of a simple interactive paradigm for obtaining a segmented object boundary, and show how a shape-based image matching algorithm can use the object outline to find similar images on the web.

  • 1793. Yeh, Tom
    et al.
    Tollmar, Konrad
    Darrell, Trevor
    IDeixis: image-based Deixis for finding location-based information (2004). In: CHI ’04 extended abstracts on Human factors in computing systems, 2004, pp. 781-782. Conference paper (Refereed)
    Abstract [en]

    We demonstrate an image-based approach to specifying location and finding location-based information from camera-equipped mobile devices. We introduce a point-by-photograph paradigm, where users can specify a location simply by taking pictures. Our technique uses content-based image retrieval methods to search the web or other databases for matching images and their source pages to find relevant location-based information. In contrast to conventional approaches to location detection, our method can refer to distant locations and does not require any physical infrastructure beyond mobile internet service. We have developed a prototype on a camera phone and conducted user studies to demonstrate the efficacy of our approach compared to other alternatives.

  • 1794.
    Yu, Donggang
    et al.
    University of Newcastle, NSW 2308, Australia.
    Jin, Jesse S
    University of Newcastle, NSW 2308, Australia.
    Luo, Suhuai
    University of Newcastle, NSW 2308, Australia.
    Lai, Wei
    University of Technology Hawthorn, VIC 3122, Australia.
    Park, Mira
    University of Newcastle, NSW 2308, Australia.
    Pham, Tuan D
    The University of New South Wales, Canberra, ACT 2600, Australia.
    Shape analysis and recognition based on skeleton and morphological structure (2010). In: 5th European Conference on Colour in Graphics, Imaging, and Vision / 12th International Symposium on Multispectral Colour Science, 2010, pp. 118-123. Conference paper (Refereed)
    Abstract [en]

    This paper presents a novel and effective method for shape analysis and recognition based on the skeleton and morphological structure. A series of preprocessing algorithms, smooth following and linearization, is introduced, and a series of morphological structural points of the image contour is extracted and merged. A series of basic shapes and a main shape of the object image are described and segmented based on the skeleton and morphological structure. The object shape is efficiently analyzed and recognized based on the extracted series of basic shapes and the main shape. Compared with other methods, the proposed method does not require a training set. The new method can also be used to analyze and recognize the structure of any shape, and places no special requirements on the processed image data set. The new method can be used in image analysis and intelligent recognition techniques, applications, systems and tools.

  • 1795.
    Yu, Donggang
    et al.
    University of Newcastle, NSW, Australia.
    Jin, Jesse S
    University of Newcastle, NSW, Australia.
    Luo, Suhuai
    University of Newcastle, NSW, Australia.
    Pham, Tuan D
    The University of New South Wales, Canberra, ACT, Australia.
    Lai, Wei
    Swinburne University of Technology, Hawthorn, VIC, Australia.
    Description, Recognition and Analysis of Biological Images (2010). In: CP1210, 2009 International Symposium on Computational Models for Life Sciences (CMLS ’09) / [ed] Tuan Pham; Xiaobo Zhou, American Institute of Physics (AIP), 2010, Vol. 1210, pp. 23-42. Conference paper (Other academic)
    Abstract [en]

    Description, recognition and analysis of biological images play an important role in helping humans describe and understand the related biological information. The color images are separated by color reduction. A new and efficient linearization algorithm is introduced based on criteria of the difference chain code. A series of critical points is obtained from the linearized lines. The series of curvature angle, linearity, maximum linearity, convexity, concavity and bend angle of the linearized lines is calculated from the starting line to the end line along all smoothed contours. This method can be used for shape description and recognition. The analysis, decision and classification of the biological images are based on the description of morphological structures, color information and prior knowledge, which are associated with each other. The efficiency of the algorithms is demonstrated through two applications. One application is the description, recognition and analysis of color flower images. The other concerns the dynamic description, recognition and analysis of cell-cycle images.
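
    A minimal sketch of the chain-code representation these methods build on: the Freeman 8-direction chain code of a closed contour and its difference code, assuming the contour is given as an ordered array of 8-connected (row, col) points. The smoothing and linearization criteria themselves are not reproduced.

```python
# Hedged sketch: Freeman chain code and difference chain code of a closed contour.
import numpy as np

# Map a (d_row, d_col) step between neighbouring contour points to a direction 0..7.
_DIRS = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
         (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

def chain_code(contour: np.ndarray) -> np.ndarray:
    """contour: (n, 2) array of (row, col) points, 8-connected and closed."""
    steps = np.roll(contour, -1, axis=0) - contour
    return np.array([_DIRS[(int(dr), int(dc))] for dr, dc in steps])

def difference_code(code: np.ndarray) -> np.ndarray:
    # Change of direction between successive steps, modulo 8.
    return (np.roll(code, -1) - code) % 8
```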

  • 1796.
    Yu, Donggang
    et al.
    Bioinformatics Applications Research Centre, James Cook University, Australia.
    Pham, Tuan D
    Bioinformatics Applications Research Centre, James Cook University, Australia.
    Image Pattern Recognition-Based Morphological Structure and Applications (2008). In: Pattern recognition technologies and applications: recent advances / [ed] Brijesh Verma; Michael Blumenstein, 2008, pp. 48-. Book chapter (Refereed)
    Abstract [en]

    This chapter describes a new pattern recognition method: pattern recognition-based morphological structure. First, smooth following and linearization are introduced based on difference chain codes. Second, morphological structural points are described in terms of smooth-followed contours and linearized lines, and the patterns of morphological structural points and their properties are given. Morphological structural points are the basic tools for pattern recognition-based morphological structure. Furthermore, we discuss how the morphological structure can be used to recognize and classify images. One application is document image processing and recognition: the analysis and recognition of broken handwritten digits. Another is the dynamic analysis and recognition of cell-cycle screening based on morphological structures. Finally, a conclusion is given, including advantages, disadvantages, and future research.

  • 1797.
    Yu, Donggang
    et al.
    James Cook University, Townsville, QLD 4811, Australia .
    Pham, Tuan D
    James Cook University, Townsville, QLD 4811, Australia .
    Yan, Hong
    City University of Hong Kong, Kowloon, Hong Kong .
    Lai, Wei
    Swinburne University of Technology, Melbourne, VIC 3122, Australia.
    Crane, Denis I
    Griffith University, Nathan, Qld 4111, Australia .
    Segmentation and reconstruction of cultured neuron skeleton (2007). In: Computational Models for Life Sciences - CMLS ’07, 2007, Vol. 952, pp. 21-30. Conference paper (Refereed)
    Abstract [en]

    One approach to investigating neural death is through systematic studies of the changing morphology of cultured brain neurons in response to cellular challenges. Image segmentation and reconstruction methods developed to date to analyze such changes have been limited by the low contrast of the cells. In this paper we present new algorithms that successfully circumvent these problems. The binarization method is based on logical analysis of grey-level and distance differences of the images. Spurious regions are detected and removed using a hierarchical window filter. The skeletons of the binary cell images are extracted. The extension directions and connection points of broken cell skeletons are automatically determined, and the broken neural skeletons are reconstructed. Spurious strokes are deleted based on prior knowledge of the cells. The efficacy of the developed algorithms is demonstrated here through a test on cultured brain neurons from newborn mice.
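
    A minimal sketch of the skeleton-extraction portion of such a pipeline using scikit-image: local thresholding, removal of small spurious regions, and thinning to a one-pixel-wide skeleton. The grey/distance-difference binarisation and the skeleton reconstruction steps are omitted, and the parameters are illustrative.

```python
# Hedged sketch: binarise a low-contrast neuron image and extract its skeleton.
import numpy as np
from skimage.filters import threshold_local
from skimage.morphology import remove_small_objects, skeletonize

def neuron_skeleton(image: np.ndarray, block: int = 51, min_size: int = 64) -> np.ndarray:
    # Local thresholding copes better with uneven illumination than a global cut.
    binary = image > threshold_local(image, block_size=block)
    binary = remove_small_objects(binary, min_size=min_size)   # drop spurious regions
    return skeletonize(binary)                                  # one-pixel-wide skeleton
```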

  • 1798.
    Yu, Lu
    et al.
    Northwestern Polytech Univ, Peoples R China; Univ Autonoma Barcelona, Spain.
    Zhang, Lichao
    Univ Autonoma Barcelona, Spain.
    van de Weijer, Joost
    Univ Autonoma Barcelona, Spain.
    Khan, Fahad
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    Cheng, Yongmei
    Northwestern Polytech Univ, Peoples R China.
    Alejandro Parraga, C.
    Univ Autonoma Barcelona, Spain.
    Beyond Eleven Color Names for Image Understanding (2018). In: Machine Vision and Applications, ISSN 0932-8092, E-ISSN 1432-1769, Vol. 29, no. 2, pp. 361-373. Journal article (Refereed)
    Abstract [en]

    Color description is one of the fundamental problems of image understanding. One of the popular ways to represent colors is by means of color names. Most existing work on color names focuses on only the eleven basic color terms of the English language. This could be limiting the discriminative power of these representations, and representations based on more color names are expected to perform better. However, there exists no clear strategy for choosing additional color names. We collect a dataset of 28 additional color names. To ensure that the resulting color representation has high discriminative power, we propose a method to order the additional color names according to their complementary nature with the basic color names. This allows us to compute color name representations with high discriminative power of arbitrary length. In the experiments we show that these new color name descriptors outperform the existing color name descriptor on the tasks of visual tracking, person re-identification and image classification.
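
    A minimal sketch of a colour-name-style descriptor: assign every pixel to its nearest colour prototype in CIELab space and histogram the assignments. The prototype list below is a toy stand-in; the paper instead learns probabilistic mappings for the 11 basic and 28 additional colour names, which is not reproduced here.

```python
# Hedged sketch: histogram of nearest-prototype colour assignments in CIELab space.
import numpy as np
from skimage.color import rgb2lab

# Toy prototypes (RGB in [0, 1]); purely illustrative placeholders.
_PROTOS_RGB = np.array([[0, 0, 0], [1, 1, 1], [1, 0, 0], [0, 1, 0],
                        [0, 0, 1], [1, 1, 0], [0.5, 0.5, 0.5]], dtype=float)
_PROTOS_LAB = rgb2lab(_PROTOS_RGB[None, :, :])[0]

def color_name_descriptor(image_rgb: np.ndarray) -> np.ndarray:
    lab = rgb2lab(image_rgb).reshape(-1, 3)
    # Distance of every pixel to every prototype, then a normalised assignment histogram.
    d = np.linalg.norm(lab[:, None, :] - _PROTOS_LAB[None, :, :], axis=2)
    hist = np.bincount(d.argmin(axis=1), minlength=len(_PROTOS_LAB)).astype(float)
    return hist / hist.sum()
```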

  • 1799. Yuan, Qilong
    et al.
    Chen, I-Ming
    Lembono, Teguh Santoso
    Landén, Simon Nelson
    KTH, Skolan för industriell teknik och management (ITM).
    Malmgren, Victor
    KTH, Skolan för industriell teknik och management (ITM).
    Strategy for robot motion and path planning in robot taping (2016). In: Frontiers of Mechanical Engineering, ISSN 2095-0233, Vol. 11, no. 2, pp. 195-203. Journal article (Refereed)
    Abstract [en]

    Covering objects with masking tape is a common surface-protection step in processes such as spray painting, plasma spraying and shot peening. Manual taping is tedious and labour-intensive. The taping process requires a correct surface-covering strategy and proper attachment of the masking tape for efficient surface protection. We have introduced an automatic robot taping system consisting of a robot manipulator, a rotating platform, a 3D scanner and specially designed taping end-effectors. This paper focuses on surface-covering strategies for different classes of geometries. Methods and corresponding taping tools are introduced for taping the following classes of surfaces: cylindrical/extended surfaces, freeform surfaces with no grooves, surfaces with grooves, and rotationally symmetric surfaces. A collision-avoidance algorithm is introduced for the robot taping manipulation. With further improvements in segmenting the surfaces of taped parts and in tape-cutting mechanisms, the taping tool and taping methodology can be combined into a practical taping package that assists humans in this tedious and time-consuming work.

  • 1800.
    Yuan, Weihao
    et al.
    Hong Kong Univ Sci & Technol, ECE, Robot Inst, Hong Kong, Peoples R China.
    Hang, Kaiyu
    Yale Univ, Mech Engn & Mat Sci, New Haven, CT USA.
    Kragic, Danica
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Centra, Centrum för autonoma system, CAS. KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Wang, Michael Y.
    Hong Kong Univ Sci & Technol, ECE, Robot Inst, Hong Kong, Peoples R China.
    Stork, Johannes A.
    Orebro Univ, Ctr Appl Autonomous Sensor Syst, Orebro, Sweden..
    End-to-end nonprehensile rearrangement with deep reinforcement learning and simulation-to-reality transfer (2019). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 119, pp. 119-134. Journal article (Refereed)
    Abstract [en]

    Nonprehensile rearrangement is the problem of controlling a robot to interact with objects through pushing actions in order to reconfigure the objects into a predefined goal pose. In this work, we rearrange one object at a time in an environment with obstacles using an end-to-end policy that maps raw pixels as visual input to control actions without any form of engineered feature extraction. To reduce the amount of training data that needs to be collected using a real robot, we propose a simulation-to-reality transfer approach. In the first step, we model the nonprehensile rearrangement task in simulation and use deep reinforcement learning to learn a suitable rearrangement policy, which requires on the order of hundreds of thousands of example actions for training. Thereafter, we collect a small dataset of only 70 episodes of real-world actions as supervised examples for adapting the learned rearrangement policy to real-world input data. In this process, we make use of newly proposed strategies for improving the reinforcement learning process, such as heuristic exploration and the curation of a balanced set of experiences. We evaluate our method in both simulated and real settings using a Baxter robot to show that the proposed approach can effectively improve the training process in simulation, as well as efficiently adapt the learned policy to the real-world application, even when the camera pose differs from that used in simulation. Additionally, we show that the learned system not only provides adaptive behavior to handle unforeseen events during execution, such as distracting objects, sudden changes in the positions of the objects, and obstacles, but also deals with obstacle shapes that were not present in the training process.
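
    A minimal sketch of the kind of end-to-end policy network such an approach trains: a small convolutional encoder in PyTorch that maps raw pixels to logits over a discrete set of push actions. Architecture sizes and the action count are illustrative, not those of the paper; the reinforcement-learning loop and the supervised sim-to-real adaptation are only indicated in the trailing comment.

```python
# Hedged sketch: pixels-to-push-action policy network (illustrative architecture).
import torch
import torch.nn as nn

class PushPolicy(nn.Module):
    def __init__(self, n_actions: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(               # raw pixels -> flat feature vector
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Sequential(                  # features -> action logits
            nn.LazyLinear(512), nn.ReLU(), nn.Linear(512, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        """obs: (batch, 3, H, W) images scaled to [0, 1]; returns action logits."""
        return self.head(self.encoder(obs))

# Supervised adaptation on a small set of real-world (image, action) examples could use:
#   logits = policy(images); loss = nn.functional.cross_entropy(logits, actions)
```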
