1601 - 1650 of 1668
  • 1601. Yuan, Qilong
    et al.
    Chen, I-Ming
    Lembono, Teguh Santoso
    Landén, Simon Nelson
    KTH, School of Industrial Engineering and Management (ITM).
    Malmgren, Victor
    KTH, School of Industrial Engineering and Management (ITM).
    Strategy for robot motion and path planning in robot taping. 2016. In: Frontiers of Mechanical Engineering, ISSN 2095-0233, Vol. 11, no. 2, p. 195-203. Article in journal (Refereed)
    Abstract [en]

    Covering objects with masking tape is a common surface-protection step in processes such as spray painting, plasma spraying and shot peening. Manual taping is tedious and requires considerable effort from workers. Taping requires a correct surface-covering strategy and proper attachment of the masking tape for efficient surface protection. We have introduced an automatic robot taping system consisting of a robot manipulator, a rotating platform, a 3D scanner and specially designed taping end-effectors. This paper focuses on surface-covering strategies for different classes of geometries. Methods and corresponding taping tools are introduced for the following classes of surfaces: cylindrical/extended surfaces, freeform surfaces with no grooves, surfaces with grooves, and rotationally symmetric surfaces. A collision-avoidance algorithm is introduced for the robot taping manipulation. With further improvements in segmenting the surfaces of taped parts and in the tape-cutting mechanism, this solution, combining the taping tool and the taping methodology, can become a practical taping package that assists humans in this tedious and time-consuming work.

  • 1602.
    Zagal, Juan Cristobal
    et al.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Björkman, Eva
    Lindeberg, Tony
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Roland, P.
    Significance determination for the scale-space primal sketch by comparison of statistics of scale-space blob volumes computed from PET signals vs. residual noise. 2000. In: HBM'00, published in NeuroImage, Vol. 11, no. 5, p. 493. Conference paper (Refereed)
    Abstract [en]

    A dominant approach to brain mapping is to define functional regions in the brain by analyzing brain activation images obtained by PET or fMRI. In [1], it has been shown that the scale-space primal sketch provides a useful tool for such analysis. Attractive properties of this method are that it makes only a few assumptions about the data and that the process for extracting activations is fully automatic.

    In the present version of the scale-space primal sketch, however, there is no method for determining p-values. The purpose here is to present a new methodology for addressing this question, by introducing a descriptor referred to as the -curve, which serves as a first step towards determining the probability of false positives, i.e. alpha.
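The blob detection machinery that the scale-space primal sketch builds on can be illustrated with a small sketch (all of it illustrative: a synthetic 1D signal and the scale-normalized Laplacian as the blob measure, not the paper's grey-level blob volumes or PET data):

```python
import numpy as np

def gaussian_kernel(sigma):
    # sampled 1D Gaussian, truncated at 3*sigma, normalized to unit sum
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def scale_normalized_laplacian(signal, sigmas):
    # For each scale t = sigma^2, smooth the signal and take t * d^2/dx^2:
    # blobs show up as strong extrema over both position and scale.
    responses = []
    for sigma in sigmas:
        smoothed = np.convolve(signal, gaussian_kernel(sigma), mode="same")
        responses.append(sigma**2 * np.gradient(np.gradient(smoothed)))
    return np.array(responses)

# toy signal: one Gaussian bump of width sigma_0 = 5 centred at x = 50
x = np.arange(100, dtype=float)
signal = np.exp(-(x - 50.0)**2 / (2.0 * 5.0**2))
sigmas = [1, 2, 4, 8, 16]
resp = scale_normalized_laplacian(signal, sigmas)
scale_idx, pos = np.unravel_index(np.argmin(resp), resp.shape)
```

For a Gaussian bump, the strongest (most negative) response lands at the bump centre and at a scale comparable to the bump width; the normalization by t is what makes blob strength comparable across scales.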

  • 1603.
    Zhang, Bailing
    et al.
    Xi'an Jiaotong-Liverpool University .
    Pham, Tuan D
    School of Engineering and Information Technology, The University of New South Wales, Canberra, ACT 2600, Australia.
    Multiple features based two-stage hybrid classifier ensembles for subcellular phenotype images classification. 2010. In: International Journal of Biometrics and Bioinformatics (IJBB), ISSN 1985-2347, Vol. 4, no. 5, p. 176-193. Article in journal (Refereed)
    Abstract [en]

    Subcellular localization is a key functional characteristic of proteins. As an interesting "bio-image informatics" application, an automatic, reliable and efficient prediction system for protein subcellular localization can be used to establish knowledge of the spatial distribution of proteins within living cells, and permits screening systems for drug discovery or for early diagnosis of a disease. In this paper, we propose a two-stage multiple classifier system that improves classification reliability by introducing a rejection option. The system is built as a cascade of two classifier ensembles. The first ensemble consists of a set of binary SVMs that learns a general classification rule; the second ensemble, which also includes three distinct classifiers, focuses on the exceptions rejected by the rule. A new way to induce diversity in the classifier ensembles is proposed by designing classifiers based on different feature descriptions. In addition to the Subcellular Location Features (SLF) generally adopted in earlier research, three well-known texture descriptors are applied to cell phenotype images: local binary patterns (LBP), Gabor filtering and the gray-level co-occurrence matrix (GLCM). The different texture feature sets provide sufficient diversity among base classifiers, which is a necessary condition for improvement in ensemble performance. Using the public benchmark 2D HeLa cell images, the proposed system obtains a high classification accuracy of 96% at a rejection rate of 21% by taking advantage of the complementary strengths of feature construction and majority-voting-based fusion of the classifiers' decisions.
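The two-stage idea, accepting a sample only when the first-stage ensemble agrees and otherwise rejecting it to a second stage, can be sketched as plain majority voting with a rejection option (a generic sketch; the classifier outputs below are made-up labels, not SVM decisions on HeLa features):

```python
import numpy as np

def vote_with_rejection(predictions, min_agreement=2):
    # predictions: (n_classifiers, n_samples) array of class labels.
    # Accept a sample only if at least `min_agreement` classifiers agree;
    # otherwise reject it (label -1) and pass it to the second stage.
    preds = np.asarray(predictions)
    n_clf, n_samples = preds.shape
    out = np.full(n_samples, -1)
    for j in range(n_samples):
        labels, counts = np.unique(preds[:, j], return_counts=True)
        if counts.max() >= min_agreement:
            out[j] = labels[np.argmax(counts)]
    return out

# three hypothetical first-stage classifiers on five samples
stage1 = [[0, 1, 2, 1, 0],
          [0, 1, 0, 2, 0],
          [0, 2, 1, 0, 0]]
decisions = vote_with_rejection(stage1, min_agreement=2)
```

Samples where no two classifiers agree come back as -1; only those rejects need to reach the (presumably slower) second-stage ensemble.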

  • 1604.
    Zhang, Chao
    et al.
    School of Engineering and Information Technology, The University of New South Wales, Canberra, Australia; CSIRO Computational Informatics, North Ryde, Australia.
    Sun, Changming
    CSIRO Computational Informatics, North Ryde, Australia.
    Su, Ran
    Bioinformatics Institute, Matrix, Singapore.
    Pham, Tuan D
    Aizu Research Cluster for Medical Engineering and Informatics, Research Center for Advanced Information Science and Technology, The University of Aizu, Fukushima, Japan.
    Clustered nuclei splitting via curvature information and gray-scale distance transform. 2015. In: Journal of Microscopy, ISSN 0022-2720, E-ISSN 1365-2818, Vol. 259, no. 1, p. 36-52. Article in journal (Refereed)
    Abstract [en]

    Clusters or clumps of cells or nuclei are frequently observed in two-dimensional images of thick tissue sections. Correct and accurate segmentation of overlapping cells and nuclei is important for many biological and biomedical applications. Many existing algorithms split clumps through binarization of the input images; the intensity information of the original image is therefore lost during this process. In this paper, we present an algorithm based on curvature information, the gray-scale distance transform, and shortest-path splitting lines, which makes full use of concavity and image intensity information to find markers, each of which represents an individual object, and detects accurate splitting lines between objects using shortest paths and junction adjustment. The proposed algorithm is tested on both synthetic and real nuclei images. Experimental results show that the performance of the proposed method is better than that of the marker-controlled watershed method and the ellipse-fitting method.
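The marker-finding role of the distance transform can be illustrated on a binary toy clump (an illustrative stand-in: the paper uses a gray-scale distance transform on the original intensities, while this sketch uses a brute-force binary Euclidean distance transform on two overlapping discs):

```python
import numpy as np

def distance_transform(mask):
    # Brute-force Euclidean distance transform: for every foreground pixel,
    # the distance to the nearest background pixel (fine for small images).
    fg = np.argwhere(mask)
    bg = np.argwhere(~mask)
    dist = np.zeros(mask.shape)
    for (r, c) in fg:
        d = np.sqrt(((bg - (r, c)) ** 2).sum(axis=1))
        dist[r, c] = d.min()
    return dist

# two touching discs of radius 10, centred at x = 15 and x = 33
yy, xx = np.mgrid[0:30, 0:50]
clump = ((xx - 15)**2 + (yy - 15)**2 <= 100) | ((xx - 33)**2 + (yy - 15)**2 <= 100)
dist = distance_transform(clump)
```

Each nucleus produces its own peak in the distance map (here near (15, 15) and (15, 33)), while the neck between the two discs scores lower; that is why such peaks work as per-object markers for splitting a clump.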

  • 1605.
    Zhang, Chao
    et al.
    School of Engineering and Information Technology, The University of New South Wales, Canberra ACT, Australia.
    Sun, Changming
    CSIRO Mathematics, Informatics and Statistics, North Ryde, NSW, Australia.
    Su, Ran
    School of Engineering and Information Technology, The University of New South Wales, Canberra ACT, Australia.
    Pham, Tuan D
    Aizu Research Cluster for Medical Engineering and Informatics, Research Center for Advanced Information Science and Technology, The University of Aizu Aizu-Wakamatsu, Fukushima, Japan.
    Segmentation of clustered nuclei based on curvature weighting. 2012. In: IVCNZ '12, Proceedings of the 27th Conference on Image and Vision Computing New Zealand, ACM Digital Library, 2012, p. 49-54. Conference paper (Other academic)
    Abstract [en]

    Clusters of nuclei are frequently observed in thick tissue section images. Segmenting overlapping nuclei is important in many biomedical applications. Many existing methods tend to produce under-segmented results when the overlap rate is high. In this paper, we present a curvature-weighting-based algorithm that weights each pixel using the curvature information of its nearby boundaries to extract markers, each of which represents an object, from input images. We then use marker-controlled watershed to obtain the final segmentation. Test results using both synthetic and real cell images are presented in the paper.

  • 1606.
    Zhang, Chao
    et al.
    The University of New South Wales, Canberra, ACT, Australia .
    Sun, Changming
    CSIRO Mathematics, Informatics and Statistics, NSW, Australia .
    Su, Ran
    The University of New South Wales, Canberra, ACT, Australia .
    Pham, Tuan D
    The University of Aizu, Fukushima, Japan .
    Segmentation of clustered nuclei based on curvature-weighting. 2012. In: Proceedings of the 27th Conference on Image and Vision Computing New Zealand, 2012, p. 49-54. Conference paper (Refereed)
    Abstract [en]

    Clusters of nuclei are frequently observed in thick tissue section images. Segmenting overlapping nuclei is important in many biomedical applications. Many existing methods tend to produce under-segmented results when the overlap rate is high. In this paper, we present a curvature-weighting-based algorithm that weights each pixel using the curvature information of its nearby boundaries to extract markers, each of which represents an object, from input images. We then use marker-controlled watershed to obtain the final segmentation. Test results using both synthetic and real cell images are presented in the paper.
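The boundary curvature that drives the weighting can be computed from a sampled contour with central finite differences (a minimal sketch, not the authors' implementation; the circle below is a synthetic test contour):

```python
import numpy as np

def contour_curvature(points):
    # Discrete curvature of a sampled planar contour given as an (N, 2)
    # array, via k = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2) with
    # derivatives taken by central differences along the sample index.
    pts = np.asarray(points, dtype=float)
    dx = np.gradient(pts[:, 0])
    dy = np.gradient(pts[:, 1])
    ddx = np.gradient(dx)
    ddy = np.gradient(dy)
    denom = (dx**2 + dy**2) ** 1.5
    return (dx * ddy - dy * ddx) / np.maximum(denom, 1e-12)

# a counterclockwise circle of radius 5 has constant curvature 1/5
theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
circle = np.stack([5 * np.cos(theta), 5 * np.sin(theta)], axis=1)
k = contour_curvature(circle)
```

On a counterclockwise outer boundary, concave stretches between touching nuclei show up with the opposite sign to the convex boundary, which is the cue a curvature-weighting scheme exploits.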

  • 1607.
    Zhang, Cheng
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Ek, Carl Henrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Damianou, Andreas
    The University of Sheffield.
    Kjellström, Hedvig
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Factorized Topic Models. 2013. Conference paper (Refereed)
    Abstract [en]

    In this paper we present a modification to a latent topic model, which makes the model exploit supervision to produce a factorized representation of the observed data. The structured parameterization separately encodes variance that is shared between classes from variance that is private to each class by the introduction of a new prior over the topic space. The approach allows for a more efficient inference and provides an intuitive interpretation of the data in terms of an informative signal together with structured noise. The factorized representation is shown to enhance inference performance for image, text, and video classification.

  • 1608.
    Zhang, Cheng
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ek, Carl Henrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Gratal, Xavi
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Pokorny, Florian T.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kjellström, Hedvig
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Supervised Hierarchical Dirichlet Processes with Variational Inference. 2013. In: 2013 IEEE International Conference on Computer Vision Workshops (ICCVW), IEEE, 2013, p. 254-261. Conference paper (Refereed)
    Abstract [en]

    We present an extension to the Hierarchical Dirichlet Process (HDP), which allows for the inclusion of supervision. Our model marries the non-parametric benefits of HDP with those of Supervised Latent Dirichlet Allocation (SLDA) to enable learning the topic space directly from data while simultaneously including the labels within the model. The proposed model is learned using variational inference which allows for the efficient use of a large training dataset. We also present the online version of variational inference, which makes the method scalable to very large datasets. We show results comparing our model to a traditional supervised parametric topic model, SLDA, and show that it outperforms SLDA on a number of benchmark datasets.

  • 1609.
    Zhang, Cheng
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kjellström, Hedvig
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    How to Supervise Topic Models. 2014. In: Computer Vision - ECCV 2014 Workshops: Zurich, Switzerland, September 6-7 and 12, 2014, Proceedings, Part II / [ed] Agapito, Bronstein, Rother, Zurich: Springer Publishing Company, 2014, p. 500-515. Chapter in book (Refereed)
    Abstract [en]

    Supervised topic models are important machine learning tools which have been widely used in computer vision as well as in other domains. However, there is a gap in the understanding of the impact of supervision on the model. In this paper, we present a thorough analysis of the behaviour of supervised topic models using Supervised Latent Dirichlet Allocation (SLDA) and propose two factorized supervised topic models, which factorize the topics into signal and noise. Experimental results on both synthetic data and real-world data for computer vision tasks show that supervision needs to be boosted to be effective and that factorized topic models are able to enhance the performance.

  • 1610.
    Zhang, Cheng
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Kjellström, Hedvig
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Ek, C. H.
    Inter-battery topic representation learning. 2016. In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer, 2016, p. 210-226. Conference paper (Refereed)
    Abstract [en]

    In this paper, we present the Inter-Battery Topic Model (IBTM). Our approach extends traditional topic models by learning a factorized latent variable representation. The structured representation leads to a model that marries benefits traditionally associated with a discriminative approach, such as feature selection, with those of a generative model, such as principled regularization and ability to handle missing data. The factorization is provided by representing data in terms of aligned pairs of observations as different views. This provides means for selecting a representation that separately models topics that exist in both views from the topics that are unique to a single view. This structured consolidation allows for efficient and robust inference and provides a compact and efficient representation. Learning is performed in a Bayesian fashion by maximizing a rigorous bound on the log-likelihood. Firstly, we illustrate the benefits of the model on a synthetic dataset. The model is then evaluated in both uni- and multi-modality settings on two different classification tasks with off-the-shelf convolutional neural network (CNN) features which generate state-of-the-art results with extremely compact representations.

  • 1611.
    Zhang, Cheng
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Song, Dan
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kjellström, Hedvig
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Contextual Modeling with Labeled Multi-LDA. 2013. In: 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2013, p. 2264-2271. Conference paper (Refereed)
    Abstract [en]

    Learning about activities and object affordances from human demonstration provides important cognitive capabilities for robots functioning in human environments, for example, being able to classify objects and knowing how to grasp them for different tasks. To achieve such capabilities, we propose a Labeled Multi-modal Latent Dirichlet Allocation (LM-LDA), which is a generative classifier trained with two different data cues; for instance, one cue can be a traditional visual observation and another can be contextual information. The novel aspects of the LM-LDA classifier, compared to other methods for encoding contextual information, are that: I) even with only one of the cues present at execution time, the classification will be better than single-cue classification, since cue correlations are encoded in the model; II) one of the cues (e.g., common grasps for the observed object class) can be inferred from the other cue (e.g., the appearance of the observed object). This makes the method suitable for robot online and transfer learning, a capability highly desirable in cognitive robotic applications. Our experiments show a clear improvement for classification and a reasonable inference of the missing data.

  • 1612.
    Zhang, Guangyun
    et al.
    The University of New South Wales, Campbell, ACT 2600, Australia .
    Jia, Xiuping
    The University of New South Wales, Campbell, ACT 2600, Australia .
    Pham, Tuan D
    The University of New South Wales, Campbell, ACT 2600, Australia .
    Crane, Denis I
    Eskitis Institute for Cell and Molecular Therapies, and School of Biomolecular and Physical Sciences, Griffith University, Nathan Campus, QLD 4111, Australia .
    Multistage spatial property based segmentation for quantification of fluorescence distribution in cells. 2010. Conference paper (Refereed)
    Abstract [en]

    The distribution of fluorescence in cells is often interpreted by simple visualization of microscope-derived images for qualitative studies. In other cases, however, it is desirable to quantify the distribution of fluorescence using digital image processing techniques. In this paper, the challenges of fluorescence segmentation due to the noise present in the data are addressed. We report that intensity measurements alone do not allow separation of overlapping data between target and background. Consequently, spatial properties derived from neighborhood profiles were included. Mathematical morphology operations were implemented for cell boundary extraction, and a window-based contrast measure was developed for identification of fluorescence puncta. All of these operations were applied in the proposed multistage processing scheme. The test results show that the spatial measures effectively enhance the separability of the target.
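A window-based contrast measure of the kind mentioned above can be sketched with a Michelson-style ratio over a sliding window (an assumption on my part: the abstract does not give the exact formula, so this is only an illustrative variant):

```python
import numpy as np

def window_contrast(image, w=3):
    # Michelson-style local contrast (max - min) / (max + min) over a
    # sliding w x w window: bright fluorescence puncta on a dim background
    # score high, while flat regions score near zero.
    img = np.asarray(image, dtype=float)
    H, W = img.shape
    out = np.zeros_like(img)
    r = w // 2
    for i in range(H):
        for j in range(W):
            win = img[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            hi, lo = win.max(), win.min()
            out[i, j] = (hi - lo) / (hi + lo + 1e-9)
    return out

# a single bright punctum (100) on a uniform dim background (10)
img = np.full((9, 9), 10.0)
img[4, 4] = 100.0
c = window_contrast(img)
```

Thresholding such a contrast map, rather than the raw intensity, is one way to keep puncta separable from a background whose absolute intensity overlaps the target's.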

  • 1613.
    Zhang, Hanqing
    Umeå University, Faculty of Science and Technology, Department of Physics.
    Digital holography and image processing methods for applications in biophysics. 2018. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Understanding the dynamic mechanisms, morphology and behavior of bacteria is important for developing new therapeutics to cure diseases. For example, bacterial adhesion mechanisms are prerequisites for the initiation of infections, and for several bacterial strains this adhesion process is mediated by adhesive surface organelles, also known as fimbriae. Escherichia coli (E. coli) is a bacterium expressing fimbriae, and pathogenic strains can cause severe diseases in fluidic environments such as the urinary tract and intestine. To better understand how E. coli cells use their fimbriae to attach and remain attached to surfaces when exposed to a fluid flow, experiments using microfluidic channels are important; and to assess quantitative information about the adhesion process and cellular information about morphology, location and orientation, the imaging capability of the experimental technique is vital.

    In-line digital holographic microscopy (DHM) is a powerful imaging technique that can be realized around a conventional light microscope. It is a non-invasive technique that requires no staining or sectioning of the sample, which is observed in vitro. DHM provides holograms containing three-dimensional (3D) intensity and phase information of the cells under study with high temporal and spatial resolution. By applying image processing algorithms to the holograms, quantitative measurements can provide information about the position, shape, orientation and optical thickness of a cell, as well as dynamic cell properties such as speed and growth rate.

    In this thesis, we aim to improve the DHM technique and develop image processing methods to track and assess cellular properties in microfluidic channels, to shed light on bacterial adhesion and cell morphology. To achieve this, we implemented a DHM technique and developed image processing algorithms that provide robust and quantitative analysis of holograms. We improved the cell detection accuracy and efficiency in DHM holograms by developing an algorithm for detection of cell diffraction patterns. To improve the 3D detection accuracy of in-line digital holography, we developed a novel iterative algorithm that uses multiple wavelengths. We verified our algorithms using synthetic, colloidal and cell data and applied them for detection, tracking and analysis. We demonstrated the performance when tracking bacteria with sub-micrometer accuracy and kHz temporal resolution, as well as how DHM can be used to profile a microfluidic flow using a large number of colloidal particles. We also demonstrated how the results of cell shape analysis based on image segmentation can be used to estimate the hydrodynamic force on tethered capsule-shaped cells in microfluidic flows near a surface.

  • 1614.
    Zhang, Hanqing
    et al.
    Umeå University, Faculty of Science and Technology, Department of Physics.
    Stangner, Tim
    Umeå University, Faculty of Science and Technology, Department of Physics.
    Wiklund, Krister
    Umeå University, Faculty of Science and Technology, Department of Physics.
    Rodrigues, Alvaro
    Umeå University, Faculty of Science and Technology, Department of Physics.
    Andersson, Magnus
    Umeå University, Faculty of Science and Technology, Department of Physics.
    UmUTracker: a versatile MATLAB program for automated particle tracking of 2D light microscopy or 3D digital holography data. 2017. In: Computer Physics Communications, ISSN 0010-4655, E-ISSN 1879-2944, Vol. 219, p. 390-399. Article in journal (Refereed)
    Abstract [en]

    We present a versatile and fast MATLAB program (UmUTracker) that automatically detects and tracks particles by analyzing video sequences acquired by either light microscopy or digital in-line holographic microscopy. Our program detects the 2D lateral positions of particles with an algorithm based on the isosceles triangle transform, and reconstructs their 3D axial positions by a fast implementation of the Rayleigh-Sommerfeld model using a radial intensity profile. To validate the accuracy and performance of our program, we first track the 2D position of polystyrene particles using bright field and digital holographic microscopy. Second, we determine the 3D particle position by analyzing synthetic and experimentally acquired holograms. Finally, to highlight the full program features, we profile the microfluidic flow in a 100 µm high flow chamber. This result agrees with computational fluid dynamics simulations. On a regular desktop computer UmUTracker can detect, analyze, and track multiple particles at 5 frames per second for a template size of 201 x 201 in a 1024 x 1024 image. To enhance usability and to make it easy to implement new functions we used object-oriented programming. UmUTracker is suitable for studies related to: particle dynamics, cell localization, colloids and microfluidic flow measurement.

    Program summary

    Program title: UmUTracker Program Files doi: http://dx.doi.org/10.17632/fkprs4s6xp.1

    Licensing provisions: Creative Commons by 4.0 (CC by 4.0)

    Programming language: MATLAB

    Nature of problem: 3D multi-particle tracking is a common technique in physics, chemistry and biology. However, in terms of accuracy, reliable particle tracking is a challenging task since results depend on sample illumination, particle overlap, motion blur and noise from recording sensors. Additionally, computational performance is an issue if, for example, a computationally expensive process is executed, such as axial particle position reconstruction from digital holographic microscopy data. Versatile, robust tracking programs that handle these concerns and provide powerful post-processing options are scarce.

    Solution method: UmUTracker is a multi-functional tool to extract particle positions from long video sequences acquired with either light microscopy or digital holographic microscopy. The program provides an easy-to-use graphical user interface (GUI) for both tracking and post-processing that does not require any programming skills to analyze data from particle tracking experiments. UmUTracker first conducts automatic 2D particle detection, even under noisy conditions, using a novel circle detector based on the isosceles triangle sampling technique with a multi-scale strategy. To reduce the computational load for 3D tracking, it uses an efficient implementation of the Rayleigh-Sommerfeld light propagation model. To analyze and visualize the data, an efficient data analysis step is included, which can for example show 4D flow visualization using 3D trajectories. Additionally, UmUTracker is easy to extend with user-customized modules due to the object-oriented programming style.

    Additional comments: Program obtainable from https://sourceforge.net/projects/umutracker/
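The Rayleigh-Sommerfeld reconstruction that UmUTracker accelerates is commonly implemented with the angular-spectrum FFT method, sketched below in Python rather than MATLAB (all parameters, grid size, pixel pitch and wavelength, are made up for the example; this is not UmUTracker's code):

```python
import numpy as np

def angular_spectrum_propagate(field, z, wavelength, dx):
    # Propagate a complex optical field by distance z using the angular
    # spectrum method, the standard FFT implementation of Rayleigh-Sommerfeld
    # diffraction (assumes a square grid with pixel pitch dx).
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2   # squared axial frequency
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)         # evanescent part dropped
    return np.fft.ifft2(np.fft.fft2(field) * H)

# refocusing check: propagate a random-phase field forward, then back
rng = np.random.default_rng(0)
field = np.exp(1j * rng.uniform(0, 2 * np.pi, (64, 64)))
back = angular_spectrum_propagate(
    angular_spectrum_propagate(field, 50e-6, 0.633e-6, 1e-6),
    -50e-6, 0.633e-6, 1e-6)
```

Within the propagating band the transfer function has unit magnitude, so propagating forward and back by the same distance recovers the input field, a handy self-check for any refocusing routine.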

  • 1615.
    Zhao, Yuxin
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, Faculty of Science & Engineering.
    Position Estimation in Uncertain Radio Environments and Trajectory Learning. 2017. Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Inferring hidden states from noisy observations and making predictions based on a set of input states and output observations are two challenging problems in many research areas. Examples of applications include position estimation from various measurable radio signals in indoor environments, self-navigation for autonomous cars, modeling and prediction of traffic flows, and flow pattern analysis for crowds of people. In this thesis, we mainly use the Bayesian inference framework for position estimation in an indoor environment, where the radio propagation is uncertain. In the Bayesian inference framework, it is usually hard to obtain analytical solutions. In such cases, we resort to Monte Carlo methods to solve the problem numerically. In addition, we apply Bayesian nonparametric modeling for trajectory learning in sports analytics.

    The main contribution of this thesis is to propose sequential Monte Carlo methods, namely particle filtering and smoothing, for a novel indoor positioning framework based on proximity reports. The experimental results have been further compared with theoretical bounds derived for this proximity-based positioning system. To improve the performance, Bayesian non-parametric modeling, namely Gaussian processes, has been applied to better characterize the radio propagation conditions. The position estimates obtained sequentially using filtering and smoothing are then further compared with a static solution, known as fingerprinting.

    Moreover, we propose a trajectory learning framework for flow estimation in sports analytics based on Gaussian processes. To mitigate the computational burden of Gaussian processes, a grid-based on-line algorithm has been adopted for real-time applications. The resulting trajectory model for an individual athlete can be used for many purposes, such as performance prediction and analysis, and health condition monitoring. Furthermore, we aim at modeling the flow of groups of athletes, which could potentially be used for flow pattern recognition, strategy planning, etc.
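The sequential Monte Carlo machinery (particle filtering) at the heart of the proposed positioning framework can be sketched on a 1D toy problem (a bootstrap filter with a random-walk motion model and Gaussian measurement noise; the thesis's proximity-report likelihood is replaced by this simpler stand-in):

```python
import numpy as np

def particle_filter(observations, n_particles=2000, motion_std=0.5,
                    obs_std=1.0, rng=None):
    # Minimal bootstrap particle filter for a 1D random-walk position model
    # observed with additive Gaussian noise.
    rng = rng or np.random.default_rng(0)
    particles = rng.uniform(-10, 10, n_particles)   # diffuse prior
    estimates = []
    for y in observations:
        # predict: propagate particles through the motion model
        particles = particles + rng.normal(0, motion_std, n_particles)
        # update: weight particles by the measurement likelihood
        w = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
        w /= w.sum()
        # posterior-mean estimate, then multinomial resampling to
        # counteract weight degeneracy
        estimates.append(np.sum(w * particles))
        particles = rng.choice(particles, size=n_particles, p=w)
    return np.array(estimates)

# a target standing still at x = 3, observed with unit-variance noise
rng = np.random.default_rng(1)
obs = 3.0 + rng.normal(0, 1.0, 50)
est = particle_filter(obs, rng=rng)
```

The same predict-weight-resample loop carries over to real positioning once the Gaussian likelihood is swapped for a model of the radio observations.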

  • 1616.
    Zhong, Yang
    et al.
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Hedman, Anders
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Li, Haibo
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    How Good Can a Face Identifier Be Without Learning. 2016. In: Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349. Article in journal (Refereed)
    Abstract [en]

    Constructing discriminative features is an essential issue in developing face recognition algorithms. There are two schools in how features are constructed: hand-crafted features and learned features from data. A clear trend in the face recognition community is to use learned features to replace hand-crafted ones for face recognition, due to the superb performance achieved by learned features through Deep Learning networks. Given the negative aspects of database-dependent solutions, we consider an alternative and demonstrate that, for good generalization performance, developing face recognition algorithms by using handcrafted features is surprisingly promising when the training dataset is small or medium sized. We show how to build such a face identifier with our Block Matching method which leverages the power of the Gabor phase in face images. Although no learning process is involved, empirical results show that the performance of this “designed” identifier is comparable (superior) to state-of-the-art identifiers and even close to Deep Learning approaches.

  • 1617.
    Zhong, Yang
    et al.
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Hedman, Anders
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Li, Haibo
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    How good can a face identifier be without learning?2017In: 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, VISIGRAPP 2016, Springer, 2017, Vol. 693, p. 515-533Conference paper (Refereed)
    Abstract [en]

    Constructing discriminative features is an essential issue in developing face recognition algorithms. There are two schools in how features are constructed: hand-crafted features and learned features from data. A clear trend in the face recognition community is to use learned features to replace hand-crafted ones for face recognition, due to the superb performance achieved by learned features through Deep Learning networks. Given the negative aspects of database-dependent solutions, we consider an alternative and demonstrate that, for good generalization performance, developing face recognition algorithms by using hand-crafted features is surprisingly promising when the training dataset is small or medium sized. We show how to build such a face identifier with our Block Matching method, which leverages the power of the Gabor phase in face images. Although no learning process is involved, empirical results show that the performance of this “designed” identifier is comparable, and in cases superior, to state-of-the-art identifiers and even close to Deep Learning approaches.

  • 1618. Zhong, Yang
    et al.
    Li, Haibo
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Is block matching an alternative tool to LBP for face recognition?2014Conference paper (Refereed)
    Abstract [en]

    In this paper, we introduce Block Matching (BM) as an alternative patch-based local matching approach for solving the face recognition problem. Block Matching enables an image patch of the probe face image to search for its best match among displaced positions in the gallery face image. This matching strategy is very effective for handling spatial shift between two images, and it is radically different from that of the widely used LBP-type patch-based local matching approaches. Our evaluations on the FERET and CMU-PIE databases show that the performance of this simple method is comparable, and in cases superior, to that of the popular LBP approach. We argue that Block Matching could provide face recognition with a new, more flexible algorithm architecture. One can expect much higher performance when it is combined with other feature extraction techniques, such as Gabor wavelets and deep learning.
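The displacement search at the heart of Block Matching can be sketched as follows. This is an illustrative reconstruction from the abstract, not the authors' implementation; the block size, search radius and SSD score are assumptions:

```python
import numpy as np

def block_match_distance(probe, gallery, block=4, search=2):
    """For each probe block, take the minimum SSD to gallery patches
    displaced within +/-search pixels; sum the minima over all blocks."""
    h, w = probe.shape
    g = np.pad(gallery, search, mode='edge')
    total = 0.0
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            p = probe[i:i + block, j:j + block]
            best = np.inf
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    q = g[search + i + di:search + i + di + block,
                          search + j + dj:search + j + dj + block]
                    best = min(best, float(np.sum((p - q) ** 2)))
            total += best
    return total

rng = np.random.RandomState(1)
face = rng.rand(16, 16)
shifted = np.roll(face, 1, axis=1)   # same "face", shifted one pixel
other = rng.rand(16, 16)             # an unrelated "face"
d_same = block_match_distance(face, shifted)
d_other = block_match_distance(face, other)
```

On the toy data, the same face shifted by one pixel scores a much smaller distance than an unrelated image, which is exactly the shift tolerance the abstract describes.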

  • 1619.
    Zhong, Yang
    et al.
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Li, Haibo
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Leveraging Gabor Phase for Face Identification in Controlled Scenarios2016In: Proceedings of the 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Science and Technology Publications,Lda , 2016, p. 49-58Conference paper (Refereed)
    Abstract [en]

    Gabor features have been widely employed in solving face recognition problems in controlled scenarios. To construct discriminative face features from the complex Gabor space, the amplitude information is commonly preferred, while the other component, the phase, is not well utilized due to its sensitivity to spatial shift. In this paper, we address the problem of face recognition in controlled scenarios. Our focus is on the selection of a suitable signal representation and the development of a better strategy for face feature construction. We demonstrate that, through our Block Matching scheme, Gabor phase information is powerful enough to improve the performance of face identification. Compared to state-of-the-art Gabor filtering based approaches, the proposed algorithm features much lower algorithmic complexity. This is mainly because Block Matching enables the use of high-definition Gabor phase, so a single-scale Gabor frequency band is sufficient for discrimination. Furthermore, no learning process is involved in the facial feature construction, which avoids the risk of building a database-dependent algorithm. Benchmark evaluations show that the proposed learning-free algorithm outperforms state-of-the-art Gabor approaches and is even comparable to Deep Learning solutions.
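A single-scale complex Gabor filter and its phase map, the raw material of the approach above, can be sketched as follows. The filter parameters and the valid-region convolution are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def gabor_kernel(size=15, wavelength=6.0, theta=0.0, sigma=3.0):
    """Complex Gabor filter at one scale and orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.exp(1j * 2 * np.pi * xr / wavelength)

def gabor_phase(image, kern):
    """Gabor phase map via direct (valid-region) correlation."""
    kh, kw = kern.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1), dtype=complex)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * np.conj(kern))
    return np.angle(out)

# A horizontal sinusoidal grating as a toy "face" image.
img = np.sin(2 * np.pi * np.arange(32) / 6.0)[None, :] * np.ones((32, 1))
phase = gabor_phase(img, gabor_kernel())
```

The phase map, not the amplitude, is what the Block Matching scheme compares between probe and gallery images.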

  • 1620.
    Zhong, Yang
    et al.
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Sullivan, Josephine
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Li, Haibo
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Face Attribute Prediction Using Off-The-Shelf CNN Features2016In: 2016 International Conference on Biometrics, ICB 2016, Institute of Electrical and Electronics Engineers (IEEE), 2016, article id 7550092Conference paper (Refereed)
    Abstract [en]

    Predicting attributes from face images in the wild is a challenging computer vision problem. To automatically describe face attributes from face-containing images, one traditionally needs to cascade three technical blocks, face localization, facial descriptor construction, and attribute classification, in a pipeline. As a typical classification problem, face attribute prediction has been addressed using deep learning. Current state-of-the-art performance was achieved by using two cascaded Convolutional Neural Networks (CNNs), which were specifically trained to learn face localization and attribute description. In this paper, we experiment with an alternative way of employing the power of deep representations from CNNs. Combined with conventional face localization techniques, we use off-the-shelf architectures trained for face recognition to build facial descriptors. Recognizing that the describable face attributes are diverse, our face descriptors are constructed from different levels of the CNNs for different attributes, to best facilitate face attribute prediction. Experiments on two large datasets, LFWA and CelebA, show that our approach is entirely comparable to the state-of-the-art. Our findings not only demonstrate an efficient face attribute prediction approach, but also raise an important question: how should we leverage the power of off-the-shelf CNN representations for novel tasks?
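The overall recipe, frozen off-the-shelf CNN features plus a simple per-attribute classifier, can be sketched with stand-in features. The random matrix below merely plays the role of CNN activations; the logistic probe, learning rate and toy attribute are all assumptions:

```python
import numpy as np

def train_logistic(X, y, lr=0.5, epochs=200):
    """Per-attribute linear probe trained on frozen features."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
        g = p - y                                # logistic-loss gradient
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

rng = np.random.RandomState(0)
# Stand-in for mid-level CNN activations of 200 face crops.
feats = rng.randn(200, 32)
# Toy binary attribute correlated with one feature dimension.
labels = (feats[:, 0] + 0.1 * rng.randn(200) > 0).astype(float)
w, b = train_logistic(feats, labels)
acc = ((feats @ w + b > 0).astype(float) == labels).mean()
```

One such probe per attribute, each fed from whichever CNN layer suits that attribute best, is the structure the abstract describes.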

  • 1621.
    Zhong, Yang
    et al.
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Sullivan, Josephine
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Li, Haibo
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Leveraging Mid-level Deep Representations for Predicting Face Attributes in the Wild2016In: 2016 IEEE International Conference on Image Processing (ICIP), Institute of Electrical and Electronics Engineers (IEEE), 2016Conference paper (Refereed)
  • 1622.
    Zhong, Yang
    et al.
    KTH, School of Computer Science and Communication (CSC).
    Sullivan, Josephine
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Li, Haibo
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Transferring from Face Recognition to Face Attribute Prediction through Adaptive Selection of Off-the-shelf CNN RepresentationsManuscript (preprint) (Other academic)
  • 1623.
    Zhu, Peter
    Linköping University, Department of Science and Technology.
    Deblurring Algorithms for Out-of-focus Infrared Images2010Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    An image that has been subject to the out-of-focus phenomenon has reduced sharpness, contrast and level of detail depending on the amount of defocus. Restoring out-of-focus images is a complex task due to the information loss that occurs. However, there exist many restoration algorithms that attempt to revert this defocus by estimating a noise model and utilizing the point spread function. The purpose of this thesis, proposed by FLIR Systems, was to find a robust algorithm that can restore focus and, from the customer’s perspective, be user friendly. The thesis includes three implemented algorithms that have been compared to MATLAB’s built-in one. Three image series were used to evaluate the limits and performance of each algorithm, based on deblurring quality, implementation complexity, computation time and usability.

    Results show that the Alternating Direction Method for total variation deconvolution proposed by Tao et al. [29], together with its modified discrete cosine transform version, restores the defocused images with the highest quality. These two algorithms include features such as fast computation time, few parameters to tune and powerful noise reduction.
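As a simpler classical stand-in for the deconvolution methods compared in the thesis (not the Tao et al. algorithm), here is a Richardson-Lucy sketch using FFT-based circular convolution; the PSF and test image are invented:

```python
import numpy as np

def pad_psf(psf, shape):
    """Embed a small PSF in a full-size array, centred at the origin."""
    out = np.zeros(shape)
    kh, kw = psf.shape
    out[:kh, :kw] = psf
    return np.roll(out, (-(kh // 2), -(kw // 2)), axis=(0, 1))

def fft_conv(img, psf):
    """Circular convolution of img with a small PSF via the FFT."""
    otf = np.fft.fft2(pad_psf(psf, img.shape))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * otf))

def richardson_lucy(blurred, psf, iters=30):
    """Classical Richardson-Lucy multiplicative updates."""
    est = np.full_like(blurred, blurred.mean())
    flipped = psf[::-1, ::-1]
    for _ in range(iters):
        ratio = blurred / np.maximum(fft_conv(est, psf), 1e-12)
        est = est * fft_conv(ratio, flipped)
    return est

# Toy example: blur a bright square with a Gaussian PSF, then deblur.
y, x = np.mgrid[-3:4, -3:4]
psf = np.exp(-(x ** 2 + y ** 2) / (2 * 1.5 ** 2))
psf /= psf.sum()
sharp = np.zeros((32, 32))
sharp[12:20, 12:20] = 1.0
blurred = fft_conv(sharp, psf)
deblurred = richardson_lucy(blurred, psf)
err_blur = np.abs(blurred - sharp).mean()
err_deblur = np.abs(deblurred - sharp).mean()
```

Like the thesis algorithms, this depends critically on knowing (or estimating) the point spread function.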

  • 1624.
    Zins, Matthieu
    Linköping University, Department of Electrical Engineering, Computer Vision.
    Color Fusion and Super-Resolution for Time-of-Flight Cameras2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The recent emergence of time-of-flight cameras has opened up new possibilities in the world of computer vision. These compact sensors, capable of recording the depth of a scene in real time, are very advantageous in many applications, such as scene or object reconstruction. This thesis first addresses the problem of fusing depth data with color images. A complete process to combine a time-of-flight camera with a color camera is described and its accuracy is evaluated. The results show that satisfactory precision is reached and that the calibration step is very important.

    The second part of the work consists of applying super-resolution techniques to the time-of-flight camera in order to improve its low resolution. Different types of super-resolution algorithms exist, but this thesis focuses on the combination of multiple shifted depth maps. The proposed framework consists of two steps: registration and reconstruction. Different methods for each step are tested and compared according to the improvements reached in terms of level of detail, sharpness and noise reduction. The results obtained show that Lucas-Kanade performs best for the registration and that a non-uniform interpolation gives the best results in terms of reconstruction. Finally, a few suggestions are made about future work and extensions for our solutions.
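The registration step can be illustrated with phase correlation, a classical alternative to the Lucas-Kanade method the thesis favours; it recovers the integer circular shift between two equal-size maps:

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the circular shift of image a relative to image b."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    r = np.fft.ifft2(F / np.maximum(np.abs(F), 1e-12))
    dy, dx = np.unravel_index(np.argmax(np.abs(r)), r.shape)
    h, w = a.shape
    # Map peak coordinates to signed shifts.
    return (dy if dy <= h // 2 else dy - h,
            dx if dx <= w // 2 else dx - w)

rng = np.random.RandomState(0)
ref = rng.rand(32, 32)                        # toy "depth map"
moved = np.roll(ref, (3, -2), axis=(0, 1))    # shifted acquisition
shift = phase_correlation(moved, ref)
```

Once the shifts between the low-resolution depth maps are known, the reconstruction step can fuse them on a finer grid.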

  • 1625.
    Zitinski Elias, Paula
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Nyström, Daniel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Gooran, Sasan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Color separation for improved perceived image quality in terms of graininess and gamut2017In: Color Research and Application, ISSN 0361-2317, E-ISSN 1520-6378, Vol. 42, no 4, p. 486-497Article in journal (Refereed)
    Abstract [en]

    Multi-channel printing employs additional inks to improve the perceived image quality by reducing the graininess and augmenting the printer gamut. It also requires a color separation that deals with the one-to-many mapping problem imposed when using more than three inks. The proposed separation model incorporates a multilevel halftoning algorithm, reducing the complexity of the print characterization by grouping inks of similar hues in the same channel. In addition, a cost function is proposed that weights selected factors influencing the print and perceived image quality, namely color accuracy, graininess and ink consumption. The graininess perception is qualitatively assessed using S-CIELAB, a spatial low-pass filtering mimicking the human visual system. By applying it to a large set of samples, a generalized prediction quantifying the perceived graininess is carried out and incorporated as a criterion in the color separation. The results of the proposed model are compared with the separation giving the best colorimetric match, showing improvements in the perceived image quality in terms of graininess at a small cost of color accuracy and ink consumption.

  • 1626.
    Zobel, Valentin
    et al.
    Zuse Institute Berlin.
    Reininghaus, Jan
    Zuse Institute Berlin.
    Hotz, Ingrid
    Zuse Institute Berlin.
    Visualization of Two-Dimensional Symmetric Tensor Fields Using the Heat Kernel Signature2014In: Topological Methods in Data Analysis and Visualization: Theory, Algorithms, and Applications / [ed] Peer-Timo Bremer, Ingrid Hotz, Valerio Pascucci, Ronald Peikert, Springer, 2014, p. 249-262Chapter in book (Refereed)
    Abstract [en]

    We propose a method for visualizing two-dimensional symmetric positive definite tensor fields using the Heat Kernel Signature (HKS). The HKS is derived from the heat kernel and was originally introduced as an isometry-invariant shape signature. Each positive definite tensor field defines a Riemannian manifold when the tensor field is considered as a Riemannian metric. On this Riemannian manifold we can apply the definition of the HKS. The resulting scalar quantity is used for the visualization of tensor fields. The HKS is closely related to the Gaussian curvature of the Riemannian manifold, and the time parameter of the heat kernel allows a multiscale analysis in a natural way. In this way, the HKS represents field-related scale space properties, enabling a level-of-detail analysis of tensor fields. This makes the HKS an interesting new scalar quantity for tensor fields, which differs significantly from the usual tensor invariants like the trace or the determinant. A method for visualization and a numerical realization of the HKS for tensor fields is proposed in this chapter. To validate the approach we apply it to some simple illustrative examples, such as isolated critical points, and to a medical diffusion tensor data set.
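The HKS computation itself reduces to an eigen-decomposition. In this sketch, the graph Laplacian of a small path graph stands in for the discretised Laplace-Beltrami operator of the tensor-induced metric, and the time samples are arbitrary:

```python
import numpy as np

def heat_kernel_signature(L, times):
    """HKS(x, t) = sum_i exp(-lambda_i t) * phi_i(x)^2, from the
    eigenpairs (lambda_i, phi_i) of a discrete Laplacian L."""
    lam, phi = np.linalg.eigh(L)
    return np.stack(
        [(np.exp(-lam * t)[None, :] * phi ** 2).sum(axis=1) for t in times],
        axis=1)

# Laplacian of a 10-node path graph (a stand-in for a mesh Laplacian).
n = 10
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
L = np.diag(A.sum(axis=1)) - A
hks = heat_kernel_signature(L, times=[0.1, 1.0, 10.0])
```

Each row of `hks` is the multiscale signature of one node; varying the time parameter is what provides the level-of-detail analysis mentioned above.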

  • 1627.
    Zobel, Valentin
    et al.
    Leipzig University, Leipzig, Germany.
    Reininghaus, Jan
    Institute of Science and Technology Austria, Klosterneuburg, Austria.
    Hotz, Ingrid
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Visualizing Symmetric Indefinite 2D Tensor Fields using the Heat Kernel Signature2015In: Visualization and Processing of Tensors and Higher Order Descriptors for Multi-Valued Data / [ed] Ingrid Hotz, Thomas Schultz, Cham: Springer, 2015, p. 257-267Chapter in book (Refereed)
    Abstract [en]

    The Heat Kernel Signature (HKS) is a scalar quantity which is derived from the heat kernel of a given shape. Due to its robustness, isometry invariance, and multiscale nature, it has been successfully applied in many geometric applications. From a more general point of view, the HKS can be considered as a descriptor of the metric of a Riemannian manifold. Given a symmetric positive definite tensor field we may interpret it as the metric of some Riemannian manifold and thereby apply the HKS to visualize and analyze the given tensor data. In this paper, we propose a generalization of this approach that enables the treatment of indefinite tensor fields, like the stress tensor, by interpreting them as a generator of a positive definite tensor field. To investigate the usefulness of this approach we consider the stress tensor from the two-point-load model example and from a mechanical work piece.

  • 1628.
    Zografos, Vasileios
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Enhancing motion segmentation by combination of complementary affinities2012In: Proceedings of the 21st International Conference on Pattern Recognition, 2012, p. 2198-2201Conference paper (Other academic)
    Abstract [en]

    Complementary information, when combined in the right way, is capable of improving clustering and segmentation. In this paper, we show how it is possible to enhance motion segmentation accuracy with a very simple and inexpensive combination of complementary information, which comes from the column and row spaces of the same measurement matrix. We test our approach on the Hopkins155 dataset, where it outperforms all other state-of-the-art methods.
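The element-wise fusion of complementary affinities can be sketched on toy data. The two "views" below are invented stand-ins for the column- and row-space affinities of a real measurement matrix, and a two-way Fiedler-vector cut replaces a full spectral clustering:

```python
import numpy as np

def affinity(X, sigma=1.0):
    """Gaussian affinity between the columns of X."""
    d2 = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
    return np.exp(-d2 / (2 * sigma ** 2))

def fuse_and_cut(W1, W2):
    """Fuse two affinities by an element-wise product, then split into
    two groups by the sign of the graph Laplacian's Fiedler vector."""
    W = W1 * W2
    L = np.diag(W.sum(axis=1)) - W
    _, vecs = np.linalg.eigh(L)
    return (vecs[:, 1] > 0).astype(int)

rng = np.random.RandomState(0)
# Two toy point groups standing in for two motions' trajectories.
X = np.hstack([0.3 * rng.randn(3, 5), 0.3 * rng.randn(3, 5) + 2.0])
W_full = affinity(X)          # one "view" of the data
W_partial = affinity(X[:2])   # a second, complementary "view"
labels = fuse_and_cut(W_full, W_partial)
```

The product keeps an edge strong only where both views agree, which is the intuition behind combining complementary affinities.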

  • 1629.
    Zografos, Vasileios
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Lenz, Reiner
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    The Weibull manifold in low-level image processing: an application to automatic image focusing.2013In: Image and Vision Computing, ISSN 0262-8856, E-ISSN 1872-8138, Vol. 31, no 5, p. 401-417Article in journal (Refereed)
    Abstract [en]

    In this paper, we introduce a novel framework for low-level image processing and analysis. First, we process images with very simple, difference-based filter functions. Second, we fit the 2-parameter Weibull distribution to the filtered output. This maps each image to the 2D Weibull manifold. Third, we exploit the information geometry of this manifold and solve low-level image processing tasks as minimisation problems on point sets. As a proof-of-concept example, we examine the image autofocusing task. We propose appropriate cost functions together with a simple implicitly-constrained manifold optimisation algorithm and show that our framework compares very favourably against common autofocus methods from the literature. In particular, our approach exhibits the best overall performance in terms of combined speed and accuracy.
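The first two steps, difference filtering and a 2-parameter Weibull fit, can be sketched as below. The fixed-point shape estimator and the scale-as-focus-proxy reading are illustrative assumptions, and the manifold optimisation itself is omitted:

```python
import numpy as np

def weibull_fit(x, iters=50):
    """MLE of the 2-parameter Weibull (shape k, scale lam) via the
    standard fixed-point iteration on the shape parameter."""
    x = x[x > 0]
    lx = np.log(x)
    k = 1.0
    for _ in range(iters):
        xk = x ** k
        k = 1.0 / ((xk * lx).sum() / xk.sum() - lx.mean())
    lam = (x ** k).mean() ** (1.0 / k)
    return k, lam

def image_sharpness(img):
    """Difference-filter the image and fit a Weibull to the magnitudes;
    the scale parameter grows with edge strength (a focus proxy)."""
    d = np.abs(np.diff(img, axis=1)).ravel()
    return weibull_fit(d + 1e-9)[1]

# Toy images: a crude stand-in where "defocus" attenuates gradients.
x = np.linspace(0, 4 * np.pi, 64)
sharp = np.sin(np.outer(x, x[:32]))
blurry = 0.25 * sharp
sharp_score = image_sharpness(sharp)
blurry_score = image_sharpness(blurry)
```

In the paper the fitted (shape, scale) pair is a point on the Weibull manifold; autofocusing then becomes an optimisation over such points rather than a single scalar comparison.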

  • 1630.
    Zukas, Paulius
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Raising Awareness of Computer Vision: How can a single purpose focused CV solution be improved?2018Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    The concept of Computer Vision is not new or fresh. On the contrary, its ideas have been shared and worked on for almost 60 years. Many use cases have been found throughout the years and various systems developed, but there is always room for improvement. An observation was made that methods used today are generally focused on a single purpose and implement expensive technology, which could be improved. In this report, we carry out extensive research to find out if professionally sold, expensive software can be replaced by an off-the-shelf, low-cost solution entirely designed and developed in-house. To do that, we look at the history of Computer Vision, examples of applications and algorithms, and find general scenarios or computer vision problems which can be solved. We then take a step further and define solid use cases for each of the scenarios found. Finally, a prototype solution is designed and presented. After analysing the gathered results, we aim to convince the reader that such an application can be developed and work efficiently in various areas, saving businesses investments.

  • 1631.
    Ärleryd, Sebastian
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction.
    Realtime Virtual 3D Image of Kidney Using Pre-Operative CT Image for Geometry and Realtime US-Image for Tracking2014Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    In this thesis a method is presented to provide a 3D visualization of the human kidney and surrounding tissue during kidney surgery. The method takes advantage of the high detail of 3D X-Ray Computed Tomography (CT) and the high time resolution of Ultrasonography (US). By extracting the geometry from a single preoperative CT scan and animating the kidney by tracking its position in real-time US images, a 3D visualization of the surgical volume can be created.

    The first part of the project consisted of building an imaging phantom as a simplified model of the human body around the kidney. It consists of three parts: a shell part representing surrounding tissue, a kidney part representing the kidney soft tissue, and a kidney stone part embedded in the kidney part. The shell and soft-tissue kidney parts were cast with a mixture of the synthetic polymer Polyvinyl Alcohol (PVA) and water. The kidney stone part was cast with epoxy glue. All three parts were designed to look like human tissue in CT and US images.

    The method is a pipeline of stages that starts with acquiring the CT image as a 3D matrix of intensity values. This matrix is then segmented, resulting in separate polygonal 3D models for the three phantom parts. A scan of the model is then performed using US, producing a sequence of US images. A computer program extracts easily recognizable image feature points from the images in the sequence. Knowing the spatial position and orientation of a new US image in which these features can be found again allows the position of the kidney to be calculated. The presented method is realized as a proof-of-concept implementation of the pipeline. The implementation displays an interactive visualization where the kidney is positioned according to a user-selected US image scanned for image features. Using the proof-of-concept implementation as a guide, the accuracy of the proposed method is estimated to be bounded by the acquired image data. For high-resolution CT and US images, the accuracy can be in the order of a few millimeters.

  • 1632.
    Åhlen, Julia
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Color correction of underwater images based on estimation of diffuse attenuation coefficients2003In: Proceedings of 3rd conference for the promotion of research in IT, 2003Conference paper (Other scientific)
  • 1633.
    Åhlén, Julia
    Uppsala University, Interfaculty Units, Centre for Image Analysis.
    Colour Correction of Underwater Images Using Spectral Data2005Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    In the marine sciences there is sometimes a need to perform underwater photography. The optical properties of light cause severe quality problems for underwater photography. Light of different energies is absorbed at highly different rates under water, causing significant bluishness of the images. If the colour-dependent attenuation under water can be properly estimated, it should be possible to use computerised image processing to colour correct digital images using Beer’s Law.

    In this thesis we have developed such estimation and correction methods, which have become progressively more complicated and more accurate, giving successively better correction results. Estimating downwelling attenuation coefficients from multi- or hyperspectral data is the basis for automatic colour restoration of images taken under water. The results indicate that unique and precise coefficients can be obtained for each diving site.

    All standard digital cameras have built-in white balancing and colour enhancement functions designed to make the images as aesthetically pleasing as possible. These functions can in most cameras not be switched off, and the algorithms used are proprietary and undocumented. However, these enhancement functions can be estimated, and applying their reverse creates un-enhanced images. We show that our algorithms for underwater colour correction work significantly better when applied to such images.

    Finally, we have developed a method that uses point spectra from the spectrometer together with RGB colour images from a camera to generate pseudo-hyper-spectral images. Each of these can then be colour corrected. Finally, the images can be weighted together in the proportions needed to create new correct RGB images. This method is somewhat computationally demanding but gives very encouraging results.

    The algorithms and applications presented in this thesis show that automatic colour correction of underwater images can increase the credibility of data taken underwater for marine scientific purposes.
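The core correction in the thesis rests on inverting Beer's Law. A minimal sketch, with invented per-channel attenuation coefficients and a known uniform depth:

```python
import numpy as np

def correct_underwater(img, depth, atten):
    """Invert Beer's Law: radiance decays as exp(-c * d) per channel,
    so multiply each channel by exp(c * d) and clip to [0, 1]."""
    gain = np.exp(np.asarray(atten) * depth)
    return np.clip(img * gain[None, None, :], 0.0, 1.0)

surface = np.full((4, 4, 3), 0.6)              # toy scene colour at surface
atten = np.array([0.40, 0.12, 0.05])           # assumed R,G,B coefficients
depth = 3.0
underwater = surface * np.exp(-atten * depth)  # simulated attenuation
restored = correct_underwater(underwater, depth, atten)
```

Red is attenuated fastest, which is why uncorrected underwater images look blue; estimating the coefficients per diving site is the hard part the thesis addresses.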

  • 1634. Åhlén, Julia
    et al.
    Seipel, Stefan
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Automatic Water Body Extraction From Remote Sensing Images Using Entropy2015In: SGEM2015 Conference Proceedings, 2015, Vol. 2, p. 517-524Conference paper (Refereed)
    Abstract [en]

    This research focuses on automatic extraction of river banks and other inland waters from remote sensing images. There are no up-to-date accessible databases of rivers and most other water objects for modelling purposes. The main reason is that some regions are hard to access with traditional ground-truth techniques, so the boundaries of river banks are uncertain in many geographical positions. The other reason is the limitations of the widely applied method for extraction of water bodies called the normalized-difference water index (NDWI). There is a novel approach to extract water bodies based on pixel-level variability, or entropy; however, while it works somewhat satisfactorily on high spatial resolution images, there is no verification of its performance on moderate or low resolution images. Problems include the identification of mixed water pixels and of features such as roads, which are built adjacent to river banks and can thus be classified as rivers. In this work we propose automatic extraction of river banks using image entropy combined with NDWI identification. In this study only moderate spatial resolution Landsat TM images are tested. Areas of interest include both major river banks and inland lakes. Calculating entropy alone on such poor spatial resolution images will lead to misinterpretation of water bodies, which all exhibit the same small variation of pixel values as, e.g., some open or urban areas. Image entropy is thus calculated with a modification that incorporates a local normalization index, or variability coefficient. NDWI produces an image in which clear water exhibits a large difference compared to other land features. We present an algorithm that applies the NDWI prior to entropy processing, so that the bands used to calculate it are chosen in clear connection to water body features that are clearly discernible. As a result we visualize a clear segmentation of the water bodies from the remote sensing images and verify the coordinates with a given geographic reference.
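The NDWI-plus-entropy combination can be sketched on synthetic bands; the thresholds, window size and toy "water"/"land" data are assumptions, and the local normalization index is omitted:

```python
import numpy as np

def ndwi(green, nir):
    """Normalized-difference water index: positive over clear water."""
    return (green - nir) / (green + nir + 1e-9)

def local_entropy(img, win=3, bins=8):
    """Shannon entropy of the grey-level histogram in each window."""
    h, w = img.shape
    pad = win // 2
    p = np.pad(img, pad, mode='reflect')
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            hist, _ = np.histogram(p[i:i + win, j:j + win], bins=bins,
                                   range=(0.0, 1.0))
            q = hist[hist > 0] / hist.sum()
            out[i, j] = -(q * np.log2(q)).sum()
    return out

# Toy bands: left half is smooth "water" (green high, NIR low),
# right half is textured "land".
rng = np.random.RandomState(0)
green = np.full((16, 16), 0.5)
nir = np.full((16, 16), 0.1)
green[:, 8:] = 0.2 + 0.6 * rng.rand(16, 8)
nir[:, 8:] = 0.4 + 0.3 * rng.rand(16, 8)
water = (ndwi(green, nir) > 0.0) & (local_entropy(green) < 1.0)
```

The NDWI cue separates water spectrally, while the entropy cue rejects spectrally ambiguous but textured surfaces, which is the complementarity the abstract argues for.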

  • 1635.
    Åhlén, Julia
    et al.
    University of Gävle, Faculty of Engineering and Sustainable Development, Department of Industrial Development, IT and Land Management, Land management, GIS.
    Seipel, Stefan
    University of Gävle, Faculty of Engineering and Sustainable Development, Department of Industrial Development, IT and Land Management, Computer science. Uppsala University, Department of Information Technology, Sweden .
    Automatic water body extraction from remote sensing images using entropy2015In: Proceedings of the International Multidisciplinary Scientific GeoConference SGEM, 2015, Vol. 4, p. 517-524Conference paper (Refereed)
    Abstract [en]

    This research focuses on automatic extraction of river banks and other inland waters from remote sensing images. There are no up-to-date accessible databases of rivers and most other water objects for modelling purposes. The main reason is that some regions are hard to access with traditional ground-truth techniques, so the boundaries of river banks are uncertain in many geographical positions. The other reason is the limitations of the widely applied method for extraction of water bodies called the normalized-difference water index (NDWI). There is a novel approach to extract water bodies based on pixel-level variability, or entropy; however, while it works somewhat satisfactorily on high spatial resolution images, there is no verification of its performance on moderate or low resolution images. Problems include the identification of mixed water pixels and of features such as roads, which are built adjacent to river banks and can thus be classified as rivers. In this work we propose automatic extraction of river banks using image entropy combined with NDWI identification. In this study only moderate spatial resolution Landsat TM images are tested. Areas of interest include both major river banks and inland lakes. Calculating entropy alone on such poor spatial resolution images will lead to misinterpretation of water bodies, which all exhibit the same small variation of pixel values as, e.g., some open or urban areas. Image entropy is thus calculated with a modification that incorporates a local normalization index, or variability coefficient. NDWI produces an image in which clear water exhibits a large difference compared to other land features. We present an algorithm that applies the NDWI prior to entropy processing, so that the bands used to calculate it are chosen in clear connection to water body features that are clearly discernible. As a result we visualize a clear segmentation of the water bodies from the remote sensing images and verify the coordinates with a given geographic reference.

  • 1636.
    Åhlén, Julia
    et al.
    University of Gävle, Faculty of Engineering and Sustainable Development, Department of Industrial Development, IT and Land Management, Urban and regional planning/GIS-institute.
    Seipel, Stefan
    University of Gävle, Faculty of Engineering and Sustainable Development, Department of Industrial Development, IT and Land Management, Computer science.
    Early Recognition of Smoke in Digital Video2010In: Advances in Communications, Computers, Systems, Circuits and Devices: European Conference of Systems, ECS'10, European Conference of Circuits Technology and Devices, ECCTD'10, European Conference of Communications, ECCOM'10, ECCS'10 / [ed] Mladenov, V; Psarris, K; Mastorakis, N; Caballero, A; Vachtsevanos, G, Athens: World Scientific and Engineering Academy and Society, 2010, p. 301-306Conference paper (Refereed)
    Abstract [en]

    This paper presents a method for direct smoke detection from video without enhancement pre-processing steps. Smoke is characterized by transparency, gray color and irregularities in motion, which are hard to describe with basic image features. A method for robust smoke description using a color balancing algorithm and a turbulence calculation is presented in this work. Background extraction is used as a first step in processing; all moving objects are candidates for smoke. We make use of the Gray World algorithm and compare the results with the original video sequence in order to extract image features within a particular gray-scale interval. As a last step we calculate the shape complexity of turbulent phenomena and apply it to the incoming video stream. As a result we extract only smoke from the video. Features such as shadows, illumination changes and people will not be mistaken for smoke by the algorithm. This method gives an early indication of smoke in the observed scene.
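The Gray World step of the pipeline (background extraction and turbulence analysis omitted) can be sketched as a per-channel gain that equalises channel means:

```python
import numpy as np

def gray_world(img):
    """Scale each channel so all channel means equal the global mean,
    removing a global colour cast (the Gray World assumption)."""
    means = img.reshape(-1, 3).mean(axis=0)
    gain = means.mean() / np.maximum(means, 1e-9)
    return np.clip(img * gain[None, None, :], 0.0, 1.0)

rng = np.random.RandomState(0)
scene = rng.rand(8, 8, 3) * np.array([1.0, 0.6, 0.6])  # reddish cast
balanced = gray_world(scene)
channel_means = balanced.reshape(-1, 3).mean(axis=0)
```

Comparing the balanced frame against the original is what lets the method isolate the near-gray, transparent regions where smoke is a candidate.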

  • 1637.
    Åhlén, Julia
    et al.
    Högskolan i Gävle.
    Seipel, Stefan
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Knowledge Based Single Building Extraction and Recognition2014In: Proceedings WSEAS International Conference on Computer Engineering and Applications, 2014, 2014, p. 29-35Conference paper (Refereed)
    Abstract [en]

Building facade extraction is the primary step in the recognition process in outdoor scenes. It is also a challenging task since each building can be viewed from different angles or under different lighting conditions. In outdoor imagery, regions such as sky, trees and pavement cause interference for a successful building facade recognition. In this paper we propose a knowledge based approach to automatically segment out the whole facade or major parts of the facade from an outdoor scene. The found building regions are then subjected to a recognition process. The system is composed of two modules: a building facade region segmentation module and a facade recognition module. In the facade segmentation module, color processing and object position coordinates are used. In the facade recognition module, Chamfer metrics are applied. In a real time recognition scenario, the image with a building is first analyzed in order to extract the facade region, which is then compared to a database with feature descriptors in order to find a match. The results show that the recognition rate is dependent on the precision of the building extraction part, which in turn depends on the homogeneity of the colors of the facades.
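
Chamfer matching of the kind used in the recognition module can be sketched with a distance transform: the score averages, over all edge pixels of one shape, the distance to the nearest edge pixel of the other. The boolean edge maps and the use of `scipy` here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_score(edges_query, edges_ref):
    """Mean Chamfer distance: for every edge pixel of the query facade,
    take the distance to the nearest edge pixel of the reference.
    Lower scores mean a better match. Inputs are boolean edge maps."""
    # Distance from every pixel to the nearest reference edge pixel
    # (the transform measures distance to the nearest False, so invert).
    dist_to_ref = distance_transform_edt(~edges_ref)
    return dist_to_ref[edges_query].mean()
```

In a database lookup, the query edge map would be scored against each stored facade descriptor and the lowest Chamfer score taken as the match.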

  • 1638.
    Åhlén, Julia
    et al.
    Högskolan i Gävle.
    Seipel, Stefan
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    TIME-SPACE VISUALISATION OF AMUR RIVER CHANNEL CHANGES DUE TO FLOODING DISASTER2014In: Proceedings of International Multidisciplinary Scientific GeoScience Conference (SGEM), 2014, 2014Conference paper (Refereed)
    Abstract [en]

The analysis of flooding levels is a highly complex temporal and spatial assessment task that involves estimation of distances between references in geographical space as well as estimation of instances along the time-line that coincide with given spatial locations. This work aims to interactively explore changes of the Amur River boundaries caused by the severe flooding in September 2013. In our analysis of river bank changes we use satellite imagery (Landsat 7) to extract parts belonging to the Amur River, covering the time interval July 2003 until February 2014. Image data is pre-processed using low level image processing techniques prior to visualization. Pre-processing serves to extract information about the boundaries of the river and to transform it into a vectorized format, suitable as input to the subsequent visualization. We develop visualization tools to explore the spatial and temporal relationships in the change of the river banks. In particular, the visualization shall allow for exploring specific geographic locations and their proximity to the river/floods at arbitrary times. We propose a time-space visualization that emanates from edge detection, morphological operations and boundary statistics on Landsat 2D imagery in order to extract the borders of the Amur River. For the visualization we use the time-space cube metaphor. It is based on a 3D rectilinear context, where the 2D geographical coordinate system is extended with a time-axis pointing along the 3rd Cartesian axis. Such a visualization facilitates analysis of the channel shape of the Amur River and thus enables conclusions regarding the defined problem. As a result we demonstrate our time-space visualization for the river Amur and, using some geographical point data as a reference, we suggest an adequate method of interpolation or imputation that can be employed to estimate values at a given location and time.

  • 1639.
    Åhlén, Julia
    et al.
    Högskolan i Gävle.
    Seipel, Stefan
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Liu, Fei
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Evaluation of the Automatic methods for Building Extraction2014In: International Journal of Computers and Communications, ISSN 2074-1294, Vol. 8, p. 171-176Article in journal (Refereed)
  • 1640.
    Åhlén Julia, Sundgren David
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Bottom Reflectance Influence on a Color Correction Algorithm for Underwater Images2003In: 13th Scandinavian Conference, SCIA 2003 Göteborg, Sweden, June 29-July 2, 2003, 2003, p. 922-926Conference paper (Refereed)
  • 1641.
    Åhlén, Julia
    et al.
    Uppsala universitet.
    Sundgren, David
    Stockholms universitet.
    Bottom Reflectance Influence on a Color Correction Algorithm for Underwater Images2003In: Proceedings of the 13th Scandinavinan Conference on Image Analysis / [ed] Bigun, J., Gustavsson, T., Berlin: Springer , 2003, p. 922-926Conference paper (Refereed)
    Abstract [en]

Diminishing the negative effects that the water column introduces on digital underwater images is the aim of a color correction algorithm presented by the authors in a previous paper. The present paper describes experimental results and a set of calculations for determining the impact of bottom reflectance on the algorithm's performance. The approach is based on the estimation of the relative reflectance of various bottom types such as sand, bleached corals and algae. We describe the adverse effects of extremely low and high bottom reflectances on the algorithm.

  • 1642.
    Åhlén, Julia
    et al.
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Sundgren, David
    KTH.
    Bengtsson, Ewert
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Pre-Processing of Underwater Images Taken in shallow Water for Color Reconstruction Purposes2005In: IASTED Proceeding (479): IASTED 7th Conference on Signal and Image Processing - 2005, 2005Conference paper (Refereed)
    Abstract [en]

Coral reefs are monitored with different techniques in order to examine their health. Digital cameras, which provide an economically defendable tool for marine scientists to collect underwater data, tend to produce bluish images due to severe absorption of light at longer wavelengths. In this paper we study the possibilities of correcting for this color distortion through image processing. The decrease of red light by depth can be predicted by Beer's Law. Another parameter that has been taken into account is the image enhancement functions built into the camera. We use a spectrometer and a reflectance standard to obtain the data needed to approximate the joint effect of these functions. This model is used to pre-process the underwater images taken by digital cameras so that the red, green and blue channels show correct values before the images are subjected to correction for the effects of the water column through application of Beer's Law. This process is fully automatic and the amount of processed images is limited only by the speed of the computer system. Experimental results show that the proposed method works well for correcting images taken at different depths with two different cameras.

  • 1643.
    Åhlén, Julia
    et al.
    University of Gävle, Department of Mathematics, Natural and Computer Sciences, Ämnesavdelningen för datavetenskap.
    Sundgren, David
    University of Gävle, Department of Mathematics, Natural and Computer Sciences, Ämnesavdelningen för matematik och statistik.
    Bengtsson, Ewert
    Pre-Processing of Underwater Images Taken in Shallow Waters for Color Reconstruction Purposes2005In: Proceedings of the 7th IASTED International Conference on Signal and Image Processing, 2005Conference paper (Refereed)
    Abstract [en]

    Coral reefs are monitored with different techniques in order to examine their health. Digital cameras, which provide an economically defendable tool for marine scientists to collect underwater data, tend to produce bluish images due to severe absorption of light at longer wavelengths. In this paper we study the possibilities of correcting for this color distortion through image processing. The decrease of red light by depth can be predicted by Beer's law. Another parameter that has to be taken into account is the image enhancement functions built into the camera. We use a spectrometer and a reflectance standard to obtain the data needed to approximate the joint effect of these functions. This model is used to pre-process the underwater images taken by digital cameras so that the red, green and blue channels show correct values before the images are subjected to correction for the effects of water column through application of Beer's law. This process is fully automatic and the amount of processed images is limited only by the speed of computer system. Experimental results show that the proposed method works well for correcting images taken at different depths with two different cameras.
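
The Beer's-law correction described above can be sketched as follows: absorption attenuates each channel exponentially with depth, E(z) = E0 · exp(−k·z), so the correction multiplies by exp(k·z). The per-channel attenuation coefficients `K` below are illustrative placeholders; the paper derives the actual values from spectrometer measurements:

```python
import numpy as np

# Assumed diffuse attenuation coefficients [1/m] for the R, G, B
# channels; red attenuates fastest. Illustrative values only — real
# coefficients must be measured for the water body in question.
K = np.array([0.40, 0.07, 0.04])

def beer_correct(img, depth_m):
    """Undo water-column absorption predicted by Beer's law:
    E(z) = E0 * exp(-k z)  =>  E0 = E(z) * exp(k z).
    `img` is a float RGB image in [0, 1], `depth_m` the depth in meters."""
    restored = img.astype(np.float64) * np.exp(K * depth_m)
    return np.clip(restored, 0.0, 1.0)
```

In the paper's pipeline this step runs after compensating for the camera's built-in enhancement functions, so that channel values entering the correction are physically meaningful.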

  • 1644.
    Åhlén, Julia
    et al.
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Sundgren, David
    KTH.
    Lindell, Tommy
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Bengtsson, Ewert
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Dissolved Organic Matters Impact on Colour2005In: Image Analysis: 14th Scandinavian Conference, SCIA 2005, 2005, p. 1148-1156Conference paper (Refereed)
    Abstract [en]

The natural properties of the water column usually affect underwater imagery by suppressing high-energy light. In applications such as color correction of underwater images, estimation of water column parameters is crucial. Diffuse attenuation coefficients are estimated and used for further processing of underwater data. The coefficients give information on how fast light of different wavelengths decreases with increasing depth. Based on exact depth measurements and data from a spectrometer, the downwelling irradiance is calculated. Chlorophyll concentration and a yellow substance factor contribute to a great variety of attenuation coefficient values at different depths. By taking advantage of variations in depth, a method is presented to estimate the influence of dissolved organic matter and chlorophyll on color correction. Attenuation coefficients that depend on the concentration of dissolved organic matter in water give an indication of how well any spectral band is suited for a color correction algorithm.
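
The diffuse attenuation coefficient can be estimated from downwelling irradiance measured at two depths; this two-point estimate is the standard textbook form and an assumption about the paper's exact procedure:

```python
import numpy as np

def diffuse_attenuation(e_z1, e_z2, z1, z2):
    """Diffuse attenuation coefficient K_d [1/m] from downwelling
    irradiance measured at two depths z1 < z2, assuming
        E(z) = E(z1) * exp(-K_d * (z - z1))
    which gives
        K_d = ln(E(z1) / E(z2)) / (z2 - z1)."""
    return np.log(e_z1 / e_z2) / (z2 - z1)

# Example: irradiance halving behaviour consistent with K_d = 0.1 /m.
kd = diffuse_attenuation(1.0, np.exp(-0.5), 2.0, 7.0)
```

Repeating the estimate per spectral band then shows which bands are least affected by dissolved organic matter, i.e. best suited for color correction.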

  • 1645.
    Åkerlind, Christina
    et al.
    Linköping University, Department of Physics, Chemistry and Biology. Linköping University, Faculty of Science & Engineering. FOI, Linköping, Sweden.
    Fagerström, Jan
    FOI, Linköping, Sweden.
    Hallberg, Tomas
    FOI, Linköping, Sweden.
    Kariis, Hans
    FOI, Linköping, Sweden.
    Evaluation criteria for spectral design of camouflage2015In: Proc. SPIE 9653, Target and Background Signatures / [ed] Karin U. Stein; Ric H. M. A. Schleijpen, SPIE - International Society for Optical Engineering, 2015, Vol. 9653, p. Art.no: 9653-2-Conference paper (Refereed)
    Abstract [en]

In the development of visual (VIS) and infrared (IR) camouflage for signature management, the aim is to design the surface properties of an object to spectrally match or adapt to a background and thereby minimize the contrast perceived by a threatening sensor. The so-called "ladder model" relates the requirements for task measure of effectiveness with surface structure properties through the steps signature effectiveness and object signature. It is intended to link material properties via platform signature to military utility and vice versa. Spectral design of a surface intends to give it a desired wavelength dependent optical response to fit a specific application of interest. Six evaluation criteria were stated, with the aim to aid the process of putting requirements on camouflage and of evaluating it. The six criteria correspond to properties such as reflectance, gloss, emissivity, and degree of polarization, as well as dynamic properties and broadband or multispectral properties. These criteria have previously been exemplified on different kinds of materials and investigated separately. Anderson and Åkerlind further point out that the six criteria have rarely been considered or described all together in one and the same publication. The specific level of requirement of the different properties must be specified individually for each specific situation and environment to minimize the contrast between target and background. The criteria or properties are not totally independent of one another; how they are correlated is part of the theme of this paper. However, prioritization has been made due to limited space, so not all of the interconnections between the six criteria are considered in this report.
The ladder step prior to digging into the different material composition possibilities and the choice of suitable materials and structures (not covered here) includes the object signature and the decision of what the spectral response should be, when intended for a specific environment. The chosen spectral response should give a low detection probability (DP). How detection probability connects to image analysis tools and the implementation of the six criteria is part of this work.
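
As a hedged illustration of the reflectance criterion, a band-wise contrast between target and background reflectance spectra can be computed as below. This Michelson-style metric is a generic stand-in, not the paper's actual evaluation measure:

```python
import numpy as np

def spectral_contrast(target_refl, background_refl):
    """Band-wise Michelson contrast between target and background
    reflectance spectra, in [0, 1]; 0 means a perfect spectral match
    (the camouflage design goal), 1 means maximal contrast."""
    t = np.asarray(target_refl, dtype=np.float64)
    b = np.asarray(background_refl, dtype=np.float64)
    return np.abs(t - b) / (t + b + 1e-12)
```

A signature requirement could then be phrased as keeping this contrast below a sensor-dependent detection threshold in every band of interest.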

  • 1646.
    Åström, Freddie
    et al.
    Heidelberg University, Germany.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Baravdish, George
    Linköping University, Department of Science and Technology, Communications and Transport Systems. Linköping University, Faculty of Science & Engineering.
    Mapping-Based Image Diffusion2017In: Journal of Mathematical Imaging and Vision, ISSN 0924-9907, E-ISSN 1573-7683, Vol. 57, no 3, p. 293-323Article in journal (Refereed)
    Abstract [en]

    In this work, we introduce a novel tensor-based functional for targeted image enhancement and denoising. Via explicit regularization, our formulation incorporates application-dependent and contextual information using first principles. Few works in literature treat variational models that describe both application-dependent information and contextual knowledge of the denoising problem. We prove the existence of a minimizer and present results on tensor symmetry constraints, convexity, and geometric interpretation of the proposed functional. We show that our framework excels in applications where nonlinear functions are present such as in gamma correction and targeted value range filtering. We also study general denoising performance where we show comparable results to dedicated PDE-based state-of-the-art methods.

  • 1647.
    Åström, Freddie
    et al.
    Heidelberg Collaboratory for Image Processing Heidelberg University Heidelberg, Germany.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Scharr, Hanno
    BG-2: Plant Sciences Forschungszentrum Jülich 52425, Jülich, Germany.
    Adaptive sharpening of multimodal distributions2015In: Colour and Visual Computing Symposium (CVCS), 2015 / [ed] Marius Pedersen and Jean-Baptiste Thomas, IEEE , 2015, p. 1-4Conference paper (Refereed)
    Abstract [en]

    In this work we derive a novel framework rendering measured distributions into approximated distributions of their mean. This is achieved by exploiting constraints imposed by the Gauss-Markov theorem from estimation theory, being valid for mono-modal Gaussian distributions. It formulates the relation between the variance of measured samples and the so-called standard error, being the standard deviation of their mean. However, multi-modal distributions are present in numerous image processing scenarios, e.g. local gray value or color distributions at object edges, or orientation or displacement distributions at occlusion boundaries in motion estimation or stereo. Our method not only aims at estimating the modes of these distributions together with their standard error, but at describing the whole multi-modal distribution. We utilize the method of channel representation, a kind of soft histogram also known as population codes, to represent distributions in a non-parametric, generic fashion. Here we apply the proposed scheme to general mono- and multimodal Gaussian distributions to illustrate its effectiveness and compliance with the Gauss-Markov theorem.
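
The channel representation with the truncated cosine basis mentioned above can be sketched as follows. The unit channel spacing, the width of 1.5, and the simple weighted-mean decoder are illustrative choices; exact decoders for this basis exist but are omitted here:

```python
import numpy as np

def channel_encode(x, centers, width=1.5):
    """Soft-histogram (channel) encoding of a scalar x with the
    truncated cos^2 basis: b(d) = cos^2(pi*d / (2*width)) for
    |d| < width, else 0, where d = x - center."""
    d = x - centers
    b = np.cos(np.pi * d / (2 * width)) ** 2
    b[np.abs(d) >= width] = 0.0
    return b

def channel_decode(c, centers):
    """Approximate local decoding: weighted mean of the channels
    around the strongest coefficient."""
    i = int(np.argmax(c))
    lo, hi = max(i - 1, 0), min(i + 2, len(c))
    return float((centers[lo:hi] * c[lo:hi]).sum() / c[lo:hi].sum())

# Summing encodings of two samples yields a multi-modal channel
# vector with one peak per mode — the property used for multi-modal
# distributions in the paper.
centers = np.arange(6.0)
c = channel_encode(1.0, centers) + channel_encode(4.0, centers)
```
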

  • 1648.
    Öfjäll, Kristoffer
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Adaptive Supervision Online Learning for Vision Based Autonomous Systems2016Doctoral thesis, monograph (Other academic)
    Abstract [en]

Driver assistance systems in modern cars now show clear steps towards autonomous driving, and improvements are presented at a steady pace. The total number of sensors has also decreased since the vehicles of the initial DARPA challenge, which more resembled a pile of sensors with a car underneath. Still, anyone driving a tele-operated toy using a video link demonstrates that a single camera provides enough information about the surrounding world.

    Most lane assist systems are developed for highway use and depend on visible lane markers. However, lane markers may not be visible due to snow or wear, and there are roads without lane markers. With a slightly different approach, autonomous road following can be obtained on almost any kind of road. Using realtime online machine learning, a human driver can demonstrate driving on a road type unknown to the system and after some training, the system can seamlessly take over. The demonstrator system presented in this work has shown capability of learning to follow different types of roads as well as learning to follow a person. The system is based solely on vision, mapping camera images directly to control signals.  

    Such systems need the ability to handle multiple-hypothesis outputs as there may be several plausible options in similar situations. If there is an obstacle in the middle of the road, the obstacle can be avoided by going on either side. However the average action, going straight ahead, is not a viable option. Similarly, at an intersection, the system should follow one road, not the average of all roads.  

To this end, an online machine learning framework is presented where inputs and outputs are represented using the channel representation. The learning system is structurally simple and computationally light, based on neuropsychological ideas presented by Donald Hebb over 60 years ago. Nonetheless the system has shown a capability to learn advanced tasks. Furthermore, the structure of the system permits a statistical interpretation where a non-parametric representation of the joint distribution of input and output is generated. Prediction generates the conditional distribution of the output, given the input.

    The statistical interpretation motivates the introduction of priors. In cases with multiple options, such as at intersections, a prior can select one mode in the multimodal distribution of possible actions. In addition to the ability to learn from demonstration, a possibility for immediate reinforcement feedback is presented. This allows for a system where the teacher can choose the most appropriate way of training the system, at any time and at her own discretion.  

    The theoretical contributions include a deeper analysis of the channel representation. A geometrical analysis illustrates the cause of decoding bias commonly present in neurologically inspired representations, and measures to counteract it. Confidence values are analyzed and interpreted as evidence and coherence. Further, the use of the truncated cosine basis function is motivated.  

    Finally, a selection of applications is presented, such as autonomous road following by online learning and head pose estimation. A method founded on the same basic principles is used for visual tracking, where the probabilistic representation of target pixel values allows for changes in target appearance.

  • 1649.
    Öfjäll, Kristoffer
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    LEAP, A Platform for Evaluation of Control Algorithms2010Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Most people are familiar with the BRIO labyrinth game and the challenge of guiding the ball through the maze. The goal of this project was to use this game to create a platform for evaluation of control algorithms. The platform was used to evaluate a few different controlling algorithms, both traditional automatic control algorithms as well as algorithms based on online incremental learning.

The game was fitted with servo actuators for tilting the maze. A camera together with computer vision algorithms was used to estimate the state of the game. The evaluated controlling algorithm had the task of calculating a proper control signal, given the estimated state of the game.

    The evaluated learning systems used traditional control algorithms to provide initial training data. After initial training, the systems learned from their own actions and after a while they outperformed the controller used to provide initial training.
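
A traditional controller of the kind used to provide initial training data can be sketched as a PD loop mapping the ball's position error to a tilt command; the gains, sampling rate, and interface are illustrative assumptions, not the thesis's actual controller:

```python
class PDController:
    """Minimal PD controller: plate-tilt command from the ball's
    position error and its finite-difference derivative.
    Gains and sample time are illustrative placeholders."""

    def __init__(self, kp=1.0, kd=0.5, dt=1.0 / 30.0):
        self.kp, self.kd, self.dt = kp, kd, dt
        self.prev_error = 0.0

    def step(self, position, setpoint):
        """One control cycle: returns the tilt command u = kp*e + kd*de/dt."""
        error = setpoint - position
        d_error = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.kd * d_error
```

Running such a controller for a while produces the (state, control) pairs that the learning systems then use as initial training data.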

  • 1650.
    Öfjäll, Kristoffer
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Online Learning for Robot Vision2014Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

In tele-operated robotics applications, the primary information channel from the robot to its human operator is a video stream. For autonomous robotic systems however, a much larger selection of sensors is employed, although the most relevant information for the operation of the robot is still available in a single video stream. The issue lies in autonomously interpreting the visual data and extracting the relevant information, something humans and animals perform strikingly well. On the other hand, humans have great difficulty expressing what they are actually looking for on a low level, suitable for direct implementation on a machine. For instance objects tend to be already detected when the visual information reaches the conscious mind, with almost no clues remaining regarding how the object was identified in the first place. This became apparent already when Seymour Papert gathered a group of summer workers to solve the computer vision problem 48 years ago [35].

Artificial learning systems can overcome this gap between the level of human visual reasoning and low-level machine vision processing. If a human teacher can provide examples of what is to be extracted, and if the learning system is able to extract the gist of these examples, the gap is bridged. There are however some special demands on a learning system for it to perform successfully in a visual context. First, low level visual input is often of high dimensionality, such that the learning system needs to handle large inputs. Second, visual information is often ambiguous, such that the learning system needs to be able to handle multi-modal outputs, i.e. multiple hypotheses. Typically, the relations to be learned are non-linear and there is an advantage if data can be processed at video rate, even after presenting many examples to the learning system. In general, there seems to be a lack of such methods.

This thesis presents systems for learning perception-action mappings for robotic systems with visual input. A range of problems are discussed, such as vision based autonomous driving, inverse kinematics of a robotic manipulator and controlling a dynamical system. Operational systems demonstrating solutions to these problems are presented. Two different approaches for providing training data are explored, learning from demonstration (supervised learning) and explorative learning (self-supervised learning). A novel learning method fulfilling the stated demands is presented. The method, qHebb, is based on associative Hebbian learning on data in channel representation. Properties of the method are demonstrated on a vision-based autonomously driving vehicle, where the system learns to directly map low-level image features to control signals. After an initial training period, the system seamlessly continues autonomously. In a quantitative evaluation, the proposed online learning method performed comparably with state of the art batch learning methods.
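
Associative Hebbian learning on channel-encoded data can be sketched as below. The outer-product update with exponential decay is a simplified stand-in for the actual published qHebb rule, and the cos^2 encoding parameters are illustrative choices:

```python
import numpy as np

def encode(x, centers, width=1.5):
    """Channel encoding with the truncated cos^2 basis."""
    d = x - centers
    b = np.cos(np.pi * d / (2 * width)) ** 2
    b[np.abs(d) >= width] = 0.0
    return b

class HebbianChannelLearner:
    """Sketch of associative Hebbian learning between channel-encoded
    input and output (in the spirit of qHebb; the update rule here is
    a plain outer-product Hebbian rule with decay, not the exact one)."""

    def __init__(self, in_centers, out_centers, decay=0.01):
        self.in_centers = np.asarray(in_centers, dtype=np.float64)
        self.out_centers = np.asarray(out_centers, dtype=np.float64)
        self.C = np.zeros((len(self.in_centers), len(self.out_centers)))
        self.decay = decay

    def train(self, x, y):
        """Hebbian update: strengthen co-active input/output channels."""
        a = encode(x, self.in_centers)
        b = encode(y, self.out_centers)
        self.C = (1.0 - self.decay) * self.C + np.outer(a, b)

    def predict_dist(self, x):
        """Non-parametric conditional channel distribution of y given x."""
        u = encode(x, self.in_centers) @ self.C
        s = u.sum()
        return u / s if s > 0 else u

    def predict(self, x):
        """Approximate decoding: weighted mean around the strongest mode."""
        u = self.predict_dist(x)
        i = int(np.argmax(u))
        lo, hi = max(i - 1, 0), min(i + 2, len(u))
        w = u[lo:hi]
        return float((self.out_centers[lo:hi] * w).sum() / w.sum())
```

Decoding around the strongest mode (rather than the global mean) is what lets such a system pick one road at an intersection instead of the average of all roads.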
