Results 1 - 50 of 146
  • 1.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    A survey on periocular biometrics research (2016). In: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 82, part 2, p. 92-105. Article in journal (Refereed)
    Abstract [en]

    Periocular refers to the facial region in the vicinity of the eye, including eyelids, lashes and eyebrows. While face and irises have been extensively studied, the periocular region has emerged as a promising trait for unconstrained biometrics, following demands for increased robustness of face or iris systems. With a surprisingly high discrimination ability, this region can be easily obtained with existing setups for face and iris, and the requirement of user cooperation can be relaxed, thus facilitating interaction with biometric systems. It is also available over a wide range of distances even when the iris texture cannot be reliably obtained (low resolution) or under partial face occlusion (close distances). Here, we review the state of the art in periocular biometrics research. A number of aspects are described, including: (i) existing databases, (ii) algorithms for periocular detection and/or segmentation, (iii) features employed for recognition, (iv) identification of the most discriminative regions of the periocular area, (v) comparison with iris and face modalities, (vi) soft biometrics (gender/ethnicity classification), and (vii) impact of gender transformation and plastic surgery on recognition accuracy. This work is expected to provide insight into the most relevant issues in periocular biometrics, giving comprehensive coverage of the existing literature and current state of the art. © 2015 Elsevier B.V. All rights reserved.

  • 2.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    An Overview of Periocular Biometrics (2017). In: Iris and Periocular Biometric Recognition / [ed] Christian Rathgeb & Christoph Busch, London: The Institution of Engineering and Technology, 2017, p. 29-53. Chapter in book (Refereed)
    Abstract [en]

    Periocular biometrics refers specifically to the externally visible skin region of the face that surrounds the eye socket. Its utility is especially pronounced when the iris or the face cannot be properly acquired, it being the ocular modality with the least constrained acquisition process. It is available over a wide range of distances, even under partial face occlusion (close distance) or low-resolution iris (long distance), making it very suitable for unconstrained or uncooperative scenarios. It also avoids the need for iris segmentation, an issue in difficult images. In such situations, identifying a suspect where only the periocular region is visible is one of the toughest real-world challenges in biometrics. The periocular region is so rich in identity information that the whole face can even be reconstructed from images of the periocular region alone. The technological shift to mobile devices has also resulted in many identity-sensitive applications becoming prevalent on these devices.

  • 3.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Best Regions for Periocular Recognition with NIR and Visible Images (2014). In: 2014 IEEE International Conference on Image Processing (ICIP), Piscataway, NJ: IEEE Press, 2014, p. 4987-4991. Conference paper (Refereed)
    Abstract [en]

    We evaluate the most useful regions for periocular recognition. For this purpose, we employ our periocular algorithm based on retinotopic sampling grids and Gabor analysis of the spectrum. We use both NIR and visible iris images. The best regions are selected via Sequential Forward Floating Selection (SFFS). The iris neighborhood (including sclera and eyelashes) is found to be the best region with NIR data, while the surrounding skin texture (which is over-illuminated in NIR images) is the most discriminative region in the visible range. To the best of our knowledge, only one work in the literature has evaluated the influence of different regions on the performance of periocular recognition algorithms. Our results are along the same lines, despite the use of completely different matchers. We also evaluate an iris texture matcher, providing fusion results with our periocular system as well. © 2014 IEEE.
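
    The SFFS selection step mentioned in this abstract can be sketched as follows; this is a minimal illustration only, with an invented diminishing-returns score standing in for the matcher's verification accuracy (the weights and all names are hypothetical, not the paper's):

```python
import numpy as np

# Toy stand-in for the objective: in the paper, the score of a subset of
# periocular regions would be the verification accuracy of the matcher on
# those regions; here a synthetic diminishing-returns score is used.
WEIGHTS = np.array([0.30, 0.05, 0.25, 0.10, 0.20])

def score(subset):
    return float(np.sqrt(sum(WEIGHTS[i] for i in subset))) if subset else 0.0

def sffs(n_features, target_size):
    """Sequential Forward Floating Selection: greedy forward inclusion,
    each step followed by conditional backward exclusion steps."""
    selected = []
    while len(selected) < target_size:
        # Forward: add the candidate that yields the best score.
        remaining = [i for i in range(n_features) if i not in selected]
        selected.append(max(remaining, key=lambda i: score(selected + [i])))
        # Floating backward: drop a feature while doing so improves the score.
        improved = len(selected) > 2
        while improved:
            improved = False
            for i in list(selected):
                trial = [j for j in selected if j != i]
                if score(trial) > score(selected):
                    selected, improved = trial, True
                    break
    return selected

best_regions = sffs(n_features=5, target_size=3)
```

    With the toy weights above, the three highest-weight "regions" are selected; the backward step only fires when removing a feature strictly improves the score.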

  • 4.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Biometric Recognition Using Periocular Images (2013). Conference paper (Other academic)
    Abstract [en]

    We present a new system for biometric recognition using periocular images based on retinotopic sampling grids and Gabor analysis of the local power spectrum at different frequencies and orientations. A number of aspects are studied, including: 1) grid adaptation to the dimensions of the target eye vs. grids of constant size, 2) comparison between circular- and rectangular-shaped grids, 3) use of Gabor magnitude vs. phase vectors for recognition, and 4) rotation compensation between query and test images. Results show that our system achieves competitive verification rates compared with other periocular recognition approaches. We also show that top verification rates can be obtained without rotation compensation, allowing this step to be removed for computational efficiency. Also, the performance is not affected substantially if we use a grid of fixed dimensions, and is even better in certain situations, avoiding the need for accurate detection of the iris region.
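
    As a rough illustration of the retinotopic-sampling-plus-Gabor idea (a sketch under assumed parameters, not the authors' implementation), one can sample Gabor magnitude responses on a log-polar grid around the eye center:

```python
import numpy as np

def gabor_kernel(size, freq, theta, sigma):
    """Complex Gabor kernel: Gaussian envelope times a complex sinusoid."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return env * np.exp(2j * np.pi * freq * xr)

def retinotopic_grid(center, n_rings, n_points, r_min, r_max):
    """Circular sampling grid: points on concentric rings with
    log-spaced radii, densest near the center (fovea-like)."""
    radii = np.geomspace(r_min, r_max, n_rings)
    angles = np.linspace(0, 2 * np.pi, n_points, endpoint=False)
    cx, cy = center
    return [(int(round(cx + r * np.cos(a))), int(round(cy + r * np.sin(a))))
            for r in radii for a in angles]

def periocular_features(img, center, size=9, freqs=(0.1, 0.2)):
    """Gabor magnitude at each grid point, frequency and orientation."""
    k = size // 2
    feats = []
    for fx, fy in retinotopic_grid(center, 4, 8, 5, 20):
        patch = img[fy - k:fy + k + 1, fx - k:fx + k + 1]
        for f in freqs:
            for th in (0, np.pi / 2):
                g = gabor_kernel(size, f, th, sigma=2.0)
                feats.append(np.abs(np.sum(patch * g)))
    return np.array(feats)

img = np.random.default_rng(1).random((64, 64))
v = periocular_features(img, center=(32, 32))
```

    The grid size, frequencies and orientations are illustrative; 4 rings of 8 points with 4 filters yield a 128-dimensional magnitude vector here.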

  • 5.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Exploiting Periocular and RGB Information in Fake Iris Detection (2014). In: 2014 37th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO): 26-30 May 2014, Opatija, Croatia: Proceedings / [ed] Petar Biljanovic, Zeljko Butkovic, Karolj Skala, Stjepan Golubic, Marina Cicin-Sain, Vlado Sruk, Slobodan Ribaric, Stjepan Gros, Boris Vrdoljak, Mladen Mauher & Goran Cetusic, Rijeka: Croatian Society for Information and Communication Technology, Electronics and Microelectronics - MIPRO, 2014, p. 1354-1359. Conference paper (Refereed)
    Abstract [en]

    Fake iris detection has been studied by several researchers. However, to date, the experimental setup has been limited to near-infrared (NIR) sensors, which provide grey-scale images. This work makes use of images captured in the visible range with color (RGB) information. We employ Gray-Level Co-Occurrence textural features and SVM classifiers for the task of fake iris detection. The best features are selected with the Sequential Forward Floating Selection (SFFS) algorithm. To the best of our knowledge, this is the first work evaluating spoofing attacks using color iris images in the visible range. Our results demonstrate that the use of features from the three color channels clearly outperforms the accuracy obtained from the luminance (gray-scale) image. Also, the R channel is found to be the best individual channel. Lastly, we analyze the effect of extracting features from selected (eye or periocular) regions only. The best performance is obtained when GLCM features are extracted from the whole image, highlighting that both the iris and the surrounding periocular region are relevant for fake iris detection. An added advantage is that no accurate iris segmentation is needed. This work is relevant due to the increasing prevalence of more relaxed scenarios where iris acquisition using NIR light is unfeasible (e.g. distant acquisition or mobile devices), which are putting high pressure on the development of algorithms capable of working with visible light. © 2014 MIPRO.
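
    A minimal numpy sketch of GLCM feature extraction per RGB channel (the quantization level, pixel offset and the two Haralick-style features are illustrative choices, and the SVM classification stage is omitted):

```python
import numpy as np

def glcm(channel, levels=8, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset, pure numpy.
    Pixel values are first quantized to `levels` gray levels."""
    q = (channel.astype(float) / 256 * levels).astype(int).clip(0, levels - 1)
    a = q[: q.shape[0] - dy, : q.shape[1] - dx]   # reference pixels
    b = q[dy:, dx:]                               # offset neighbors
    m = np.zeros((levels, levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1)       # count co-occurrences
    return m / m.sum()                            # normalize to probabilities

def glcm_features(rgb_img):
    """Contrast and homogeneity from each RGB channel's GLCM.
    In the paper, Haralick-style features like these feed an SVM."""
    feats = []
    i, j = np.indices((8, 8))
    for c in range(3):
        p = glcm(rgb_img[:, :, c])
        feats.append(np.sum(p * (i - j) ** 2))           # contrast
        feats.append(np.sum(p / (1.0 + np.abs(i - j))))  # homogeneity
    return np.array(feats)

img = np.random.default_rng(2).integers(0, 256, (32, 32, 3))
f = glcm_features(img)
```

    Using all three channels triples the feature vector relative to a luminance-only image, which is the comparison the abstract reports on.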

  • 6.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Eye Detection by Complex Filtering for Periocular Recognition (2014). In: 2nd International Workshop on Biometrics and Forensics (IWBF2014): Valletta, Malta (27-28th March 2014), Piscataway, NJ: IEEE Press, 2014, article id 6914250. Conference paper (Refereed)
    Abstract [en]

    We present a novel system to localize the eye position based on symmetry filters. By using a 2D separable filter tuned to detect circular symmetries, detection is done with a few 1D convolutions. The detected eye center is used as input to our periocular algorithm based on retinotopic sampling grids and Gabor analysis of the local power spectrum. This setup is evaluated with two databases of iris data, one acquired with a close-up NIR camera, and another in visible light with a webcam. The periocular system shows high resilience to inaccuracies in the position of the detected eye center. The density of the sampling grid can also be reduced without sacrificing too much accuracy, allowing additional computational savings. We also evaluate an iris texture matcher based on 1D Log-Gabor wavelets. Despite the poorer performance of the iris matcher with the webcam database, its fusion with the periocular system results in improved performance. © 2014 IEEE.
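
    The computational trick mentioned here, that a separable 2D filter can be applied as two 1D convolutions, can be verified directly (a generic symmetric kernel is used below as a stand-in for the paper's complex symmetry filters):

```python
import numpy as np

rng = np.random.default_rng(3)
img = rng.random((16, 16))

# Separable kernel: outer product of two 1D profiles. Both profiles are
# symmetric, so convolution and correlation coincide below.
kx = np.array([1.0, 4.0, 6.0, 4.0, 1.0]); kx /= kx.sum()
ky = kx.copy()
K = np.outer(ky, kx)  # equivalent full 2D kernel

# Two 1D passes: convolve every row with kx, then every column with ky.
rows = np.apply_along_axis(lambda r: np.convolve(r, kx, mode="valid"), 1, img)
sep = np.apply_along_axis(lambda c: np.convolve(c, ky, mode="valid"), 0, rows)

# Reference: direct 2D sliding-window filtering with the outer-product
# kernel (no flip needed since K is symmetric).
H, W = img.shape; k = K.shape[0]
ref = np.array([[np.sum(img[i:i + k, j:j + k] * K)
                 for j in range(W - k + 1)] for i in range(H - k + 1)])
```

    For an n x n kernel, the separable path costs O(2n) multiplications per pixel instead of O(n^2), which is the speed-up the abstract alludes to.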

  • 7.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent Systems´ laboratory.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Fake Iris Detection: A Comparison Between Near-Infrared and Visible Images (2014). In: Proceedings: 10th International Conference on Signal-Image Technology and Internet-Based Systems, SITIS 2014 / [ed] Kokou Yetongnon, Albert Dipanda & Richard Chbeir, Piscataway, NJ: IEEE Computer Society, 2014, p. 546-553. Conference paper (Refereed)
    Abstract [en]

    Fake iris detection has been studied so far using near-infrared (NIR) sensors, which provide grey-scale images, i.e. with luminance information only. Here, we incorporate into the analysis images captured in the visible range, with color information, and perform comparative experiments between the two types of data. We employ Gray-Level Co-occurrence textural features and SVM classifiers. These features analyze various image properties related to contrast, pixel regularity, and pixel co-occurrence statistics. We select the best features with the Sequential Forward Floating Selection (SFFS) algorithm. We also study the effect of extracting features from selected (eye or periocular) regions only. Our experiments are done with fake samples obtained from printed images, which are then presented to the same sensor as the real ones. Results show that fake images captured in the NIR range are easier to detect than visible images (even if we downsample NIR images to equate the average size of the iris region between the two databases). We also observe that the best performance with both sensors can be obtained with features extracted from the whole image, showing that not only the eye region, but also the surrounding periocular texture is relevant for fake iris detection. An additional source of improvement with the visible sensor also comes from the use of the three RGB channels, in comparison with the luminance image only. A further analysis also reveals that some features are better suited to one particular sensor than to the other. © 2014 IEEE

  • 8.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Halmstad University submission to the First ICB Competition on Iris Recognition (ICIR2013) (2013). Other (Other academic)
  • 9.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent Systems´ laboratory.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent Systems´ laboratory.
    Iris Boundaries Segmentation Using the Generalized Structure Tensor: A Study on the Effects of Image Degradation (2012). In: Biometrics: Theory, Applications and Systems (BTAS), 2012 IEEE Fifth International Conference on, Piscataway, N.J.: IEEE Press, 2012, p. 426-431, article id 6374610. Conference paper (Refereed)
    Abstract [en]

    We present a new iris segmentation algorithm based on the Generalized Structure Tensor (GST), which also includes an eyelid detection step. It is compared with traditional segmentation systems based on Hough transform and integro-differential operators. Results are given using the CASIA-IrisV3-Interval database. Segmentation performance under different degrees of image defocus and motion blur is also evaluated. Reported results show the effectiveness of the proposed algorithm, with similar performance to the others in pupil detection, and clearly better performance for sclera detection at all levels of degradation. Verification results using 1D Log-Gabor wavelets are also given, showing the benefits of the eyelid removal step. These results point out the validity of the GST as an alternative to other iris segmentation systems. © 2012 IEEE.

  • 10.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS).
    Bigun, Josef
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS).
    Iris Pupil Detection by Structure Tensor Analysis (2011). Conference paper (Other academic)
    Abstract [en]

    This paper presents a pupil detection/segmentation algorithm for iris images based on Structure Tensor analysis. Eigenvalues of the structure tensor matrix have been observed to be high at pupil boundaries and specular reflections of iris images. We exploit this fact to detect the specular reflection region and the boundary of the pupil in a sequential manner. Experimental results are given using the CASIA-IrisV3-Interval database (249 contributors, 396 different eyes, 2,639 iris images). Results show that our algorithm works especially well in detecting the specular reflections (98.98% success rate), and pupil boundary detection is correctly done in 84.24% of the images.
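
    A minimal sketch of the underlying observation, structure tensor eigenvalues peaking at strong boundaries, on a synthetic dark-disk "pupil" (the window size, box smoothing and image are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def structure_tensor_eigs(img, w=2):
    """Per-pixel structure tensor J = window * [gx^2, gx*gy; gx*gy, gy^2]
    and its eigenvalues; both become large at strong boundaries such as
    the pupil edge or specular highlights."""
    gy, gx = np.gradient(img.astype(float))
    def smooth(a):
        # Box-window smoothing of tensor components (Gaussian in practice).
        k = np.ones(2 * w + 1) / (2 * w + 1)
        a = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, a)
        return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, a)
    jxx, jxy, jyy = smooth(gx * gx), smooth(gx * gy), smooth(gy * gy)
    tr, det = jxx + jyy, jxx * jyy - jxy**2
    d = np.sqrt(np.maximum(tr**2 / 4 - det, 0))
    return tr / 2 + d, tr / 2 - d  # lambda1 >= lambda2 per pixel

# Synthetic "pupil": dark disk on a bright background.
y, x = np.mgrid[:64, :64]
img = np.where((x - 32) ** 2 + (y - 32) ** 2 < 15 ** 2, 0.0, 1.0)
l1, _ = structure_tensor_eigs(img)
```

    The dominant eigenvalue is zero in the flat disk interior and peaks along the disk boundary, which is the cue the paper's sequential detection exploits.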

  • 11.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent Systems´ laboratory.
    Bigun, Josef
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent Systems´ laboratory.
    Iris Segmentation Using the Generalized Structure Tensor (2012). Conference paper (Other academic)
    Abstract [en]

    We present a new iris segmentation algorithm based on the Generalized Structure Tensor (GST). We compare this approach with traditional iris segmentation systems based on Hough transform and integro-differential operators. Results are given using the CASIA-IrisV3-Interval database with respect to a segmentation made manually by a human expert. The proposed algorithm outperforms the baseline approaches, pointing out the validity of the GST as an alternative to classic iris segmentation systems. We also detect the cross positions between the eyelids and the outer iris boundary. Verification results using a publicly available iris recognition system based on 1D Log-Gabor wavelets are also given, showing the benefits of the eyelid removal step.

  • 12.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Near-infrared and visible-light periocular recognition with Gabor features using frequency-adaptive automatic eye detection (2015). In: IET Biometrics, ISSN 2047-4938, E-ISSN 2047-4946, Vol. 4, no 2, p. 74-89. Article in journal (Refereed)
    Abstract [en]

    Periocular recognition has gained attention recently due to demands for increased robustness of face or iris in less controlled scenarios. We present a new system for eye detection based on complex symmetry filters, which has the advantage of not needing training. Also, separability of the filters allows faster detection via one-dimensional convolutions. This system is used as input to a periocular algorithm based on retinotopic sampling grids and Gabor spectrum decomposition. The evaluation framework is composed of six databases acquired both with near-infrared and visible sensors. The experimental setup is complemented with four iris matchers, used for fusion experiments. The eye detection system presented shows very high accuracy with near-infrared data, and reasonably good accuracy with one visible database. Regarding the periocular system, it exhibits great robustness to small errors in locating the eye centre, as well as to scale changes of the input image. The density of the sampling grid can also be reduced without sacrificing accuracy. Lastly, despite the poorer performance of the iris matchers with visible data, fusion with the periocular system can provide an improvement of more than 20%. The six databases used have been manually annotated, with the annotation made publicly available. © The Institution of Engineering and Technology 2015.

  • 13.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Periocular Biometrics: Databases, Algorithms and Directions (2016). In: 2016 4th International Workshop on Biometrics and Forensics (IWBF): Proceedings: 3-4 March, 2016, Limassol, Cyprus, Piscataway, NJ: IEEE, 2016, article id 7449688. Conference paper (Refereed)
    Abstract [en]

    Periocular biometrics has been established as an independent modality due to concerns about the performance of iris or face systems in uncontrolled conditions. Periocular refers to the facial region in the eye vicinity, including eyelids, lashes and eyebrows. It is available over a wide range of acquisition distances, representing a trade-off between the whole face (which can be occluded at close distances) and the iris texture (which does not have enough resolution at long distances). Since the periocular region appears in face or iris images, it can also be used in conjunction with these modalities. Features extracted from the periocular region have also been used successfully for gender and ethnicity classification, and to study the impact of gender transformation or plastic surgery on recognition performance. This paper presents a review of the state of the art in periocular biometric research, providing insight into the most relevant issues and giving thorough coverage of the existing literature. Future research trends are also briefly discussed. © 2016 IEEE.

  • 14.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent Systems´ laboratory.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Periocular Recognition Using Retinotopic Sampling and Gabor Decomposition (2012). In: Computer Vision – ECCV 2012: Workshops and demonstrations: Florence, Italy, October 7-13, 2012, Proceedings. Part II / [ed] Fusiello, Andrea; Murino, Vittorio; Cucchiara, Rita, Berlin: Springer, 2012, Vol. 7584, p. 309-318. Conference paper (Refereed)
    Abstract [en]

    We present a new system for biometric recognition using periocular images based on retinotopic sampling grids and Gabor analysis of the local power spectrum. A number of aspects are studied, including: 1) grid adaptation to the dimensions of the target eye vs. grids of constant size, 2) comparison between circular- and rectangular-shaped grids, 3) use of Gabor magnitude vs. phase vectors for recognition, 4) rotation compensation between query and test images, and 5) comparison with an iris machine expert. Results show that our system achieves competitive verification rates compared with other periocular recognition approaches. We also show that top verification rates can be obtained without rotation compensation, allowing this step to be removed for computational efficiency. Also, the performance is not affected substantially if we use a grid of fixed dimensions, and is even better in certain situations, avoiding the need for accurate detection of the iris region. © 2012 Springer-Verlag.

  • 15.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Quality Factors Affecting Iris Segmentation and Matching (2013). In: Proceedings – 2013 International Conference on Biometrics, ICB 2013 / [ed] Julian Fierrez, Ajay Kumar, Mayank Vatsa, Raymond Veldhuis & Javier Ortega-Garcia, Piscataway, N.J.: IEEE conference proceedings, 2013, article id 6613016. Conference paper (Refereed)
    Abstract [en]

    Image degradations can affect the different processing steps of iris recognition systems. Although several quality factors have been proposed for iris images, their specific effect on segmentation accuracy is often overlooked, with most efforts focused on their impact on recognition accuracy. Accordingly, we evaluate the impact of 8 quality measures on the performance of iris segmentation. We use a database acquired with a close-up iris sensor with a built-in quality checking process. Despite the latter, we report differences in behavior, with some measures clearly predicting the segmentation performance, while others give inconclusive results. Recognition experiments with two matchers also show that segmentation and matching performance are not necessarily affected by the same factors. The resilience of one matcher to segmentation inaccuracies also suggests that segmentation errors due to low image quality are not necessarily revealed by the matcher, pointing out the importance of evaluating segmentation accuracy separately. © 2013 IEEE.

  • 16.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Englund, Cristofer
    RISE Viktoria, Gothenburg, Sweden.
    Expression Recognition Using the Periocular Region: A Feasibility Study (2018). Conference paper (Refereed)
    Abstract [en]

    This paper investigates the feasibility of using the periocular region for expression recognition. Most works have tried to solve this by analyzing the whole face. Periocular is the facial region in the immediate vicinity of the eye. It has the advantage of being available over a wide range of distances and under partial face occlusion, thus making it suitable for unconstrained or uncooperative scenarios. We evaluate five different image descriptors on a dataset of 1,574 images from 118 subjects. The experimental results show an average/overall accuracy of 67.0%/78.0% by fusion of several descriptors. While this accuracy is still behind that attained with full-face methods, it is noteworthy that our initial approach employs only one frame to predict the expression, in contrast to the state of the art, which exploits several orders of magnitude more data, including spatio-temporal data that is often not available.

  • 17.
    Alonso-Fernandez, Fernando
    et al.
    ATVS/Biometric Recognition Group, Escuela Politecnica Superior, Univ. Autonoma de Madrid, Spain.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS).
    Fierrez, Julian
    ATVS/Biometric Recognition Group, Escuela Politecnica Superior, Univ. Autonoma de Madrid, Spain.
    Fronthaler, Hartwig
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS).
    Kollreider, Klaus
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS).
    Ortega-Garcia, Javier
    ATVS/Biometric Recognition Group, Escuela Politecnica Superior, Univ. Autonoma de Madrid, Spain.
    Fingerprint Recognition (2009). In: Guide to Biometric Reference Systems and Performance Evaluation / [ed] Dijana Petrovska-Delacrétaz, Gérard Chollet, Bernadette Dorizzi, London: Springer London, 2009, p. 51-88. Chapter in book (Other academic)
    Abstract [en]

    First, an overview of the state of the art in fingerprint recognition is presented, including current issues and challenges. Fingerprint databases and evaluation campaigns are also summarized. This is followed by the description of the BioSecure Benchmarking Framework for Fingerprints, using the NIST Fingerprint Image Software (NFIS2), the publicly available MCYT-100 database, and two evaluation protocols. Two research systems are compared within the proposed framework. The evaluated systems follow different approaches for fingerprint processing and are discussed in detail. Fusion experiments involving different combinations of the presented systems are also given. The NFIS2 software is also used to obtain the fingerprint scores for the multimodal experiments conducted within the BioSecure Multimodal Evaluation Campaign (BMEC'2007) reported in Chap. 11.

  • 18.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Farrugia, Reuben A.
    University of Malta, Msida, Malta.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Eigen-patch iris super-resolution for iris recognition improvement (2015). In: 2015 23rd European Signal Processing Conference (EUSIPCO), Piscataway, NJ: IEEE Press, 2015, p. 76-80, article id 7362348. Conference paper (Refereed)
    Abstract [en]

    Low image resolution will be a predominant factor in iris recognition systems as they evolve towards more relaxed acquisition conditions. Here, we propose a super-resolution technique to enhance iris images based on Principal Component Analysis (PCA) Eigen-transformation of local image patches. Each patch is reconstructed separately, allowing better quality of enhanced images by preserving local information and reducing artifacts. We validate the system using a database of 1,872 near-infrared iris images. Results show the superiority of the presented approach over bilinear or bicubic interpolation, with the eigen-patch method being more resilient to image resolution reduction. We also perform recognition experiments with an iris matcher based on 1D Log-Gabor wavelets, demonstrating that verification rates degrade more rapidly with bilinear or bicubic interpolation. © 2015 IEEE
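
    The eigen-transformation of local patches can be sketched as a PCA projection/reconstruction per patch (random data stands in for training patches; in the actual pipeline, coefficients estimated from the low-resolution patch would be applied to a high-resolution eigen-patch basis):

```python
import numpy as np

rng = np.random.default_rng(5)

# Training set of flattened patches, stand-ins for the co-located
# training patches the method would learn its basis from.
patches = rng.random((200, 36))          # 200 patches of 6x6 pixels
mean = patches.mean(axis=0)
# Eigen-patches: principal components of the centered training patches.
_, _, Vt = np.linalg.svd(patches - mean, full_matrices=False)

def eigen_patch_reconstruct(patch, n_components):
    """Project a patch onto the leading eigen-patches and reconstruct.
    A single basis is used here just to show the projection step."""
    basis = Vt[:n_components]
    coeffs = basis @ (patch - mean)       # PCA coefficients of the patch
    return mean + basis.T @ coeffs        # reconstruction from the basis

p = patches[0]
approx = eigen_patch_reconstruct(p, n_components=36)
```

    With the full 36-component basis the reconstruction is exact; truncating the basis trades fidelity for noise suppression, and reconstructing each patch separately is what preserves local detail in the method.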

  • 19.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Farrugia, Reuben A.
    University of Malta, Msida, Malta.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Improving Very Low-Resolution Iris Identification Via Super-Resolution Reconstruction of Local Patches2017In: 2017 International Conference of the Biometrics Special Interest Group (BIOSIG) / [ed] Arslan Brömme, Christoph Busch, Antitza Dantcheva, Christian Rathgeb & Andreas Uhl, Bonn: Gesellschaft für Informatik, 2017, Vol. P-270, article id 8053512Conference paper (Refereed)
    Abstract [en]

    Relaxed acquisition conditions in iris recognition systems have significant effects on the quality and resolution of acquired images, which can severely affect performance if not addressed properly. Here, we evaluate two trained super-resolution algorithms in the context of iris identification. They are based on reconstruction of local image patches, where each patch is reconstructed separately using its own optimal reconstruction function. We employ a database of 1,872 near-infrared iris images (with 163 different identities for identification experiments) and three iris comparators. The trained approaches are substantially superior to bilinear or bicubic interpolations, with one of the comparators providing a Rank-1 performance of ∼88% with images of only 15×15 pixels, and an identification rate of 95% with a hit list size of only 8 identities. © 2017 Gesellschaft fuer Informatik.

  • 20.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Farrugia, Reuben A.
    University of Malta, Msida, Malta.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Iris Super-Resolution Using Iterative Neighbor Embedding2017In: 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops / [ed] Lisa O’Conner, Los Alamitos: IEEE Computer Society, 2017, p. 655-663Conference paper (Refereed)
    Abstract [en]

    Iris recognition research is heading towards enabling more relaxed acquisition conditions. This has effects on the quality and resolution of acquired images, severely affecting the accuracy of recognition systems if not tackled appropriately. In this paper, we evaluate a super-resolution algorithm to reconstruct iris images based on iterative neighbor embedding of local image patches, which tries to represent input low-resolution patches while preserving the geometry of the original high-resolution space. To this end, the geometries of the low- and high-resolution manifolds are jointly considered during the reconstruction process. We validate the system with a database of 1,872 near-infrared iris images, while fusion of two iris comparators has been adopted to improve recognition performance. The presented approach is substantially superior to bilinear/bicubic interpolations at very low resolutions, and it also outperforms a previous PCA-based iris reconstruction approach which only considers the geometry of the low-resolution manifold during the reconstruction process. © 2017 IEEE
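    The core neighbor-embedding step can be illustrated with a small sketch. This shows only a single LLE-style reconstruction: weights are solved in the low-resolution patch space and transferred to the high-resolution neighbours; the paper's method additionally iterates so that the high-resolution geometry is considered as well. Names are hypothetical, not the authors' code.

```python
import numpy as np

def neighbor_embedding_sr(lr_patch, lr_train, hr_train, k=5):
    """One neighbor-embedding step for a single patch (simplified sketch)."""
    # k nearest neighbours of the input patch in the low-resolution space
    d2 = ((lr_train - lr_patch) ** 2).sum(axis=1)
    idx = np.argsort(d2)[:k]
    N = lr_train[idx]                        # (k, d) neighbour patches
    # weights w minimising ||lr_patch - w @ N|| subject to sum(w) = 1
    D = N - lr_patch
    G = D @ D.T                              # local Gram matrix
    G += 1e-6 * np.trace(G) * np.eye(k)      # regularise for stability
    w = np.linalg.solve(G, np.ones(k))
    w /= w.sum()
    # transfer the same weights to the high-resolution neighbours
    return w @ hr_train[idx]
```

    The sum-to-one constraint makes the weights invariant to translations of the patch space, which is the standard locally-linear-embedding formulation this family of methods builds on.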

  • 21.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Farrugia, Reuben A.
    University of Malta, Msida, Malta.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Reconstruction of Smartphone Images for Low Resolution Iris Recognition2015In: 2015 IEEE International Workshop on Information Forensics and Security (WIFS), Piscataway, NJ: IEEE Press, 2015, article id 7368600Conference paper (Refereed)
    Abstract [en]

    As iris systems evolve towards a more relaxed acquisition, low image resolution will be a predominant issue. In this paper we evaluate a super-resolution method to reconstruct iris images based on Eigen-transformation of local image patches. Each patch is reconstructed separately, allowing better quality of enhanced images by preserving local information. We employ a database of 560 images captured in the visible spectrum with two smartphones. The presented approach is superior to bilinear or bicubic interpolation, especially at lower resolutions. We also carry out recognition experiments with six iris matchers, showing that better performance can be obtained at low resolutions with the proposed eigen-patch reconstruction, with fusion of only two systems pushing the EER below 5-8% for down-sampling factors up to an image size of only 13×13. © 2015 IEEE.

  • 22.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Farrugia, Reuben A.
    University of Malta, Msida, Malta.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Fierrez, Julian
    Universidad Autonoma de Madrid, Madrid, Spain.
    Gonzalez-Sosa, Ester
    Nokia Bell-Labs, Madrid, Spain.
    A Survey of Super-Resolution in Iris Biometrics with Evaluation of Dictionary-Learning2019In: IEEE Access, E-ISSN 2169-3536, Vol. 7, p. 6519-6544Article in journal (Refereed)
    Abstract [en]

    The lack of resolution has a negative impact on the performance of image-based biometrics. While many generic super-resolution methods have been proposed to restore low-resolution images, they usually aim to enhance their visual appearance. However, an overall visual enhancement of biometric images does not necessarily correlate with a better recognition performance. Reconstruction approaches thus need to incorporate specific information from the target biometric modality to effectively improve recognition performance. This paper presents a comprehensive survey of iris super-resolution approaches proposed in the literature. We have also adapted an Eigen-patches reconstruction method based on PCA Eigen-transformation of local image patches. The structure of the iris is exploited by building a patch-position-dependent dictionary. In addition, image patches are restored separately, having their own reconstruction weights. This allows the solution to be locally optimized, helping to preserve local information. To evaluate the algorithm, we degraded high-resolution images from the CASIA Interval V3 database. Different restorations were considered, with 15 × 15 pixels being the smallest resolution evaluated. To the best of our knowledge, this is among the smallest resolutions employed in the literature. The experimental framework is complemented with six publicly available iris comparators, which were used to carry out biometric verification and identification experiments. Experimental results show that the proposed method significantly outperforms both bilinear and bicubic interpolation at very low resolution. A number of comparators attain an impressive Equal Error Rate as low as 5%, and a Top-1 accuracy of 77-84%, when considering iris images of only 15 × 15 pixels. These results clearly demonstrate the benefit of using trained super-resolution techniques to improve the quality of iris images prior to matching. © 2018, Emerald Publishing Limited.

  • 23.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Farrugia, Reuben A.
    University of Malta, Msida, Malta.
    Fierrez, Julian
    Universidad Autonoma de Madrid, Madrid, Spain.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Super-Resolution for Selfie Biometrics: Introduction and Application to Face and Iris2019In: Selfie Biometrics / [ed] Ajita Rattani, Arun Ross, Springer, 2019Chapter in book (Refereed)
  • 24.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Farrugia, Reuben
    University of Malta, Msida, Malta.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Learning-Based Local-Patch Resolution Reconstruction of Iris Smart-phone Images2017Conference paper (Refereed)
    Abstract [en]

    Application of ocular biometrics in mobile and at-a-distance environments still has several open challenges, with the lack of quality and resolution being an evident issue that can severely affect performance. In this paper, we evaluate two trained image reconstruction algorithms in the context of smartphone biometrics. They are based on the use of coupled dictionaries to learn the mapping relations between low- and high-resolution images. In addition, reconstruction is made in local overlapped image patches, where up-scaling functions are modelled separately for each patch, allowing local details to be better preserved. The experimental setup is complemented with a database of 560 images captured with two different smartphones, and two iris comparators employed for verification experiments. We show that the trained approaches are substantially superior to bilinear or bicubic interpolations at very low resolutions (images of 13×13 pixels). Under such challenging conditions, an EER of ∼7% can be achieved using individual comparators, which is further pushed down to 4-6% after the fusion of the two systems. © 2017 IEEE

  • 25.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Farrugia, Reuben
    University of Malta, Msida, Malta.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Very Low-Resolution Iris Recognition Via Eigen-Patch Super-Resolution and Matcher Fusion2016In: 2016 IEEE 8th International Conference on Biometrics Theory, Applications and Systems (BTAS), Piscataway: IEEE, 2016, article id 7791208Conference paper (Refereed)
    Abstract [en]

    Current research in iris recognition is moving towards enabling more relaxed acquisition conditions. This has effects on the quality of acquired images, with low resolution being a predominant issue. Here, we evaluate a super-resolution algorithm used to reconstruct iris images based on Eigen-transformation of local image patches. Each patch is reconstructed separately, allowing better quality of enhanced images by preserving local information. Contrast enhancement is used to improve the reconstruction quality, while matcher fusion has been adopted to improve iris recognition performance. We validate the system using a database of 1,872 near-infrared iris images. The presented approach is superior to bilinear or bicubic interpolation, especially at lower resolutions, and the fusion of the two systems pushes the EER below 5% for down-sampling factors up to an image size of only 13×13.

  • 26.
    Alonso-Fernandez, Fernando
    et al.
    University de Madrid, Madrid, Spain.
    Fierrez, J.
    Universidad Autonoma de Madrid.
    Ortega-Garcia, J.
    Universidad Autónoma de Madrid.
    Gonzalez-Rodriguez, J.
    Universidad Autónoma de Madrid.
    Fronthaler, Hartwig
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS).
    Kollreider, Klaus
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS).
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent systems (IS-lab).
    A Comparative Study of Fingerprint Image-Quality Estimation Methods2007In: IEEE Transactions on Information Forensics and Security, ISSN 1556-6013, E-ISSN 1556-6021, Vol. 2, no 4, p. 734-743Article in journal (Refereed)
    Abstract [en]

    One of the open issues in fingerprint verification is the lack of robustness against image-quality degradation. Poor-quality images result in spurious and missing features, thus degrading the performance of the overall system. Therefore, it is important for a fingerprint recognition system to estimate the quality and validity of the captured fingerprint images. In this work, we review existing approaches for fingerprint image-quality estimation, including the rationale behind the published measures and visual examples showing their behavior under different quality conditions. We have also tested a selection of fingerprint image-quality estimation algorithms. For the experiments, we employ the BioSec multimodal baseline corpus, which includes 19 200 fingerprint images from 200 individuals acquired in two sessions with three different sensors. The behavior of the selected quality measures is compared, showing high correlation between them in most cases. The effect of low-quality samples on the verification performance is also studied for a widely available minutiae-based fingerprint matching system.

  • 27.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent Systems´ laboratory.
    Fierrez, Julian
    Universidad Autonoma de Madrid, Madrid, Spain.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Quality Measures in Biometric Systems2015In: Encyclopedia of Biometrics / [ed] Stan Z. Li & Anil K. Jain, New York: Springer Science+Business Media B.V., 2015, 2, p. 1287-1297Chapter in book (Refereed)
    Abstract [en]

    This is an excerpt from the content

    Synonyms

    Quality assessment; Biometric quality; Quality-based processing

    Definition

    Since the establishment of biometrics as a specific research area in the late 1990s, the biometric community has focused its efforts in the development of accurate recognition algorithms [1]. Nowadays, biometric recognition is a mature technology that is used in many applications, offering greater security and convenience than traditional methods of personal recognition [2].

    During the past few years, biometric quality measurement has become an important concern after a number of studies and technology benchmarks demonstrated how the performance of biometric systems is heavily affected by the quality of biometric signals [3]. This operationally important step has nevertheless been under-researched compared to the primary feature extraction and pattern recognition tasks [4]. One of the main challenges facing biometric technologies is performance degradation in less controlled situations, and the problem of biometric quality measurement has become even more pressing with the proliferation of portable handheld devices with at-a-distance and on-the-move acquisition capabilities. These will require robust algorithms capable of handling a range of changing characteristics [2]. Another important example is forensics, in which intrinsic operational factors further degrade recognition performance.

    There are a number of factors that can affect the quality of biometric signals, and a quality measure can play numerous roles in the context of biometric systems. This section summarizes the state of the art in the biometric quality problem, giving an overall framework of the different challenges involved.

  • 28.
    Alonso-Fernandez, Fernando
    et al.
    ATVS, Escuela Politecnica Superior, Campus de Cantoblanco, Avda. Francisco Tomas y Valiente 11, 28049 Madrid, Spain.
    Fierrez-Aguilar, Julian
    ATVS, Escuela Politecnica Superior, Campus de Cantoblanco, Avda. Francisco Tomas y Valiente 11, 28049 Madrid, Spain.
    Fronthaler, Hartwig
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS).
    Kollreider, Klaus
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS).
    Ortega-Garcia, Javier
    ATVS, Escuela Politecnica Superior, Campus de Cantoblanco, Avda. Francisco Tomas y Valiente 11, 28049 Madrid, Spain.
    Gonzalez-Rodriguez, Joaquin
    ATVS, Escuela Politecnica Superior, Campus de Cantoblanco, Avda. Francisco Tomas y Valiente 11, 28049 Madrid, Spain.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Combining multiple matchers for fingerprint verification: A case study in biosecure network of excellence2007In: Annales des télécommunications, ISSN 0003-4347, E-ISSN 1958-9395, Vol. 62, no 1-2, p. 62-82Article in journal (Refereed)
    Abstract [en]

    We report on experiments for the fingerprint modality conducted during the First BioSecure Residential Workshop. Two reference systems for fingerprint verification have been tested together with two additional non-reference systems. These systems follow different approaches of fingerprint processing and are discussed in detail. Fusion experiments involving different combinations of the available systems are presented. The experimental results show that the best recognition strategy involves both minutiae-based and correlation-based measurements. Regarding the fusion experiments, the best relative improvement is obtained when fusing systems that are based on heterogeneous strategies for feature extraction and/or matching. The best combinations of two/three/four systems always include the best individual systems whereas the best verification performance is obtained when combining all the available systems.

  • 29.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Mikaelyan, Anna
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Compact Multi-scale Periocular Recognition Using SAFE Feature2016In: Proceedings - International Conference on Pattern Recognition, Washington: IEEE Communications Society, 2016, p. 1455-1460, article id 7899842Conference paper (Refereed)
    Abstract [en]

    In this paper, we present a new approach for periocular recognition based on the Symmetry Assessment by Feature Expansion (SAFE) descriptor, which encodes the presence of various symmetric curve families around image key points. We use the sclera center as a single key point for feature extraction, highlighting the object-like identity properties that concentrate at this unique point of the eye. As demonstrated, such discriminative properties can be encoded with a reduced set of symmetric curves. Experiments are done with a database of periocular images captured with a digital camera. We test our system against reference periocular features, achieving top performance with a considerably smaller feature vector (given by the use of a single key point). All the systems tested also show a nearly steady correlation between acquisition distance and performance, and they are also able to cope well when enrolment and test images are not captured at the same distance. Fusion experiments among the available systems are also provided. © 2016 IEEE

  • 30.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Mikaelyan, Anna
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS).
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Compact Multi-scale Periocular Recognition Using SAFE Features2017In: Proceedings of the 23rd International Conference On Pattern Recognition (Icpr), IEEE Computer Society, 2017, p. 1455-1460, article id 7899842Conference paper (Refereed)
    Abstract [en]

    In this paper, we present a new approach for periocular recognition based on the Symmetry Assessment by Feature Expansion (SAFE) descriptor, which encodes the presence of various symmetric curve families around image key points. We use the sclera center as a single key point for feature extraction, highlighting the object-like identity properties that concentrate at this unique point of the eye. As demonstrated, such discriminative properties can be encoded with a reduced set of symmetric curves. Experiments are done with a database of periocular images captured with a digital camera. We test our system against reference periocular features, achieving top performance with a considerably smaller feature vector (given by the use of a single key point). All the systems tested also show a nearly steady correlation between acquisition distance and performance, and they are also able to cope well when enrolment and test images are not captured at the same distance. Fusion experiments among the available systems are also provided. © 2016 IEEE.

  • 31.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Mikaelyan, Anna
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Comparison and Fusion of Multiple Iris and Periocular Matchers Using Near-Infrared and Visible Images2015In: 3rd International Workshop on Biometrics and Forensics, IWBF 2015, Piscataway, NJ: IEEE Press, 2015, article id 7110234Conference paper (Refereed)
    Abstract [en]

    Periocular refers to the facial region in the eye vicinity. It can be easily obtained with existing face and iris setups, and it appears in iris images, so its fusion with the iris texture has a potential to improve the overall recognition. It is also suggested that iris is more suited to near-infrared (NIR) illumination, whereas the periocular modality is best for visible (VW) illumination. Here, we evaluate three periocular and three iris matchers based on different features. As experimental data, we use five databases, three acquired with a close-up NIR camera, and two in VW light with a webcam and a digital camera. We observe that the iris matchers perform better than the periocular matchers with NIR data, and the opposite with VW data. However, in both cases, their fusion can provide additional performance improvements. This is especially relevant with VW data, where the iris matchers perform significantly worse (due to low resolution), but they are still able to complement the periocular modality. © 2015 IEEE.

  • 32.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Mikaelyan, Anna
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Keypoint Description by Symmetry Assessment–Applications in BiometricsManuscript (preprint) (Other academic)
    Abstract [en]

    We present a model-based feature extractor to describe neighborhoods around keypoints by finite expansion, estimating the spatially varying orientation by harmonic functions. The iso-curves of such functions are highly symmetric w.r.t. the origin (a keypoint) and the estimated parameters have well-defined geometric interpretations. The origin is also a unique singularity of all harmonic functions, helping to determine the location of a keypoint precisely, whereas the functions describe the object shape of the neighborhood. This is novel and complementary to traditional texture features, which describe texture shape properties, i.e. they are purposively invariant to translation (within a texture). We report on experiments of verification and identification of keypoints in forensic fingerprints using publicly available data (NIST SD27), and discuss the results in comparison to other studies. These support our conclusions that the novel features can equip single cores or single minutiae with a significant verification power at 19% EER, and an identification power of 24-78% for ranks of 1-20. Additionally, we report verification results of periocular biometrics using near-infrared images, reaching an EER performance of 13%, which is comparable to the state of the art. More importantly, fusion of two systems, ours and texture features (Gabor), results in a measurable performance improvement. We report a reduction of the EER to 9%, supporting the view that the novel features capture relevant visual

  • 33.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Raja, Kiran B.
    Norwegian University of Science and Technology, Gjøvik, Norway.
    Busch, Christoph
    Norwegian University of Science and Technology, Gjøvik, Norway.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Log-Likelihood Score Level Fusion for Improved Cross-Sensor Smartphone Periocular Recognition2017In: 2017 25th European Signal Processing Conference (EUSIPCO), Piscataway: IEEE, 2017, p. 281-285, article id 8081211Conference paper (Refereed)
    Abstract [en]

    The proliferation of cameras and personal devices results in a wide variability of imaging conditions, producing large intra-class variations and a significant performance drop when images from heterogeneous environments are compared. However, many applications regularly need to deal with data from different sources, and thus to overcome these interoperability problems. Here, we employ fusion of several comparators to improve periocular performance when images from different smartphones are compared. We use a probabilistic fusion framework based on linear logistic regression, in which fused scores tend to be log-likelihood ratios, obtaining a reduction in cross-sensor EER of up to 40% due to the fusion. Our framework also provides an elegant and simple solution to handle signals from different devices, since same-sensor and cross-sensor score distributions are aligned and mapped to a common probabilistic domain. This allows the use of Bayes thresholds for optimal decision making, eliminating the need for sensor-specific thresholds, which is essential in operational conditions because the threshold setting critically determines the accuracy of the authentication process in many applications. © EURASIP 2017
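    The linear logistic regression fusion described above can be sketched as follows, assuming only per-comparator score columns and genuine/impostor labels. This is a simplified illustration with hypothetical names, not the authors' implementation; trained this way, the fused score lives in a log-odds domain (approximately a log-likelihood ratio shifted by the training prior log-odds), which is what allows a common Bayes threshold across sensors.

```python
import numpy as np

def train_fusion(scores, labels, lr=0.1, iters=2000):
    """Fit linear logistic regression fusion by simple gradient ascent.

    scores : (n, m) array, one column of comparator scores per system
    labels : (n,) array, 1 = genuine comparison, 0 = impostor comparison
    Returns weights [w0, w1, ..., wm] so that w0 + w[1:] @ s is a
    calibrated fused score in the log-odds domain.
    """
    X = np.hstack([np.ones((len(scores), 1)), scores])  # prepend bias column
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))                # predicted P(genuine)
        w += lr * X.T @ (labels - p) / len(labels)      # log-likelihood gradient
    return w

def fuse(w, scores):
    """Map raw comparator scores to a single fused log-odds score."""
    return w[0] + scores @ w[1:]
```

    With equal genuine/impostor training priors, thresholding the fused score at 0 corresponds to the Bayes decision for equal priors and costs, so no sensor-specific threshold tuning is needed.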

  • 34.
    Assabie, Yaregal
    et al.
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS).
    Bigun, Josef
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS).
    A comprehensive Dataset for Ethiopic Handwriting Recognition2009In: Proceedings SSBA '09: Symposium on Image Analysis, Halmstad University, Halmstad, March 18-20, 2009 / [ed] Josef Bigun & Antanas Verikas, Halmstad: Halmstad University , 2009, p. 41-43Chapter in book (Other academic)
    Abstract [en]

    Ethiopic script is used for writing by several languages in Ethiopia. We present a comprehensive dataset of handwritten Ethiopic script called DEHR (Dataset for Ethiopic Handwriting Recognition), captured both offline and online. The offline dataset includes isolated characters, Ethiopian church documents, and ordinary handwritten texts dealing with various real-life issues. The ordinary texts and isolated characters were freely written by several participants. The church documents are written in the Geez and Amharic languages, whereas the language for the ordinary texts is Amharic only. The online dataset was collected using two Digimemo devices of different sizes. For the isolated characters and the online dataset, all 265 character samples used by the Amharic language are included. The dataset is intended to set a benchmark for training and/or testing handwriting recognition, character and word segmentation, and text line detection. The dataset can be accessed by contacting the authors or via http://www.hh.se/staff/josef/.

  • 35.
    Assabie, Yaregal
    et al.
    Addis Ababa University, Department of Computer Science, Addis Ababa, Ethiopia .
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent systems (IS-lab).
    A Hybrid System for Robust Recognition of Ethiopic Script2007In: Ninth International Conference on Document Analysis and Recognition: proceedings : Curtiba, Paraná, Brazil, September 23-26, 2007 / [ed] IEEE Computer Society, Los Alamitos, Calif.: IEEE Computer Society, 2007, p. 556-560Conference paper (Refereed)
    Abstract [en]

    In real life, documents contain several font types, styles, and sizes. However, many character recognition systems show good results for specific types of documents and fail to produce satisfactory results for others. Over the past decades, various pattern recognition techniques have been applied with the aim of developing recognition systems insensitive to variations in the characteristics of documents. In this paper, we present a robust recognition system for Ethiopic script using a hybrid of classifiers. The complex structures of Ethiopic characters are structurally and syntactically analyzed, and represented as a pattern of simpler graphical units called primitives. The pattern is used for classification of characters using similarity-based matching and a neural network classifier. The classification result is further refined by using template matching. A pair of directional filters is used for creating templates and extracting structural features. The recognition system is tested with real-life documents, and experimental results are reported.

  • 36.
    Assabie, Yaregal
    et al.
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS).
    Bigun, Josef
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS).
    A neural network approach for multifont and size-independent recognition of ethiopic characters2007In: Progress in pattern recognition / [ed] Singh, S, Singh, M, Springer London, 2007, p. 129-137Conference paper (Refereed)
    Abstract [en]

    Artificial neural networks are among the most commonly used tools for character recognition problems, and they usually take gray values of 2D character images as inputs. In this paper, we propose a novel neural network classifier whose input is 1D string patterns generated from the spatial relationships of primitive structures of Ethiopic characters. The spatial relationships of primitives are modeled by a special tree structure from which a unique set of string patterns is generated for each character. Training the neural network with string patterns of different font types and styles enables the classifier to handle variations in font types, sizes, and styles. We use a pair of directional filters for extracting primitives and their spatial relationships. The robustness of the proposed recognition system is tested on real-life documents and experimental results are reported.

  • 37.
    Assabie, Yaregal
    et al.
    Addis Ababa University, Department of Computer Science, Addis Ababa, Ethiopia .
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent systems (IS-lab).
    Ethiopic Character Recognition Using Direction Field Tensor2006In: The 18th International Conference on Pattern Recognition: proceedings : 20-24 August, 2006, Hong Kong, Los Alamitos, Calif.: IEEE Computer Society, 2006, p. 284-287Conference paper (Refereed)
    Abstract [en]

    Many languages in Ethiopia use a unique alphabet called Ethiopic for writing. However, no OCR system for the script has been developed to date. In an effort to develop automatic recognition of Ethiopic script, a novel system is designed by applying structural and syntactic techniques. The recognition system is developed by extracting primitive structural features and their spatial relationships. A special tree structure is used to represent the spatial relationships of primitive structures. For each character, a unique string pattern is generated from the tree, and recognition is achieved by matching the string against a stored knowledge base of the alphabet. To implement the recognition system, we use the direction field tensor as a tool for character segmentation and for extraction of structural features and their spatial relationships. Experimental results are reported.

  • 38.
    Assabie, Yaregal
    et al.
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS).
    Bigun, Josef
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS).
    Ethiopic Document Image Database for Testing Character Recognition Systems2006Report (Other academic)
    Abstract [en]

    In this paper we describe the acquisition and content of a large database of Ethiopic documents for testing and evaluating character recognition systems. The Ethiopic Document Image Database (EDIDB) contains documents written in the Amharic and Geez languages. The database was built from a variety of documents such as printouts, books, newspapers, and magazines. Documents written in various font types, sizes, and styles were included, as were degraded and poor-quality documents, to represent real-life situations. A total of 1,204 pages were scanned at a resolution of 300 dpi and saved as grayscale images in JPEG format. We also describe an evaluation protocol for standardizing the comparison of recognition systems and their results. The database is made available to the research community through http://www.hh.se/staff/josef/.

  • 39.
    Assabie, Yaregal
    et al.
    Addis Ababa University, Department of Computer Science, Addis Ababa Ethiopia.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent systems (IS-lab).
    HMM-Based Handwritten Amharic Word Recognition with Feature Concatenation2009In: Proceedings of the International Conference on Document Analysis and Recognition, ICDAR, New York: IEEE Press, 2009, p. 961-965Conference paper (Refereed)
    Abstract [en]

    Amharic is the official language of Ethiopia and uses Ethiopic script for writing. In this paper, we present writer-independent HMM-based Amharic word recognition for offline handwritten text. The underlying units of the recognition system are a set of primitive strokes whose combinations form handwritten Ethiopic characters. For each character, the possibly occurring sequences of primitive strokes and their spatial relationships, collectively termed primitive structural features, are stored as a feature list. Hidden Markov models for Amharic words are trained with such sequences of structural features of the characters constituting the words. The recognition phase does not require segmentation of characters but only text line detection and extraction of structural features in each text line. Text lines and primitive structural features are extracted by making use of the direction field tensor. The performance of the recognition system is tested on a database of unconstrained handwritten documents collected from various sources.

  • 40.
    Assabie, Yaregal
    et al.
    Addis Ababa University, Department of Computer Science, Addis Ababa Ethiopia .
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent systems (IS-lab).
    Lexicon-based Offline Recognition of Amharic Words in Unconstrained Handwritten Text2008In: 19th International Conference on Pattern Recognition: (ICPR 2008) ; Tampa, Florida, USA 8-11 December 2008, New York: IEEE Computer Society, 2008, article id 4761145Conference paper (Refereed)
    Abstract [en]

    This paper describes a lexicon-based offline handwriting recognition system for Amharic words. The system computes direction fields of scanned handwritten documents, from which pseudo-characters are segmented. The pseudo-characters are organized based on their proximity and direction to form text lines. Words are then segmented by analyzing the relative gaps between subsequent pseudo-characters in text lines. For each segmented word image, the structural characteristics of the pseudo-characters are syntactically analyzed to predict a set of plausible characters forming the word. The most likely word is finally selected among the candidates by matching against the lexicon. The system is tested on a database of unconstrained handwritten Amharic documents collected from various sources. The lexicon is prepared from words appearing in the collected database.

  • 41.
    Assabie, Yaregal
    et al.
    Addis Ababa University, Department of Computer Science, Addis Ababa, Ethiopia .
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent systems (IS-lab).
    Multifont size-resilient recognition system for Ethiopic script2007In: International Journal on Document Analysis and Recognition, ISSN 1433-2833, E-ISSN 1433-2825, Vol. 10, no 2, p. 85-100Article in journal (Refereed)
    Abstract [en]

    This paper presents a novel framework for recognition of Ethiopic characters using structural and syntactic techniques. Graphically complex characters are represented by the spatial relationships of less complex primitives, which form a unique set of patterns for each character. The spatial relationship is represented by a special tree structure which is also used to generate string patterns of primitives. Recognition is then achieved by matching the generated string pattern against each pattern in the alphabet knowledge base built for this purpose. The recognition system tolerates variations in character parameters such as font type, size, and style. The direction field tensor is used as a tool to extract structural features.

  • 42.
    Assabie, Yaregal
    et al.
    Department of Computer Science, Addis Ababa University, Ethiopia.
    Bigun, Josef
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent Systems´ laboratory.
    Offline handwritten Amharic word recognition2011In: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 32, no 8, p. 1089-1099Article in journal (Refereed)
    Abstract [en]

    This paper describes two approaches for Amharic word recognition in unconstrained handwritten text using HMMs. The first approach builds word models from concatenated features of the constituent characters, and in the second the HMMs of the constituent characters are concatenated to form the word model. In both cases, the features used for training and recognition are a set of primitive strokes and their spatial relationships. The recognition system does not require segmentation of characters but does require text line detection and extraction of structural features, which is done by making use of the direction field tensor. The performance of the recognition system is tested on a dataset of unconstrained handwritten documents collected from various sources, and promising results are obtained. © 2011 Elsevier B.V. All rights reserved.
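    The second approach described in this abstract (concatenating per-character models into a word model) can be illustrated with a toy sketch. All character codes and feature labels below are invented for illustration; the paper's actual features are primitive strokes extracted via the direction field tensor.

    ```python
    # Toy sketch: a word-level model is formed by concatenating the
    # feature sequences of its constituent characters, so no character
    # segmentation is needed at recognition time.
    CHAR_FEATURES = {
        "me": ["vstroke", "arc"],       # hypothetical per-character features
        "da": ["hstroke", "vstroke"],
    }

    def word_model(word_chars):
        """Concatenate per-character feature sequences into one word sequence."""
        features = []
        for ch in word_chars:
            features.extend(CHAR_FEATURES[ch])
        return features

    print(word_model(["me", "da"]))
    # -> ['vstroke', 'arc', 'hstroke', 'vstroke']
    ```

    In a real HMM system each character model would contribute its states and transitions rather than raw feature labels, but the concatenation principle is the same.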

  • 43.
    Assabie, Yaregal
    et al.
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS).
    Bigun, Josef
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS).
    Offline Handwritten Amharic Word Recognition Using HMMs2009In: Proceedings SSBA '09: Symposium on Image Analysis, Halmstad University, Halmstad, March 18-20, 2009 / [ed] Josef Bigun & Antanas Verikas, Halmstad: Halmstad University , 2009, p. 89-92Chapter in book (Other academic)
    Abstract [en]

    This paper describes two approaches for Amharic word recognition in unconstrained handwritten text using HMMs. The first approach builds word models from concatenated features of the constituent characters, and in the second the HMMs of the constituent characters are concatenated to form the word model. In both cases, the features used for training and recognition are primitive strokes and their spatial relationships. The recognition system does not require segmentation of characters but does require text line detection and extraction of structural features, which is done by making use of the direction field tensor. The performance of the recognition system is tested on the DEHR dataset of unconstrained handwritten documents collected from various sources.

  • 44.
    Assabie, Yaregal
    et al.
    Addis Ababa University, Department of Computer Science, Addis Ababa, Ethiopia.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent systems (IS-lab).
    Online Handwriting Recognition of Ethiopic Script2008In: Proceedings: Eleventh International Conference on Frontiers in Handwriting Recognition, Montréal, Québec - Canada, August 19-21, 2008 / [ed] Ching Y Suen, Montréal: CENPARMI, Concordia University , 2008, p. 153-158Conference paper (Refereed)
    Abstract [en]

    Online recognition of handwritten characters is gaining renewed interest as it provides a natural way of data entry for a wide variety of handheld devices. In this paper, we present an online handwriting recognition system for Ethiopic script based on the structural and syntactic analysis of the strokes forming characters. The complex structures of characters are represented by the spatio-temporal relationships of simple-shaped strokes called primitives. A special tree structure is used to model the spatio-temporal relationships of the strokes. The tree generates a unique set of primitive stroke sequences for each character, and for recognition each stroke sequence is matched against a stored knowledge base. Characters are also classified based on their structural similarity to select a plausible set of characters for an unknown input, which improves recognition and reduces processing time. We also present a dataset collected for training and testing online recognition systems for Ethiopic script. The dataset is prepared in accordance with the international standard UNIPEN format. The recognition system is tested with the collected dataset and experimental results are reported.

  • 45.
    Assabie, Yaregal
    et al.
    Addis Ababa University, Department of Computer Science, Addis Ababa, Ethiopia .
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent systems (IS-lab).
    Structural and Syntactic Techniques for Recognition of Ethiopic Characters2006In: Structural, syntactic, and statistical pattern recognition joint IAPR international workshops SSPR 2006 and SPR 2006, Hong Kong, China, August 17-19, 2006 : proceedings: Lecture Notes in Computer Sciences (Volume 4109/2006), Berlin: Springer Berlin/Heidelberg, 2006, p. 118-126Conference paper (Refereed)
    Abstract [en]

    OCR technology for Latin scripts is well advanced in comparison to other scripts. However, the available results for Latin cannot always be directly adopted for other scripts such as Ethiopic. In this paper, we propose a novel approach that uses structural and syntactic techniques for recognition of Ethiopic characters. We show that primitive structures and their spatial relationships form a unique set of patterns for each character. The relationships of primitives are represented by a special tree structure, which is also used to generate a pattern. A knowledge base of the alphabet storing the possibly occurring patterns for each character is built. Recognition is then achieved by matching the generated pattern against each pattern in the knowledge base. Structural features are extracted using the direction field tensor. Experimental results are reported, and the recognition system is insensitive to variations in font types, sizes, and styles.
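    The matching step described in this abstract (a generated pattern compared against every stored pattern in an alphabet knowledge base) can be sketched as follows. The character labels and primitive codes are invented placeholders, and similarity-based matching via `difflib` stands in for whatever matching criterion the paper actually uses.

    ```python
    # Toy sketch of knowledge-base matching: each character stores the
    # primitive-string patterns it may produce; an observed pattern is
    # recognized as the character with the most similar stored pattern.
    from difflib import SequenceMatcher

    # Hypothetical knowledge base: character label -> known string patterns.
    KNOWLEDGE_BASE = {
        "ha": ["VLV", "VLLV"],   # invented primitive codes
        "la": ["LVL"],
    }

    def recognize(pattern: str) -> str:
        """Return the character whose stored pattern best matches `pattern`."""
        best_char, best_score = "", 0.0
        for char, patterns in KNOWLEDGE_BASE.items():
            for stored in patterns:
                score = SequenceMatcher(None, pattern, stored).ratio()
                if score > best_score:
                    best_char, best_score = char, score
        return best_char

    print(recognize("VLV"))  # exact match in the knowledge base -> "ha"
    ```

    An exhaustive scan like this is fine for an alphabet-sized knowledge base; the tree-generated patterns keep the per-character pattern sets small.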

  • 46.
    Assabie, Yaregal
    et al.
    Addis Ababa University, Department of Computer Science, Addis Ababa Ethiopia.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS).
    Writer-independent Offline Recognition of Handwritten Ethiopic Characters2008In: Proceedings: Eleventh International Conference on Frontiers in Handwriting Recognition, Montréal, Québec - Canada, August 19-21, 2008 / [ed] Ching Y Suen, Montréal: CENPARMI, Concordia University , 2008, p. 652-657Conference paper (Refereed)
    Abstract [en]

    This paper presents writer-independent offline handwritten character recognition for Ethiopic script. The recognition is based on the characteristics of the primitive strokes that make up characters. The spatial relationships of primitives, whose combinations form the complex structures of Ethiopic characters, are used as a basis for recognition. Although this approach efficiently recognizes properly written characters, the recognition rate drops for characters where the spatial relationships of their primitives cannot be drawn. This happens mostly when the connections between primitives are not properly written, which is a common case in handwriting. To complement the recognition, we classify characters based on the characteristics of their primitives, resulting in a grouping of characters in a five-dimensional space. Once the type of a character is identified, recognition can be achieved with a minimal set of information from its spatial relationships. A comprehensive database is also developed to standardize the evaluation of research on offline Ethiopic handwriting recognition systems. Our proposed system is tested with the database and experimental results are reported.

  • 47.
    Bhanu, Bir
    et al.
    University of California at Riverside, USA.
    Ratha, Nalini K.
    IBM T.J. Watson Research Center, USA.
    Kumar, Vijay
    Carnegie Mellon University, USA.
    Chellappa, Rama
    University of Maryland.
    Bigun, Josef
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent systems (IS-lab).
    Guest Editorial: Special Issue on Human Detection and Recognition2007In: IEEE Transactions on Information Forensics and Security, ISSN 1556-6013, E-ISSN 1556-6021, Vol. 2, no 3 part 2, p. 489-490Article in journal (Refereed)
    Abstract [en]

    The 12 regular papers and three correspondences in this special issue focus on human detection and recognition. The papers cover gait, face (3-D, 2-D, video), iris, palmprint, and cardiac-sound biometrics, as well as the vulnerability of biometrics and protection against spoof attacks.

  • 48.
    Bigun, Josef
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent systems (IS-lab).
    Fingerprint features2009In: Encyclopedia of biometrics / [ed] Stan Z. Li, New York: Springer-Verlag New York, 2009, p. 465-473Chapter in book (Other academic)
  • 49.
    Bigun, Josef
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS).
    Multiple experts2009In: Encyclopedia of biometrics / [ed] Stan Z. Li & Anil Jain, Springer, 2009, p. 986-993Chapter in book (Other academic)
  • 50.
    Bigun, Josef
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent systems (IS-lab).
    Vision with Direction: A Systematic Introduction to Image Processing and Computer Vision2006Book (Refereed)
    Abstract [en]

    Presents a systematic, mathematically rigorous examination of modern signal processing concepts used in computer vision and image analysis. The book is illustrated with four-color graphics and applications, including biometric person authentication, texture analysis, optical character recognition, and motion estimation and tracking.
