Neural Ctrl-F: Segmentation-free query-by-string word spotting in handwritten manuscript collections
Wilkinson, Tomas. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction; Computerized Image Analysis and Human-Computer Interaction. ORCID iD: 0000-0002-6783-1744
Lindström, Jonas. Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of History. ORCID iD: 0000-0002-5245-937X
Brun, Anders. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction; Computerized Image Analysis and Human-Computer Interaction. ORCID iD: 0000-0002-4405-6888
2017 (English). In: 2017 IEEE International Conference on Computer Vision (ICCV), IEEE, 2017, p. 4443-4452. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper, we approach the problem of segmentation-free query-by-string word spotting for handwritten documents. In other words, we use methods inspired by computer vision and machine learning to search for words in large collections of digitized manuscripts. In particular, we are interested in historical handwritten texts, which are often far more challenging than modern printed documents. This task is important, as it provides people with a way to quickly find what they are looking for in large collections that are tedious and difficult to read manually. To this end, we introduce an end-to-end trainable model based on deep neural networks that we call Ctrl-F-Net. Given a full manuscript page, the model simultaneously generates region proposals and embeds them into a distributed word embedding space, where searches are performed. We evaluate the model on common benchmarks for handwritten word spotting, outperforming the previous state-of-the-art segmentation-free approaches by a large margin, and in some cases even segmentation-based approaches. One interesting real-life application of our approach is to help historians find and count specific words in court records that are related to women's sustenance activities and division of labor. We provide promising preliminary experiments that validate our method on this task.
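The search stage described in the abstract can be sketched as embedding both the string query and each candidate region into a common vector space and ranking regions by similarity. The sketch below is illustrative only: it uses a toy character-histogram embedding as a stand-in for the learned word embeddings that Ctrl-F-Net produces, and it assumes the region embeddings have already been extracted from the page and L2-normalised.

```python
import numpy as np

def char_histogram_embedding(word, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """Toy stand-in for a learned word embedding: an L2-normalised
    character-count histogram of the word."""
    v = np.zeros(len(alphabet))
    for ch in word.lower():
        idx = alphabet.find(ch)
        if idx >= 0:
            v[idx] += 1
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def rank_regions(query, region_embeddings):
    """Rank candidate word regions by cosine similarity to a string query.
    Both query and regions live in the same embedding space, which is what
    makes query-by-string search possible without transcribing the page."""
    q = char_histogram_embedding(query)
    scores = region_embeddings @ q  # cosine similarity (rows L2-normalised)
    return np.argsort(-scores)      # best match first

# Hypothetical usage: three detected regions whose true transcriptions
# are shown here only so the toy embedding can be computed.
words = ["sustenance", "labour", "record"]
E = np.stack([char_histogram_embedding(w) for w in words])
order = rank_regions("labour", E)
```

With real data, `E` would hold one embedding per region proposal on the page, and the top-ranked regions would be returned as search hits.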

Place, publisher, year, edition, pages
IEEE, 2017. p. 4443-4452
Series
IEEE International Conference on Computer Vision, E-ISSN 1550-5499
Keywords [en]
Segmentation-free Word Spotting, Deep Learning, Convolutional Neural Network, Query-by-String
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computerized Image Processing
Identifiers
URN: urn:nbn:se:uu:diva-335926
DOI: 10.1109/ICCV.2017.475
ISI: 000425498404054
ISBN: 978-1-5386-1032-9 (electronic)
OAI: oai:DiVA.org:uu-335926
DiVA, id: diva2:1164427
Conference
16th IEEE International Conference on Computer Vision (ICCV), Venice, Italy, October 22-29, 2017
Projects
q2b
Funder
Swedish Research Council, 2012-5743
Riksbankens Jubileumsfond, NHS14-2068:1
Available from: 2017-12-11 Created: 2017-12-11 Last updated: 2019-04-08
Bibliographically approved
In thesis
1. Learning based Word Search and Visualisation for Historical Manuscript Images
2019 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Today, work with historical manuscripts is done almost exclusively by hand, by researchers in the humanities as well as laypeople mapping out their personal genealogy. This is a highly time-consuming endeavour, as it is not uncommon to spend months with a single volume of a few hundred pages. The last few decades have seen an ongoing effort to digitise manuscripts, both for preservation purposes and to increase accessibility. This has the added effect of enabling the use of methods and algorithms from Image Analysis and Machine Learning that have great potential both in making existing work more efficient and in creating new methodologies for manuscript-based research.

The first part of this thesis focuses on Word Spotting, the task of searching for a given text query in a manuscript collection. This can be broken down into two tasks: detecting where the words are located on the page, and then ranking the words according to their similarity to a search query. We propose Deep Learning models to do both, separately and then simultaneously, and successfully search through a large manuscript collection consisting of over a hundred thousand pages.

A limiting factor in applying learning-based methods to historical manuscript images is the cost of, and therefore lack of, annotated data needed to train machine learning models. We propose several ways to mitigate this problem, including generating synthetic data, augmenting existing data to get better value from it, and learning from pre-existing, partially annotated data that was previously unusable.

In the second part, a method for visualising manuscript collections called the Image-based Word Cloud is proposed. Much like its text-based counterpart, it arranges the most representative words in a collection into a cloud, where the size of each word is proportional to its frequency of occurrence. This grants a user a single-image overview of a manuscript collection, regardless of its size. We further propose a way to estimate a manuscript's production date. This can grant historians context that is crucial for correctly interpreting the contents of a manuscript.
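The frequency-to-size mapping behind such a word cloud can be sketched in a few lines. The function below is a hypothetical illustration, not the thesis method: it counts word occurrences in transcriptions (assumed available here, whereas the Image-based Word Cloud works from spotted word images) and scales font sizes linearly with frequency; the actual layout of cropped word images is omitted.

```python
from collections import Counter

def word_cloud_sizes(transcripts, top_k=3, min_size=12, max_size=48):
    """Map the top_k most frequent words to font sizes that grow
    linearly with their frequency of occurrence."""
    counts = Counter(w for line in transcripts for w in line.split())
    most = counts.most_common(top_k)
    hi, lo = most[0][1], most[-1][1]
    span = (hi - lo) or 1  # avoid division by zero when all counts tie
    return {w: min_size + (c - lo) * (max_size - min_size) / span
            for w, c in most}

# Hypothetical usage on two toy transcript lines
sizes = word_cloud_sizes(["the court the record", "the labour record"])
```

Here the most frequent word ("the", 3 occurrences) gets the maximum size, and rarer words scale down proportionally; a real word cloud would then pack the correspondingly scaled word images into a single overview image.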

Place, publisher, year, edition, pages
Uppsala: Acta Universitatis Upsaliensis, 2019. p. 82
Series
Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology, ISSN 1651-6214 ; 1798
Keywords
Word Spotting, Convolutional Neural Networks, Deep Learning, Region Proposals, Historical Manuscripts, Computer Vision, Image Analysis, Visualisation, Document Analysis
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-381308 (URN)
978-91-513-0633-9 (ISBN)
Public defence
2019-06-04, TLS (Tidskriftläsesalen), Carolina Rediviva, Dag Hammarskjölds väg 1, Uppsala, 10:15 (English)
Funder
Swedish Research Council, 2012-5743
Riksbankens Jubileumsfond, NHS14-2068:1
Available from: 2019-05-13 Created: 2019-04-08 Last updated: 2019-06-18

Open Access in DiVA

fulltext (9742 kB), 197 downloads
File name: FULLTEXT01.pdf
File size: 9742 kB
Checksum (SHA-512): 6815bb2fc88742034c79f612c67cc17f0812111b20b79da6a18a601c5917b65dd2d887655762f98a2e90a02c5663717b12174bc4907f133fcd7b4ca183a06332
Type: fulltext
Mimetype: application/pdf

By author/editor
Wilkinson, Tomas; Lindström, Jonas; Brun, Anders
By organisation
Division of Visual Information and Interaction; Computerized Image Analysis and Human-Computer Interaction; Department of History

Total: 197 downloads
The number of downloads is the sum of all downloads of full texts; it may include e.g. previous versions that are no longer available.
