Isometric Words and Edit Distance: Main Notions and New Variations

AUTHORS: G. Castiglione, M. Flores, D. Giammarresi

URL: https://link.springer.com/chapter/10.1007/978-3-031-42250-8_1

Work Package : Work Package 7 – REVER

Keywords: Isometric words, Edit distance, Generalized Fibonacci cubes

Abstract
Isometric words combine the notion of edit distance with properties of words not appearing as factors in other words. An edit distance is a metric between words that quantifies how two words differ by counting the number of edit operations needed to transform one word into the other. A word f is said to be isometric with respect to an edit distance if, for any pair of f-free words u and v, there exists a transformation of minimal length from u into v via the related edit operations such that all the intermediate words are also f-free. The adjective “isometric” comes from the fact that, if the Hamming distance is considered (i.e., only replacement operations are used), then isometric words are connected with the definition of isometric subgraphs of hypercubes. We discuss known results and some interesting generalizations and open problems.
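Since the definition is purely combinatorial, it can be checked exhaustively for small lengths: for the Hamming distance, a word f is isometric exactly when, for every n, the subgraph of the n-dimensional hypercube induced by the f-free words is an isometric subgraph, i.e., graph distance inside the subgraph equals Hamming distance. The following Python sketch (an illustration written for this summary, not code from the paper) verifies this property for one fixed length n by brute force.

from itertools import product
from collections import deque

def is_isometric_for_length(f, n):
    # f-free binary words of length n = vertices of the induced subgraph of the hypercube
    words = [w for w in ("".join(t) for t in product("01", repeat=n)) if f not in w]
    vertex = set(words)
    def neighbours(w):
        # words at Hamming distance 1 that are still f-free
        for i in range(n):
            v = w[:i] + ("1" if w[i] == "0" else "0") + w[i + 1:]
            if v in vertex:
                yield v
    for src in words:
        dist = {src: 0}                      # BFS inside the f-free subgraph
        queue = deque([src])
        while queue:
            w = queue.popleft()
            for v in neighbours(w):
                if v not in dist:
                    dist[v] = dist[w] + 1
                    queue.append(v)
        for dst in words:
            hamming = sum(a != b for a, b in zip(src, dst))
            if dist.get(dst) != hamming:     # subgraph distance must equal Hamming distance
                return False
    return True

print(is_isometric_for_length("11", 6))      # 11-free words induce the Fibonacci cube: expected True
print(is_isometric_for_length("1100", 6))    # 1100 has a 2-error overlap (prefix 11 vs suffix 00)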




Hypercubes and Isometric Words Based on Swap and Mismatch Distance

AUTHORS: M. Anselmo, G. Castiglione, M. Flores, D. Giammarresi, M. Madonia, S. Mantaci

URL: https://link.springer.com/chapter/10.1007/978-3-031-34326-1_2

Work Package : Work Package 7 – REVER

Keywords: Swap and mismatch distance, Isometric words, Hypercube

Abstract
The hypercube of dimension n is the graph whose vertices are the 2^n binary words of length n, and there is an edge between two of them if they have Hamming distance 1. We consider an edit distance based on swaps and mismatches, to which we refer as the tilde-distance, and define the tilde-hypercube with edges linking words at tilde-distance 1. Then, we introduce and study some isometric subgraphs of the tilde-hypercube obtained by using special words called tilde-isometric words. The subgraphs keep only the vertices that avoid a given tilde-isometric word as a factor. An infinite family of tilde-isometric words is described; they are isometric with respect to the tilde-distance, but not to the Hamming distance. In the case of the word 11, the subgraph is called the tilde-Fibonacci cube, as a generalization of the classical Fibonacci cube. The tilde-hypercube and the tilde-Fibonacci cube can be recursively defined; the same holds for the number of their edges. This allows an asymptotic estimation of the number of edges in the tilde-Fibonacci cube, in comparison to the total number in the tilde-hypercube.
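As an illustration of the notions above (a sketch written for this summary, not code from the paper), the tilde-distance between equal-length binary words can be computed with a small dynamic program, under the standard assumption that each position takes part in at most one swap or replacement; the tilde-hypercube and the tilde-Fibonacci cube then follow by linking words at tilde-distance 1.

from itertools import product

def tilde_distance(u, v):
    # swap+mismatch distance between equal-length binary words, by dynamic programming
    n = len(u)
    INF = float("inf")
    d = [INF] * (n + 1)
    d[0] = 0
    for i in range(n):
        if d[i] == INF:
            continue
        if u[i] == v[i]:
            d[i + 1] = min(d[i + 1], d[i])                 # already equal: no operation
        else:
            d[i + 1] = min(d[i + 1], d[i] + 1)             # mismatch (replacement), cost 1
            if i + 1 < n and u[i] == v[i + 1] and u[i + 1] == v[i] and u[i] != u[i + 1]:
                d[i + 2] = min(d[i + 2], d[i] + 1)         # adjacent swap, cost 1, uses two positions
    return d[n]

def tilde_edges(n, forbidden=None):
    # vertices: binary words of length n, optionally avoiding a factor (e.g. "11");
    # edges: pairs of words at tilde-distance 1
    words = ["".join(w) for w in product("01", repeat=n)]
    if forbidden is not None:
        words = [w for w in words if forbidden not in w]
    return [(u, v) for i, u in enumerate(words) for v in words[i + 1:] if tilde_distance(u, v) == 1]

print(len(tilde_edges(4)))          # number of edges of the tilde-hypercube of dimension 4
print(len(tilde_edges(4, "11")))    # number of edges of the tilde-Fibonacci cube of dimension 4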




Isometric Words Based on Swap and Mismatch Distance

AUTHORS: M. Anselmo, G. Castiglione, M. Flores, D. Giammarresi, M. Madonia, S. Mantaci

URL: https://link.springer.com/chapter/10.1007/978-3-031-33264-7_3

Work Package : Work Package 7 – REVER

Keywords: Swap and mismatch distance, Isometric words, Overlap with errors

Abstract
An edit distance is a metric between words that quantifies how two words differ by counting the number of edit operations needed to transform one word into the other. A word f is said to be isometric with respect to an edit distance if, for any pair of f-free words u and v, there exists a transformation of minimal length from u to v via the related edit operations such that all the intermediate words are also f-free. The adjective “isometric” comes from the fact that, if the Hamming distance is considered (i.e., only mismatches), then isometric words define some isometric subgraphs of hypercubes. We consider the case of edit distance with swap and mismatch. We compare it with the case of mismatch only and prove some properties of isometric words that are related to particular features of their overlaps.
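The overlaps mentioned in the abstract compare a proper prefix and a proper suffix of the same length of a word and count the positions where they disagree. The short sketch below (an illustration for this summary, not the paper's code) enumerates these error overlaps for a given word.

def error_overlaps(f):
    # for each proper overlap length L, count the mismatches between the
    # length-L prefix and the length-L suffix of f
    return {L: sum(a != b for a, b in zip(f[:L], f[-L:])) for L in range(1, len(f))}

print(error_overlaps("11011"))   # {1: 0, 2: 0, 3: 2, 4: 2}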




Measuring fairness under unawareness of sensitive attributes: A quantification-based approach

AUTHORS: A. Fabris, A. Esuli, A. Moreo, F. Sebastiani

URL: https://doi.org/10.1613/jair.1.14033

Work Package : All ITSERR WPs using FAIR data

Keywords: Algorithms, Models, Decision Making, Group Fairness, Demographic Attributes, Data Minimisation, Privacy, Fairness Measurement, Sensitive Attributes, Quantification, Supervised Learning, Prevalence Estimates, Distribution Shifts, Demographic Parity, Classifier Fairness

Abstract
Algorithms and models are increasingly deployed to inform decisions about people, inevitably affecting their lives. As a consequence, those in charge of developing these models must carefully evaluate their impact on different groups of people and favour group fairness, that is, ensure that groups determined by sensitive demographic attributes, such as race or sex, are not treated unjustly. To achieve this goal, the availability (awareness) of these demographic attributes to those evaluating the impact of these models is fundamental. Unfortunately, collecting and storing these attributes is often in conflict with industry practices and legislation on data minimisation and privacy. For this reason, it can be hard to measure the group fairness of trained models, even from within the companies developing them. In this work, we tackle the problem of measuring group fairness under unawareness of sensitive attributes, by using techniques from quantification, a supervised learning task concerned with directly providing group-level prevalence estimates (rather than individual-level class labels). We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem, as they are robust to inevitable distribution shifts while at the same time decoupling the (desirable) objective of measuring group fairness from the (undesirable) side effect of allowing the inference of sensitive attributes of individuals. More in detail, we show that fairness under unawareness can be cast as a quantification problem and solved with proven methods from the quantification literature. We show that these methods outperform previous approaches to measure demographic parity in five experimental protocols, corresponding to important challenges that complicate the estimation of classifier fairness under unawareness.
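As a concrete illustration of the quantification viewpoint (a sketch written for this summary, not the authors' experimental code), Adjusted Classify & Count corrects the raw proportion of individuals that a sensitive-attribute classifier assigns to a group, using the classifier's estimated true and false positive rates; repeating the estimate among accepted and rejected individuals and comparing group-wise acceptance rates then yields an estimate of demographic parity. The numbers below are hypothetical.

import numpy as np

def adjusted_classify_and_count(predictions, tpr, fpr):
    # Adjusted Classify & Count: correct the raw proportion of predicted group members
    # using the sensitive-attribute classifier's true/false positive rates
    p_cc = float(np.mean(predictions))
    if tpr == fpr:                    # degenerate classifier: no correction is possible
        return p_cc
    return float(np.clip((p_cc - fpr) / (tpr - fpr), 0.0, 1.0))

# hypothetical scenario: among people accepted by the decision model, a sensitive-attribute
# classifier flags 30% as belonging to group A, with estimated tpr = 0.85 and fpr = 0.10
accepted_preds = np.array([1] * 30 + [0] * 70)
print(adjusted_classify_and_count(accepted_preds, tpr=0.85, fpr=0.10))   # about 0.27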




Volumetric Fast Fourier Convolution for Detecting Ink on the Carbonized Herculaneum Papyri

AUTHORS: Fabio Quattrini, R. Cucchiara, S. Cascianelli, V. Pippi

URL: https://openaccess.thecvf.com/content/ICCV2023W/e-Heritage/papers/Quattrini_Volumetric_Fast_Fourier_Convolution_for_Detecting_Ink_on_the_Carbonized_ICCVW_2023_paper.pdf

Work Package : All ITSERR WPs using Artificial Intelligence

Keywords: Digital Document Restoration, Virtual Unwrapping, Herculaneum Papyri, Ink Detection, Computer Vision, X-ray Micro-Computed Tomography, Artificial Intelligence, Volumetric Data, Fast Fourier Convolutions, Carbon-based Ink

Abstract
Recent advancements in Digital Document Restoration (DDR) have led to significant breakthroughs in analyzing highly damaged written artifacts. Among those, there has been an increasing interest in applying Artificial Intelligence techniques for virtually unwrapping and automatically detecting ink on the Herculaneum papyri collection. This collection consists of carbonized scrolls and fragments of documents, which have been digitized via X-ray tomography to allow the development of ad-hoc deep learning-based DDR solutions. In this work, we propose a modification of the Fast Fourier Convolution operator for volumetric data and apply it in a segmentation architecture for ink detection on the challenging Herculaneum papyri, demonstrating its suitability via deep experimental analysis. To encourage the research on this task and the application of the proposed operator to other tasks involving volumetric data, we will release our implementation (https://github.com/aimagelab/vffc).
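A minimal sketch of the principle of a volumetric spectral convolution, assuming a PyTorch implementation: the sub-volume is transformed with a 3D FFT, channels are mixed by a pointwise convolution in the frequency domain (giving every output voxel a global receptive field), and the result is transformed back. This only illustrates the operator's idea; the authors' actual implementation is available at the linked repository.

import torch
import torch.nn as nn

class SpectralBlock3D(nn.Module):
    # global (spectral) branch only: FFT -> pointwise conv on real/imag channels -> inverse FFT
    def __init__(self, channels):
        super().__init__()
        self.freq_conv = nn.Sequential(
            nn.Conv3d(2 * channels, 2 * channels, kernel_size=1),
            nn.BatchNorm3d(2 * channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):                                   # x: (batch, channels, D, H, W)
        spec = torch.fft.rfftn(x, dim=(-3, -2, -1), norm="ortho")
        z = torch.cat([spec.real, spec.imag], dim=1)        # stack real/imag parts as channels
        z = self.freq_conv(z)
        real, imag = torch.chunk(z, 2, dim=1)
        spec = torch.complex(real, imag)
        return torch.fft.irfftn(spec, s=x.shape[-3:], dim=(-3, -2, -1), norm="ortho")

# usage on a synthetic sub-volume standing in for a slab of the CT scan around the papyrus surface
out = SpectralBlock3D(8)(torch.randn(1, 8, 16, 64, 64))
print(out.shape)                                            # torch.Size([1, 8, 16, 64, 64])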




How to Choose Pretrained Handwriting Recognition Models for Single Writer Fine-Tuning

AUTHORS: Vittorio Pippi, Silvia Cascianelli, Christopher Kermorvant, Rita Cucchiara

URL: https://link.springer.com/chapter/10.1007/978-3-031-41679-8_19

Work Package : All ITSERR WPs using Artificial Intelligence

Keywords: Document synthesis, Historical document analysis, Handwriting recognition, Synthetic data

Abstract
Recent advancements in Deep Learning-based Handwritten Text Recognition (HTR) have led to models with remarkable performance on both modern and historical manuscripts in large benchmark datasets. Nonetheless, those models struggle to obtain the same performance when applied to manuscripts with peculiar characteristics, such as language, paper support, ink, and author handwriting. This issue is very relevant for valuable but small collections of documents preserved in historical archives, for which obtaining sufficient annotated training data is costly or, in some cases, unfeasible. To overcome this challenge, a possible solution is to pretrain HTR models on large datasets and then fine-tune them on small single-author collections. In this paper, we take into account large, real benchmark datasets and synthetic ones obtained with a styled Handwritten Text Generation model. Through extensive experimental analysis, also considering the amount of fine-tuning lines, we give a quantitative indication of the most relevant characteristics of such data for obtaining an HTR model able to effectively transcribe manuscripts in small collections with as little as five real fine-tuning lines.
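A hedged sketch of what single-writer fine-tuning looks like in practice, assuming a line-level HTR model trained with CTC loss: pretrained weights are updated with a small learning rate on a handful of annotated lines from the target manuscript. The tiny model and the random data below are placeholders so the snippet runs on its own; they do not reproduce the paper's architectures or datasets.

import torch
import torch.nn as nn

class TinyHTR(nn.Module):
    # stand-in for a pretrained line-level HTR model (CNN encoder + recurrent decoder + CTC head)
    def __init__(self, num_chars=80):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d((1, 128)))        # collapse height, fix T=128
        self.rnn = nn.LSTM(32, 128, batch_first=True, bidirectional=True)
        self.head = nn.Linear(256, num_chars + 1)                       # +1 for the CTC blank

    def forward(self, x):                                               # x: (batch, 1, H, W)
        feats = self.cnn(x).squeeze(2).transpose(1, 2)                  # (batch, T, 32)
        out, _ = self.rnn(feats)
        return self.head(out).log_softmax(-1).transpose(0, 1)           # (T, batch, classes)

model = TinyHTR()                                   # in practice: load the pretrained checkpoint here
ctc = nn.CTCLoss(blank=0, zero_infinity=True)
opt = torch.optim.AdamW(model.parameters(), lr=1e-5)                    # small LR for fine-tuning

# five fake "annotated lines": line image + integer-encoded transcription
lines = [(torch.randn(1, 1, 64, 512), torch.randint(1, 80, (20,))) for _ in range(5)]
for epoch in range(3):
    for img, target in lines:
        logits = model(img)
        loss = ctc(logits, target.unsqueeze(0),
                   torch.tensor([logits.size(0)]), torch.tensor([len(target)]))
        opt.zero_grad()
        loss.backward()
        opt.step()
print("fine-tuned on", len(lines), "lines; last loss:", float(loss))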




Handwritten Text Generation from Visual Archetypes

AUTHORS: R. Cucchiara, S. Cascianelli, V. Pippi

URL: https://ceur-ws.org/Vol-3536/03_paper.pdf

Work Package : All ITSERR WPs using Artificial Intelligence

Keywords: HTG, Text Generation, Characters, Visual Archetypes, Transformer, Calligraphic, GANs, Encoding, Training, Synthetic

Abstract
Generating synthetic images of handwritten text in a writer-specific style is a challenging task, especially in the case of unseen styles and new words, and even more when these latter contain characters that are rarely encountered during training. While emulating a writer’s style has been recently addressed by generative models, the generalization towards rare characters has been disregarded. In this work, we devise a Transformer-based model for Few-Shot styled handwritten text generation and focus on obtaining a robust and informative representation of both the text and the style. In particular, we propose a novel representation of the textual content as a sequence of dense vectors obtained from images of symbols written as standard GNU Unifont glyphs, which can be considered their visual archetypes. This strategy is more suitable for generating characters that, despite having been seen rarely during training, possibly share visual details with the frequently observed ones. As for the style, we obtain a robust representation of unseen writers’ calligraphy by exploiting specific pre-training on a large synthetic dataset. Quantitative and qualitative results demonstrate the effectiveness of our proposal in generating words in unseen styles and with rare characters more faithfully than existing approaches relying on independent one-hot encodings of the characters.
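The content representation can be pictured as follows (an illustrative sketch, not the authors' code): each character is rendered as a small glyph image, GNU Unifont in the paper, and the flattened pixel values serve as a dense "visual archetype" vector in place of a one-hot index, so rare characters that look similar to frequent ones receive similar encodings. The font path below is a placeholder; if Unifont is not installed, the sketch falls back to PIL's default font just to stay runnable.

import numpy as np
from PIL import Image, ImageDraw, ImageFont

def archetype_vectors(text, font_path="unifont.ttf", size=16):
    # render each character as a small glyph image and flatten it into a dense vector;
    # "unifont.ttf" is a placeholder path for GNU Unifont
    try:
        font = ImageFont.truetype(font_path, size)
    except OSError:
        font = ImageFont.load_default()          # fallback so the sketch still runs
    vectors = []
    for ch in text:
        img = Image.new("L", (size, size), color=0)
        ImageDraw.Draw(img).text((0, 0), ch, fill=255, font=font)
        vectors.append(np.asarray(img, dtype=np.float32).reshape(-1) / 255.0)
    return np.stack(vectors)                     # shape: (len(text), size * size)

content = archetype_vectors("Abc")               # with real Unifont, rare glyphs get archetypes too
print(content.shape)                             # (3, 256)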




Bridging Islamic Knowledge and AI: Inquiring ChatGPT on Possible Categorizations for an Islamic Digital Library (full paper)

AUTHORS: A. El Ganadi, R. A. Vigliermo, L. Sala, M. Vanzini, F. Ruozzi, S. Bergamaschi

URL: https://ceur-ws.org/Vol-3536/03_paper.pdf

Work Package : WP5

Keywords: Libraries and Archives in CH, Digital Libraries and Religious Archives, ChatGPT, Islamic studies, Arabic script languages, Islamic knowledge classification, Islamic subjects

Abstract
This research evaluates the capabilities of ChatGPT in assisting with the categorization of an Islamic digital library, exploiting incremental Machine Learning and Transfer Learning techniques. Noticeably, ChatGPT showcased a remarkable familiarity with Islamic knowledge, evident in its ability to classify subjects hierarchically based on their importance, from Qur’anic Studies to Modern Islamic Thought. The library aimed to cater to a diverse Arabic Islamic audience with collections sourced from varied digital donations. Despite ChatGPT’s commendable proficiency, several challenges arose, with interpretability, generalization, and hallucination standing out as the most critical obstacles.
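The kind of prompt-based categorization evaluated in the paper can be sketched as follows, assuming the OpenAI Python SDK; the taxonomy is a simplified top level and the model name is illustrative, not necessarily the one used in the study.

from openai import OpenAI   # assumes the OpenAI Python SDK; requires an API key in OPENAI_API_KEY

TAXONOMY = ["Qur'anic Studies", "Hadith", "Fiqh (Jurisprudence)", "Theology (Kalam)",
            "Sufism", "Islamic History", "Modern Islamic Thought"]   # simplified top level

def categorize(title, summary, model="gpt-4o-mini"):   # the model name is illustrative
    prompt = ("Classify the following work into exactly one of these categories: "
              + ", ".join(TAXONOMY)
              + f".\nTitle: {title}\nSummary: {summary}\nAnswer with the category name only.")
    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

# example usage (needs an API key):
# categorize("Tafsir al-Jalalayn", "A classical concise commentary on the Qur'an.")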




Knowledge extraction, management and long-term preservation of non-Latin cultural heritages – Digital Maktaba project presentation

AUTHORS: S. Bergamaschi, R. Martoglia, F. Ruozzi, R. A. Vigliermo, L. Sala, M. Vanzini

URL: https://ceur-ws.org/Vol-3365/short11.pdf

Work Package : WP5

Keywords: Cultural heritages, Non-Latin alphabets, Knowledge extraction, Machine Learning, Natural Language Processing, Big data management, Long-term preservation, Big data integration, Named Entity Recognition

Abstract
The services provided by today’s cutting-edge digital library systems may benefit from new technologies that can improve cataloguing efficiency and cultural heritage preservation and accessibility. Here, we introduce the recently started Digital Maktaba (DM) project, which proposes a new model for the knowledge extraction and semi-automatic cataloguing task in the context of digital libraries that contain documents in non-Latin scripts (e.g. Arabic). Since DM involves a large amount of unorganized data from several sources, particular emphasis will be placed on topics such as big data integration, big data analysis and long-term preservation. This project aims to create an innovative workflow for the automatic extraction of information and metadata and for a semi-automated cataloguing process by exploiting Machine Learning, Natural Language Processing, Artificial Intelligence and data management techniques to provide a system that is capable of speeding up, enhancing and supporting the librarian’s work. We also report on some promising results that we obtained through a preliminary proof-of-concept experimentation. (Short paper, discussion paper)




Knowledge Extraction and Cross-Language Data Integration in Digital Libraries

AUTHORS: L. Sala

URL: https://ceur-ws.org/Vol-3478/paper17.pdf

Work Package : WP5

Keywords: Data Integration, Cross-Language Record Linkage, Knowledge Extraction, Long-term Preservation

Abstract
Digital Humanities (DH) is an interdisciplinary field that has grown rapidly in recent years, requiring the creation of an efficient and uniform platform capable of managing various types of data in several languages. This paper presents the research objectives and methodologies of my PhD project: the creation of a novel framework for Knowledge Extraction and Multilingual Data Integration in the context of digital libraries in non-Latin languages, in particular Arabic, Persian and Azerbaijani. The research began with the Digital Maktaba (DM) project and continued within the PNRR ITSERR infrastructure, in which the DBGroup participates. The project aims to develop a two-component framework consisting of a Knowledge Extraction Subsystem and a Data Integration Subsystem. The case study is based on the DM project, which seeks to create a flexible and efficient digital library for preserving and analyzing multicultural heritage documents by exploiting the available and ad-hoc created datasets, Explainable Machine Learning, Natural Language Processing (NLP) technologies and Data Integration approaches. Key challenges and future developments in Knowledge Extraction and Data Integration are examined, which involve leveraging the MOMIS system for Data Integration tasks and adopting a microservices-based architecture for the effective implementation of the system. The goal is to provide a versatile platform for organizing and integrating various data sources and languages, thereby fostering a more inclusive and accessible global perspective on cultural and historical artefacts that encourage collaboration in building an expanding knowledge base.




A tool for semiautomatic cataloguing of an Islamic digital library: a use case from the Digital Maktaba project (short paper)

AUTHORS: L. Sala, R. Martoglia, M. Vanzini, R. A. Vigliermo

URL: https://ceur-ws.org/Vol-3234/paper1.pdf

Work Package : WP5

Keywords: Cultural heritage, Digital Library, Islamic sciences, Arabic script OCR, Information extraction, Output alignment, Page layout analysis, Semiautomatic cataloguing, Software tool usage demo.

Abstract
Digital Maktaba (DM) is an interdisciplinary project to create a digital library of texts in non-Latin alphabets (Arabic, Persian, Azerbaijani). The dataset is made available by the digital library heritage of the “La Pira” library on the history and doctrines of Islam, based in Palermo, which is the hub of the Foundation for Religious Sciences (FSCIRE, Bologna). Establishing protocols for the creation, maintenance and cataloguing of historical content in non-Latin alphabets is the long-term goal of DM. The first step of this project was to create an innovative workflow for the automatic extraction of information and metadata from the title pages of Arabic script texts. The Optical Character Recognition (OCR) tool uses various recognition systems, text processing techniques and corpora in order to provide accurate extraction of text and metadata from document content. In this paper we address the ongoing development of this novel tool and, for the first time, we present a demo of the current version that we have designed for the extraction and cataloguing process, by showing a use case on an Arabic book frontispiece. In particular, we delve into the details of the tool workflow for automatically converting and uploading PDFs from the digital library, for the automatic extraction of cataloguing metadata and for the (at the current stage) semiautomatic process of cataloguing. We also briefly discuss future prospects and the many additional features that we are planning to develop.
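A minimal sketch of the first stage of such a workflow, assuming off-the-shelf components (pdf2image with Poppler, and Tesseract with Arabic language data via pytesseract): render the frontispiece page, run OCR, and take the first prominent lines as candidate cataloguing metadata. The actual tool combines several recognition systems, text processing techniques and corpora, so this only illustrates the overall pipeline shape.

from pdf2image import convert_from_path   # requires Poppler; pytesseract requires Tesseract + Arabic data
import pytesseract

def extract_frontispiece_text(pdf_path):
    # render the first page (frontispiece) of the PDF and run Arabic OCR on it
    first_page = convert_from_path(pdf_path, dpi=300, first_page=1, last_page=1)[0]
    text = pytesseract.image_to_string(first_page, lang="ara")
    lines = [l.strip() for l in text.splitlines() if l.strip()]
    # naive metadata guess: treat the first non-empty lines as candidate title/author
    return {"candidate_title": lines[0] if lines else "",
            "candidate_author": lines[1] if len(lines) > 1 else "",
            "full_text": text}

# example usage ("book.pdf" is a placeholder path):
# print(extract_frontispiece_text("book.pdf"))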




Novel Perspectives for the Management of Multilingual and Multialphabetic Heritages through Automatic Knowledge Extraction: The DigitalMaktaba Approach

AUTHORS: S. Bergamaschi, R. Martoglia, F. Ruozzi, R. A. Vigliermo, L. Sala, M. Vanzini

URL: https://www.mdpi.com/1424-8220/22/11/3995

Work Package : WP5

Keywords: digital libraries; minority languages; humanistic informatics; computer archiving; intercultural communication

Abstract
The linguistic and social impact of multiculturalism can no longer be neglected in any sector, creating the urgent need for systems and procedures for managing and sharing cultural heritage in both supranational and multi-literate contexts. In order to achieve this goal, text sensing appears to be one of the most crucial research areas. The long-term objective of the DigitalMaktaba project, born from an interdisciplinary collaboration between computer scientists, historians, librarians, engineers and linguists, is to establish procedures for the creation, management and cataloguing of archival heritage in non-Latin alphabets. In this paper, we discuss the currently ongoing design of an innovative workflow and tool in the area of text sensing, for the automatic extraction of knowledge and cataloguing of documents written in non-Latin languages (Arabic, Persian and Azerbaijani). The current prototype leverages different OCR, text processing and information extraction techniques in order to provide both a highly accurate extracted text and rich metadata content (including automatically identified cataloguing metadata), overcoming typical limitations of current state-of-the-art approaches. The initial tests provide promising results. The paper includes a discussion of future steps (e.g., AI-based techniques further leveraging the extracted data/metadata and making the system learn from user feedback) and of the many foreseen advantages of this research, both from a technical and a broader cultural-preservation and sharing point of view.
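One way to picture how the outputs of different OCR engines can be combined into a more accurate text (a toy illustration written for this summary, not the prototype's actual alignment procedure) is token-wise majority voting:

from collections import Counter
from itertools import zip_longest

def merge_ocr_outputs(outputs):
    # token-wise majority vote over the outputs of several OCR engines,
    # after a naive whitespace alignment (real alignment is edit-distance based)
    token_streams = [out.split() for out in outputs]
    merged = []
    for candidates in zip_longest(*token_streams, fillvalue=""):
        votes = Counter(c for c in candidates if c)
        if votes:
            merged.append(votes.most_common(1)[0][0])
    return " ".join(merged)

# three hypothetical engine readings of the same frontispiece line
print(merge_ocr_outputs(["كتاب الفقه الميسر", "كتاب الفقة الميسر", "كتاب الفقه الميسر"]))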




Structured-Light Scanning and Metrological Analysis for Archaeology: Quality Assessment of Artec 3D Solutions for Cuneiform Tablets

AUTHORS: Filippo Diara

URL: https://www.mdpi.com/2571-9408/6/9/317

Work Package : WP 9 – Taurus

Abstract
This paper deals with a metrological and qualitative evaluation of the Artec 3D structured-light scanners: Micro and Space Spider. As part of a larger European project called ITSERR, these scanners are tested to reconstruct small archaeological artefacts, in particular cuneiform tablets of different dimensions. For this reason, Micro and Space Spider are compared in terms of the entire workflow, from preparatory work to post-processing. In this context, three cuneiform replica tablets will serve as examples on which the Artec scanners will have to prove their worth. Metric analyses based on distance maps, RMSE calculations and density analyses will be carried out to understand metrological differences between these tools. The creation of 3D models of cuneiform tablets is the first step in developing a virtual environment suitable for sharing the archaeological collection with collaborators and other users. The inclusion of semantic information through specific ontologies will be the next step in this important project.
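The metric comparison can be illustrated with a short sketch (written for this summary, assuming NumPy and SciPy): once the clouds are aligned, nearest-neighbour distances from one scanner's point cloud to a reference cloud give a per-point distance map, from which an RMS error is derived. The synthetic clouds below merely stand in for the Micro and Space Spider acquisitions.

import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_metrics(scan_points, reference_points):
    # nearest-neighbour distance from every scanned point to the reference cloud,
    # summarised as a distance map and an RMS error (clouds assumed already aligned)
    distances, _ = cKDTree(reference_points).query(scan_points)
    return {"rmse": float(np.sqrt(np.mean(distances ** 2))),
            "mean_distance": float(distances.mean()),
            "max_distance": float(distances.max()),
            "n_points": int(len(scan_points))}

# synthetic stand-ins for two acquisitions of the same tablet (units arbitrary)
reference = np.random.rand(5000, 3)
scan = reference + np.random.normal(scale=0.0005, size=reference.shape)
print(cloud_to_cloud_metrics(scan, reference))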




Preserving and conserving culture: first steps towards a knowledge extractor and cataloguer for multilingual and multi-alphabetic heritages

AUTHORS: S. Bergamaschi, R. Martoglia, F. Ruozzi, R. A. Vigliermo, L. Sala, M. Vanzini

URL: https://dl.acm.org/doi/abs/10.1145/3462203.3475927

Abstract
Managing and sharing cultural heritages also in supranational and multi-literate contexts is a very hot research topic. In this paper we discuss the research we are conducting in the DigitalMaktaba project, presenting the first steps for designing an innovative workflow and tool for the automatic extraction of knowledge from documents written in multiple non-Latin languages (Arabic, Persian and Azerbaijani languages). The tool leverages different OCR, text processing techniques and linguistic corpora in order to provide both a highly accurate extracted text and a rich metadata content, overcoming typical limitations of current state-of-the-art systems; this will enable in the near future the development of an automatic cataloguer which we hope will ultimately help in better preserving and conserving culture in such a demanding scenario.