Isometric Words and Edit Distance: Main Notions and New Variations

AUTHORS: G. Castiglione, M. Flores, D. Giammarresi

URL: https://link.springer.com/chapter/10.1007/978-3-031-42250-8_1

Work Package : Work Package 7 – REVER

Keywords: Isometric words, Edit distance, Generalized Fibonacci cubes

Abstract
Isometric words combine the notion of edit distance with properties of words not appearing as factors in other words. An edit distance is a metric between words that quantifies how two words differ by counting the number of edit operations needed to transform one word into the other. A word f is said to be isometric with respect to an edit distance if, for any pair of f-free words u and v, there exists a transformation of minimal length from u into v via the related edit operations such that all the intermediate words are also f-free. The adjective “isometric” comes from the fact that, if the Hamming distance is considered (i.e., only replacement operations are used), then isometric words are connected with the definitions of isometric subgraphs of hypercubes. We discuss known results and some interesting generalizations and open problems.
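To make the definition concrete, the following minimal Python sketch checks the isometry condition for a single pair of f-free binary words under the Hamming distance: it tries every order of flipping the mismatch positions and reports whether at least one order keeps every intermediate word f-free. The function names and the example words are illustrative choices, not taken from the paper.

```python
from itertools import permutations

def is_f_free(w: str, f: str) -> bool:
    """True if f does not occur as a factor (contiguous block) of w."""
    return f not in w

def flip(w: str, i: int) -> str:
    """Replace the binary symbol at position i (one Hamming edit operation)."""
    return w[:i] + ("1" if w[i] == "0" else "0") + w[i + 1:]

def f_free_geodesic_exists(u: str, v: str, f: str) -> bool:
    """Check, for one pair of f-free words u and v of equal length, whether some
    minimal transformation (each mismatch position flipped exactly once) keeps
    every intermediate word f-free; brute force over all flipping orders."""
    assert len(u) == len(v) and is_f_free(u, f) and is_f_free(v, f)
    mismatches = [i for i in range(len(u)) if u[i] != v[i]]
    for order in permutations(mismatches):
        w, ok = u, True
        for i in order:
            w = flip(w, i)
            if not is_f_free(w, f):
                ok = False
                break
        if ok:
            return True
    return False

# Both example words avoid the factor 11; the check looks for an 11-free geodesic between them.
print(f_free_geodesic_exists("10010", "01001", "11"))
```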




Hypercubes and Isometric Words Based on Swap and Mismatch Distance

AUTHORS: M. Anselmo, G. Castiglione, M. Flores, D. Giammarresi, M. Madonia, S. Mantaci

URL: https://link.springer.com/chapter/10.1007/978-3-031-34326-1_2

Work Package : Work Package 7 – REVER

Keywords: Swap and mismatch distance, Isometric words, Hypercube

Abstract
The hypercube of dimension n is the graph whose vertices are the 2^n binary words of length n, and there is an edge between two of them if they have Hamming distance 1. We consider an edit distance based on swaps and mismatches, to which we refer as tilde-distance, and define the tilde-hypercube with edges linking words at tilde-distance 1. Then, we introduce and study some isometric subgraphs of the tilde-hypercube obtained by using special words called tilde-isometric words. The subgraphs keep only the vertices that avoid a given tilde-isometric word as a factor. An infinite family of tilde-isometric words is described; they are isometric with respect to the tilde-distance, but not to the Hamming distance. In the case of word 11, the subgraph is called tilde-Fibonacci cube, as a generalization of the classical Fibonacci cube. The tilde-hypercube and the tilde-Fibonacci cube can be recursively defined; the same holds for the number of their edges. This allows an asymptotic estimation of the number of edges in the tilde-Fibonacci cube, in comparison to the total number in the tilde-hypercube.
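The construction can be reproduced by brute force for small n. The sketch below (illustrative code, not the recursive formulas of the paper) builds the tilde-hypercube by linking words at tilde-distance 1, i.e. words differing by a single replacement or by a single swap of adjacent distinct symbols, then restricts the vertices to 11-free words to obtain the tilde-Fibonacci cube, and prints the two edge counts for comparison.

```python
from itertools import product

def tilde_adjacent(u: str, v: str) -> bool:
    """Tilde-distance 1: a single replacement, or a single swap of adjacent distinct symbols."""
    diff = [i for i in range(len(u)) if u[i] != v[i]]
    if len(diff) == 1:
        return True                                           # one mismatch
    if len(diff) == 2 and diff[1] == diff[0] + 1:
        i = diff[0]
        return u[i] == v[i + 1] and u[i + 1] == v[i]          # one adjacent swap
    return False

def edge_count(n: int, forbidden: str = "") -> int:
    """Edges of the tilde-hypercube of dimension n, optionally keeping only
    the vertices that avoid the factor `forbidden` (e.g. "11")."""
    words = ["".join(w) for w in product("01", repeat=n)]
    if forbidden:
        words = [w for w in words if forbidden not in w]
    return sum(tilde_adjacent(u, v) for i, u in enumerate(words) for v in words[i + 1:])

for n in range(1, 7):
    print(n, edge_count(n), edge_count(n, "11"))   # tilde-hypercube vs tilde-Fibonacci cube
```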




Isometric Words Based on Swap and Mismatch Distance

AUTHORS: M. Anselmo, G. Castiglione, M. Flores, D. Giammarresi, M. Madonia, S. Mantaci

URL: https://link.springer.com/chapter/10.1007/978-3-031-33264-7_3

Work Package : Work Package 7 – REVER

Keywords: Swap and mismatch distance, Isometric words, Overlap with errors

Abstract
An edit distance is a metric between words that quantifies how two words differ by counting the number of edit operations needed to transform one word into the other. A word f is said to be isometric with respect to an edit distance if, for any pair of f-free words u and v, there exists a transformation of minimal length from u to v via the related edit operations such that all the intermediate words are also f-free. The adjective “isometric” comes from the fact that, if the Hamming distance is considered (i.e., only mismatches), then isometric words define some isometric subgraphs of hypercubes. We consider the case of edit distance with swap and mismatch. We compare it with the case of mismatch only and prove some properties of isometric words that are related to particular features of their overlaps.
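As a concrete reference point, the sketch below computes the swap-and-mismatch (tilde) distance between two equal-length binary words exactly, by breadth-first search over single edit operations, and contrasts it with the Hamming distance. It is a didactic brute force, not the paper's combinatorial characterization.

```python
from collections import deque

def neighbors(w: str):
    """Words reachable from w by one replacement or one swap of adjacent distinct symbols."""
    for i, c in enumerate(w):
        yield w[:i] + ("1" if c == "0" else "0") + w[i + 1:]      # mismatch (replacement)
    for i in range(len(w) - 1):
        if w[i] != w[i + 1]:
            yield w[:i] + w[i + 1] + w[i] + w[i + 2:]             # adjacent swap

def tilde_distance(u: str, v: str) -> int:
    """Exact swap-and-mismatch distance via BFS (practical only for short words)."""
    dist, frontier = {u: 0}, deque([u])
    while frontier:
        w = frontier.popleft()
        if w == v:
            return dist[w]
        for x in neighbors(w):
            if x not in dist:
                dist[x] = dist[w] + 1
                frontier.append(x)
    raise ValueError("u and v must have the same length")

def hamming(u: str, v: str) -> int:
    return sum(a != b for a, b in zip(u, v))

# A swap repairs two adjacent mismatches in one step, so the tilde-distance
# can be strictly smaller than the Hamming distance.
print(hamming("0110", "1010"), tilde_distance("0110", "1010"))   # 2 1
```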




Measuring fairness under unawareness of sensitive attributes: A quantification-based approach

AUTHORS: A. Fabris, A. Esuli, A. Moreo, F. Sebastiani

URL: https://doi.org/10.1613/jair.1.14033

Work Package : All ITSERR WPs using FAIR data

Keywords: Algorithms, Models, Decision Making, Group Fairness, Demographic Attributes, Data Minimisation, Privacy, Fairness Measurement, Sensitive Attributes, Quantification, Supervised Learning, Prevalence Estimates, Distribution Shifts, Demographic Parity, Classifier Fairness

Abstract
Algorithms and models are increasingly deployed to inform decisions about people, inevitably affecting their lives. As a consequence, those in charge of developing these models must carefully evaluate their impact on different groups of people and favour group fairness, that is, ensure that groups determined by sensitive demographic attributes, such as race or sex, are not treated unjustly. To achieve this goal, the availability (awareness) of these demographic attributes to those evaluating the impact of these models is fundamental. Unfortunately, collecting and storing these attributes is often in conflict with industry practices and legislation on data minimisation and privacy. For this reason, it can be hard to measure the group fairness of trained models, even from within the companies developing them. In this work, we tackle the problem of measuring group fairness under unawareness of sensitive attributes, by using techniques from quantification, a supervised learning task concerned with directly providing group-level prevalence estimates (rather than individual-level class labels). We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem, as they are robust to inevitable distribution shifts while at the same time decoupling the (desirable) objective of measuring group fairness from the (undesirable) side effect of allowing the inference of sensitive attributes of individuals. In more detail, we show that fairness under unawareness can be cast as a quantification problem and solved with proven methods from the quantification literature. We show that these methods outperform previous approaches to measure demographic parity in five experimental protocols, corresponding to important challenges that complicate the estimation of classifier fairness under unawareness.
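As a toy illustration of the idea (not the paper's experimental protocols), the sketch below estimates a demographic parity gap without observing the sensitive attribute at audit time: an auxiliary attribute classifier is turned into an Adjusted Classify & Count quantifier that estimates the prevalence of the protected group within each decision class, from which the group-conditional acceptance rates are recovered via Bayes' rule. All data, variable names and parameters are synthetic assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy audit setting: y_hat are the audited classifier's decisions; the sensitive
# attribute a is unavailable at audit time and must be inferred from proxy
# features x through an auxiliary attribute classifier h.
n = 20_000
a = rng.integers(0, 2, n)
x = rng.normal(a[:, None] * 0.8, 1.0, (n, 3))                        # proxies correlated with a
y_hat = (rng.random(n) < np.where(a == 1, 0.55, 0.40)).astype(int)   # unequal acceptance rates

idx = rng.permutation(n)
train, calib, audit = idx[:5000], idx[5000:10000], idx[10000:]
h = LogisticRegression().fit(x[train], a[train])                     # attribute classifier

# Misclassification rates of h, needed by Adjusted Classify & Count (ACC).
pred_cal = h.predict(x[calib])
tpr = pred_cal[a[calib] == 1].mean()
fpr = pred_cal[a[calib] == 0].mean()

def acc_prevalence(xs: np.ndarray) -> float:
    """ACC quantification: correct the raw predicted prevalence with tpr/fpr."""
    cc = h.predict(xs).mean()
    return float(np.clip((cc - fpr) / (tpr - fpr), 0.0, 1.0))

# Prevalence of a=1 inside each decision group, then acceptance rates per group.
p1 = y_hat[audit].mean()                                             # P(y_hat = 1)
prev_pos = acc_prevalence(x[audit][y_hat[audit] == 1])               # est. P(a=1 | y_hat=1)
prev_neg = acc_prevalence(x[audit][y_hat[audit] == 0])               # est. P(a=1 | y_hat=0)
p_a1 = prev_pos * p1 + prev_neg * (1 - p1)                           # est. P(a = 1)
rate_a1 = prev_pos * p1 / p_a1                                       # est. P(y_hat=1 | a=1)
rate_a0 = (1 - prev_pos) * p1 / (1 - p_a1)                           # est. P(y_hat=1 | a=0)

print("estimated demographic parity gap:", round(rate_a1 - rate_a0, 3))
print("true gap on the audit set:",
      round(y_hat[audit][a[audit] == 1].mean() - y_hat[audit][a[audit] == 0].mean(), 3))
```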




Volumetric Fast Fourier Convolution for Detecting Ink on the Carbonized Herculaneum Papyri

AUTHORS: Fabio Quattrini, R. Cucchiara, S. Cascianelli, V. Pippi

URL: https://openaccess.thecvf.com/content/ICCV2023W/e-Heritage/papers/Quattrini_Volumetric_Fast_Fourier_Convolution_for_Detecting_Ink_on_the_Carbonized_ICCVW_2023_paper.pdf

Work Package : All ITSERR WPs using Artificial Intelligence

Keywords: Digital Document Restoration, Virtual Unwrapping, Herculaneum Papyri, Ink Detection, Computer Vision, X-ray Micro-Computed Tomography, Artificial Intelligence, Volumetric Data, Fast Fourier Convolutions, Carbon-based Ink

Abstract
Recent advancements in Digital Document Restoration (DDR) have led to significant breakthroughs in analyzing highly damaged written artifacts. Among those, there has been an increasing interest in applying Artificial Intelligence techniques for virtually unwrapping and automatically detecting ink on the Herculaneum papyri collection. This collection consists of carbonized scrolls and fragments of documents, which have been digitized via X-ray tomography to allow the development of ad-hoc deep learning-based DDR solutions. In this work, we propose a modification of the Fast Fourier Convolution operator for volumetric data and apply it in a segmentation architecture for ink detection on the challenging Herculaneum papyri, demonstrating its suitability via deep experimental analysis. To encourage the research on this task and the application of the proposed operator to other tasks involving volumetric data, we will release our implementation (https://github.com/aimagelab/vffc).
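As a rough illustration of how a Fourier-domain convolution can be lifted to volumetric inputs, here is a hedged PyTorch sketch of a spectral block in the spirit of the Fast Fourier Convolution: a 3D real FFT over the spatial dimensions, a pointwise convolution on the stacked real and imaginary parts, and the inverse FFT. It is a simplified stand-in, not the operator released at the linked repository; the layer sizes and the input shape are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class VolumetricSpectralBlock(nn.Module):
    """Sketch of the global (spectral) branch of a Fast Fourier Convolution
    adapted to 3D volumes: FFT over the spatial dims, a pointwise convolution
    on the stacked real/imaginary parts, then the inverse FFT."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(2 * channels, 2 * channels, kernel_size=1),
            nn.BatchNorm3d(2 * channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:       # x: (B, C, D, H, W)
        b, c, d, h, w = x.shape
        freq = torch.fft.rfftn(x, dim=(-3, -2, -1), norm="ortho")
        freq = torch.cat([freq.real, freq.imag], dim=1)        # (B, 2C, D, H, W//2+1)
        freq = self.conv(freq)
        real, imag = freq.chunk(2, dim=1)
        freq = torch.complex(real, imag)
        return torch.fft.irfftn(freq, s=(d, h, w), dim=(-3, -2, -1), norm="ortho")

# Example: a sub-volume of a scanned scroll would enter as a 5D tensor.
vol = torch.randn(1, 8, 16, 64, 64)
print(VolumetricSpectralBlock(8)(vol).shape)                   # torch.Size([1, 8, 16, 64, 64])
```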




How to Choose Pretrained Handwriting Recognition Models for Single Writer Fine-Tuning

AUTHORS: R. Cucchiara, S. Cascianelli, V. Pippi

URL: https://link.springer.com/chapter/10.1007/978-3-031-41679-8_19

Work Package : All ITSERR WPs using Artificial Intelligence

Keywords: Document synthesis, Historical document analysis, Handwriting recognition, Synthetic data

Abstract
Recent advancements in Deep Learning-based Handwritten Text Recognition (HTR) have led to models with remarkable performance on both modern and historical manuscripts in large benchmark datasets. Nonetheless, those models struggle to obtain the same performance when applied to manuscripts with peculiar characteristics, such as language, paper support, ink, and author handwriting. This issue is very relevant for valuable but small collections of documents preserved in historical archives, for which obtaining sufficient annotated training data is costly or, in some cases, unfeasible. To overcome this challenge, a possible solution is to pretrain HTR models on large datasets and then fine-tune them on small single-author collections. In this paper, we take into account large, real benchmark datasets and synthetic ones obtained with a styled Handwritten Text Generation model. Through extensive experimental analysis, also considering the number of fine-tuning lines, we give a quantitative indication of the most relevant characteristics of such data for obtaining an HTR model able to effectively transcribe manuscripts in small collections with as few as five real fine-tuning lines.
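A minimal sketch of the fine-tuning recipe the paper investigates might look as follows: a recognizer pretrained elsewhere is adapted to a single writer with only five transcribed lines, a small learning rate and CTC loss. The tiny model, the random tensors standing in for line images, and all hyperparameters are placeholders chosen only to keep the example runnable; they are not the architectures or settings used in the paper.

```python
import torch
import torch.nn as nn

# Toy stand-in for a pretrained HTR backbone (CNN encoder + BiLSTM + CTC head).
# In practice the weights would come from pretraining on a large real or
# synthetic dataset; here they are random for the sake of a runnable sketch.
class TinyCRNN(nn.Module):
    def __init__(self, n_classes: int):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                                 nn.MaxPool2d((2, 1)), nn.Conv2d(32, 64, 3, padding=1),
                                 nn.ReLU(), nn.AdaptiveAvgPool2d((1, None)))
        self.rnn = nn.LSTM(64, 128, bidirectional=True, batch_first=True)
        self.head = nn.Linear(256, n_classes + 1)              # +1 for the CTC blank

    def forward(self, x):                                      # x: (B, 1, H, W)
        feats = self.cnn(x).squeeze(2).permute(0, 2, 1)        # (B, W, 64)
        out, _ = self.rnn(feats)
        return self.head(out).log_softmax(-1)                  # (B, W, n_classes + 1)

charset = "abcdefghijklmnopqrstuvwxyz "
model = TinyCRNN(len(charset))                                 # pretrained weights assumed here
ctc = nn.CTCLoss(blank=len(charset), zero_infinity=True)
optim = torch.optim.AdamW(model.parameters(), lr=1e-4)         # small LR for fine-tuning

# Five "real" fine-tuning lines from the target writer (random tensors as placeholders).
lines = torch.randn(5, 1, 64, 256)
texts = ["in principio", "erat verbum", "et verbum", "erat apud", "deum"]
targets = [torch.tensor([charset.index(c) for c in t]) for t in texts]

for epoch in range(20):                                        # a few epochs usually suffice
    logp = model(lines).permute(1, 0, 2)                       # (T, B, C) layout for CTCLoss
    loss = ctc(logp, nn.utils.rnn.pad_sequence(targets, batch_first=True),
               input_lengths=torch.full((5,), logp.size(0), dtype=torch.long),
               target_lengths=torch.tensor([len(t) for t in targets]))
    optim.zero_grad(); loss.backward(); optim.step()
```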




Handwritten Text Generation from Visual Archetypes

AUTHORS: R. Cucchiara, S. Cascianelli, V. Pippi

URL: https://ceur-ws.org/Vol-3536/03_paper.pdf

Work Package : All ITSERR WPs using Artificial Intelligence

Keywords: HTG, Text Generation, Characters, Visual Archetypes, Transformer, Calligraphic, GANs, Encoding, Training, Synthetic

Abstract
Generating synthetic images of handwritten text in a writer-specific style is a challenging task, especially in the case of unseen styles and new words, and even more so when the latter contain characters that are rarely encountered during training. While emulating a writer’s style has been recently addressed by generative models, the generalization towards rare characters has been disregarded. In this work, we devise a Transformer-based model for Few-Shot styled handwritten text generation and focus on obtaining a robust and informative representation of both the text and the style. In particular, we propose a novel representation of the textual content as a sequence of dense vectors obtained from images of symbols written as standard GNU Unifont glyphs, which can be considered their visual archetypes. This strategy is more suitable for generating characters that, despite having been seen rarely during training, possibly share visual details with the frequently observed ones. As for the style, we obtain a robust representation of unseen writers’ calligraphy by exploiting specific pre-training on a large synthetic dataset. Quantitative and qualitative results demonstrate the effectiveness of our proposal in generating words in unseen styles and with rare characters more faithfully than existing approaches relying on independent one-hot encodings of the characters.
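To clarify the content representation, the sketch below renders each character as a GNU Unifont glyph image and flattens it into a dense vector, so that a text becomes a sequence of "visual archetype" vectors instead of one-hot indices. The font path, image size and example string are assumptions (a local copy of unifont.ttf is required); this is an illustration of the idea rather than the authors' exact preprocessing.

```python
import numpy as np
from PIL import Image, ImageDraw, ImageFont

# Assumed local copy of GNU Unifont; download the font file separately.
FONT = ImageFont.truetype("unifont.ttf", 16)

def archetype(ch: str) -> np.ndarray:
    """Render one character as a 16x16 Unifont glyph and flatten it to a dense vector."""
    img = Image.new("L", (16, 16), 0)
    ImageDraw.Draw(img).text((0, 0), ch, fill=255, font=FONT)
    return np.asarray(img, dtype=np.float32).reshape(-1) / 255.0

def encode(text: str) -> np.ndarray:
    """Content as a sequence of glyph vectors: rare characters still obtain an
    informative encoding that shares strokes with frequently seen ones."""
    return np.stack([archetype(c) for c in text])

print(encode("ſcribere").shape)   # (8, 256): one 256-dimensional vector per character
```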




Bridging Islamic Knowledge and AI: Inquiring ChatGPT on Possible Categorizations for an Islamic Digital Library (full paper)

AUTHORS: A. El Ganadi, R. A. Vigliermo, L. Sala, M. Vanzini, F. Ruozzi, S. Bergamaschi

URL: https://ceur-ws.org/Vol-3536/03_paper.pdf

Work Package : WP5

Keywords: Libraries and Archives in CH, Digital Libraries and Religious Archives, ChatGPT, Islamic studies, Arabic script languages, Islamic knowledge classification, Islamic subjects

Abstract
This research evaluates the capabilities of ChatGPT in assisting with the categorization of an Islamic digital library, exploiting incremental Machine Learning and Transfer Learning techniques. Noticeably, ChatGPT showcased a remarkable familiarity with Islamic knowledge, evident in its ability to classify subjects hierarchically based on their importance, from Qur’anic Studies to Modern Islamic Thought. The library aimed to cater to a diverse Arabic Islamic audience with collections sourced from varied digital donations. Despite ChatGPT’s commendable proficiency, several challenges arose, with interpretability, generalization, and the hallucination issue standing out as the most critical obstacles.




Knowledge extraction, management and long-term preservation of non-Latin cultural heritages-Digital Maktaba project presentation

AUTHORS: S. Bergamaschi, R. Martoglia, F. Ruozzi, R. A. Vigliermo, L. Sala, M. Vanzini

URL: https://ceur-ws.org/Vol-3365/short11.pdf

Work Package : WP5

Keywords: Cultural heritages, Non-Latin alphabets, Knowledge extraction, Machine Learning, Natural Language Processing, Big data management, Long-term preservation, Big data integration, Named Entity Recognition

Abstract
The services provided by today’s cutting-edge digital library systems may benefit from new technologies that can improve cataloguing efficiency and cultural heritage preservation and accessibility. Below, we introduce the recently started Digital Maktaba (DM) project, which suggests a new model for the knowledge extraction and semi-automatic cataloguing task in the context of digital libraries that contain documents in non-Latin scripts (e.g. Arabic). Since DM involves a large amount of unorganized data from several sources, particular emphasis will be placed on topics such as big data integration, big data analysis and long-term preservation. This project aims to create an innovative workflow for the automatic extraction of information and metadata and for a semi-automated cataloguing process by exploiting Machine Learning, Natural Language Processing, Artificial Intelligence and data management techniques to provide a system that is capable of speeding up, enhancing and supporting the librarian’s work. We also report on some promising results that we obtained through a preliminary proof of concept experimentation. (Short paper, discussion paper)




Knowledge Extraction and Cross-Language Data Integration in Digital Libraries

AUTHORS: L. Sala

URL: https://ceur-ws.org/Vol-3478/paper17.pdf

Work Package : WP5

Keywords: Data Integration, Cross-Language Record Linkage, Knowledge Extraction, Long-term Preservation

Abstract
Digital Humanities (DH) is an interdisciplinary field that has grown rapidly in recent years, requiring the creation of an efficient and uniform platform capable of managing various types of data in several languages. This paper presents the research objectives and methodologies of my PhD project: the creation of a novel framework for Knowledge Extraction and Multilingual Data Integration in the context of digital libraries in non-Latin languages, in particular Arabic, Persian and Azerbaijani. The research began with the Digital Maktaba (DM) project and continued within the PNRR ITSERR infrastructure, in which the DBGroup participates. The project aims to develop a two-component framework consisting of a Knowledge Extraction Subsystem and a Data Integration Subsystem. The case study is based on the DM project, which seeks to create a flexible and efficient digital library for preserving and analyzing multicultural heritage documents by exploiting the available and ad-hoc created datasets, Explainable Machine Learning, Natural Language Processing (NLP) technologies and Data Integration approaches. Key challenges and future developments in Knowledge Extraction and Data Integration are examined, which involve leveraging the MOMIS system for Data Integration tasks and adopting a microservices-based architecture for the effective implementation of the system. The goal is to provide a versatile platform for organizing and integrating various data sources and languages, thereby fostering a more inclusive and accessible global perspective on cultural and historical artefacts that encourages collaboration in building an expanding knowledge base.