Intertestualità tra Bibbie e antichi commentari cristiani: l’esempio di simul nel De Genesi ad litteram di Agostino

AUTHORS: ANNA MAMBELLI; DAVIDE DAINESE

WORK PACKAGE: WP 8 – uBIQUity

URL: https://lexicon.cnr.it/ojs/index.php/LP/article/view/872

Keywords: Intertextuality; Biblical Quotations; Augustine; De Genesi ad litteram; Genesis (OT Book); Patristic Exegesis

Abstract
This contribution presents a case study that, on the basis of some occurrences of the adverb simul in Augustine’s De Genesi ad litteram, illustrates the classification system we adopt to map the intertextual relationships between the known Greek and Latin versions of the Bible and selected patristic texts. This taxonomy has been developed within the framework of two research projects, joined within the European research infrastructure for Religious Studies “Resilience-RI”. After a methodological introduction based on the state of the art, we explain the workflow and then present the concrete example of the adverb simul, focusing on the use of some passages from Genesis 1 and Sirach 18:1 in Augustine’s commentary.




Digital Dark Ages: The Role of Medieval Corpora in the Context of the Digital Humanities and Religious Studies

AUTHORS: LAURA RIGHI

WORK PACKAGE: WP 7 – REVER

URL: https://www.rivisteweb.it/doi/10.17395/112876

KEYWORDS: Middle Ages, Digital Humanities, Religious Studies

Abstract
In recent years, the debate on the role and methodologies of the digital humanities has developed considerably, including in the specific – but disciplinarily vast – domain of Religious Studies. Even though the debate is recent, its premises rest on epistemological questions and assumptions whose history is important to outline. In this context, a great contribution can be provided by research conducted on medieval textual corpora. Through the study of a number of cases, from Roberto Busa’s Index Thomisticus up to ongoing research projects, this contribution presents some trends and specificities of the analysis and publication of medieval sources in the digital environment. The aim is to discuss the innovations and limits of this research field, and its possible contribution to the ongoing debate on digital religious studies.




Moving beyond the Content: 3D Scanning and Post-Processing Analysis of the Cuneiform Tablets of the Turin Collection

AUTHORS: FILIPPO DIARA; FRANCESCO GIUSEPPE BARSACCHI; STEFANO DE MARTINO

URL: https://www.mdpi.com/2076-3417/14/11/4492

WORK PACKAGE: WP 9 – TAURUS

KEYWORDS: 3D scanning; cuneiform tablets; digital imaging; fingerprints; MSII; sealings

Abstract

This work focuses on how 3D scanning methodologies and post-processing analyses can support a deeper investigation of cuneiform tablets beyond their written content. The dataset proposed herein is a key part of the archaeological collection preserved in the Musei Reali of Turin in Italy; these artefacts hold further important semantic information that can be extracted through detailed 3D documentation and 3D model filtering. The scanning process is a fundamental tool for a better reading of seal impressions beneath the cuneiform text, as well as for detecting micrometric evidence such as the fingerprints of scribes. Most of the seal impressions were made before the writing (like a watermark) and are therefore hardly detectable to the naked eye, due to the cuneiform signs above them as well as to the state of preservation. In this regard, 3D scanning and post-processing analysis can help identify these nearly invisible features impressed on the tablets. Analysis of fingerprints and of the depths of the signs can also tell us about the workers’ strategies and the people behind the artefacts. Three-dimensional models generated inside the Artec 3D ecosystem via the Space Spider scanner and Artec Studio software were further investigated by applying specific filters and shaders. Digital light manipulation can reveal, through the dynamic displacement of light and shadows, particular details that can then be analysed in depth with specific post-processing operations: for example, the MSII (multi-scale integral invariant) filter is a powerful tool for revealing hidden and otherwise unperceived features such as fingerprints and seal impressions (stratigraphically below the cuneiform signs). Finally, the collected data will be handled in two ways: in an open-access repository and through a common data environment (CDE) to support data exchange among project collaborators and general users.
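As a rough illustration of the idea behind integral-invariant filtering, the following Python sketch (an assumption-laden simplification of ours, not the MSII implementation used in the paper) computes a multi-scale sphere-volume response on a 2.5D depth map of a small tablet crop: at each surface point it measures, for several radii, the fraction of a ball lying inside the clay, so that shallow concavities such as fingerprint ridges or seal impressions stand out from flat areas.

```python
# Simplified, illustrative sketch of an MSII-like multi-scale integral invariant
# on a depth map (the real MSII filter operates on the full triangle meshes).
import numpy as np
from scipy.ndimage import convolve

def ball_kernel(r: int) -> np.ndarray:
    """Binary ball of radius r voxels."""
    z, y, x = np.mgrid[-r:r + 1, -r:r + 1, -r:r + 1]
    return (z**2 + y**2 + x**2 <= r**2).astype(np.float32)

def msii_like(depth: np.ndarray, radii=(2, 4, 8)) -> np.ndarray:
    """Volume fraction of a ball lying below the surface, at several radii.
    Values above 0.5 indicate concave relief, below 0.5 convex relief."""
    h, w = depth.shape
    zmax = int(np.ceil(depth.max())) + max(radii) + 1
    # Occupancy volume: voxel (z, y, x) is solid if it lies below the surface.
    z = np.arange(zmax).reshape(-1, 1, 1)
    solid = (z <= depth[None, :, :]).astype(np.float32)
    zs = np.clip(depth.round().astype(int), 0, zmax - 1)   # surface voxel per column
    rows, cols = np.arange(h)[:, None], np.arange(w)[None, :]
    responses = []
    for r in radii:
        k = ball_kernel(r)
        frac = convolve(solid, k, mode="nearest") / k.sum()
        responses.append(frac[zs, rows, cols])              # sample at the surface
    return np.stack(responses)                              # (len(radii), h, w)

depth = np.random.rand(64, 64) * 5          # placeholder depth map, in voxel units
print(msii_like(depth).shape)               # (3, 64, 64)
```

The intuition is the same as in the mesh-based MSII filter: responses that differ across scales single out fine, shallow relief that uniform lighting would not reveal.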




Isometric Words and Edit Distance: Main Notions and New Variations

AUTHORS: G. Castiglione, M. Flores, D. Giammarresi

URL: https://link.springer.com/chapter/10.1007/978-3-031-42250-8_1

WORK PACKAGE: WP 7 – REVER

Keywords: Isometric words, Edit distance, Generalized Fibonacci cubes

Abstract
Isometric words combine the notion of edit distance with properties of words not appearing as factors in other words. An edit distance is a metric between words that quantifies how two words differ by counting the number of edit operations needed to transform one word into the other. A word f is said to be isometric with respect to an edit distance if, for any pair of f-free words u and v, there exists a transformation of minimal length from u into v via the related edit operations such that all the intermediate words are also f-free. The adjective “isometric” comes from the fact that, if the Hamming distance is considered (i.e., only replacement operations are used), then isometric words are connected with the definition of isometric subgraphs of hypercubes. We discuss known results together with some interesting generalizations and open problems.
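To make the definitions concrete, here is a small Python sketch (our own illustration, not the paper’s method) that brute-forces the isometric property for the Hamming distance on short binary words: for every pair of f-free words of a given length it looks for an ordering of the mismatch positions whose intermediate words all stay f-free.

```python
# Brute-force check of Hamming-isometricity on binary words of length n.
# Each step fixes exactly one mismatch, so every successful path has length
# equal to the Hamming distance, i.e. it is a minimal transformation.
from itertools import product

def is_free(w: str, f: str) -> bool:
    """True if w does not contain f as a factor (contiguous block of letters)."""
    return f not in w

def _reachable(u: str, v: str, f: str) -> bool:
    """Can u be turned into v by one replacement at a time, staying f-free?"""
    if u == v:
        return True
    for i in range(len(u)):
        if u[i] != v[i]:
            w = u[:i] + v[i] + u[i + 1:]
            if is_free(w, f) and _reachable(w, v, f):
                return True
    return False

def is_isometric_hamming(f: str, n: int) -> bool:
    """Exhaustive check of the isometric property on binary words of length n."""
    free_words = [w for w in ("".join(p) for p in product("01", repeat=n))
                  if is_free(w, f)]
    return all(_reachable(u, v, f) for u in free_words for v in free_words)

# f = 11 is expected to pass: the 11-free words induce the Fibonacci cube,
# a classical isometric subgraph of the hypercube.
print(is_isometric_hamming("11", 5))
```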




Hypercubes and Isometric Words Based on Swap and Mismatch Distance

AUTHORS: M. Anselmo, G. Castiglione, M. Flores, D. Giammarresi, M. Madonia, S. Mantaci

URL: https://link.springer.com/chapter/10.1007/978-3-031-34326-1_2

WORK PACKAGE: WP 7 – REVER

Keywords: Swap and mismatch distance, Isometric words, Hypercube

Abstract
The hypercube of dimension n is the graph whose vertices are the 2^n binary words of length n, with an edge between two of them if they have Hamming distance 1. We consider an edit distance based on swaps and mismatches, referred to as the tilde-distance, and define the tilde-hypercube with edges linking words at tilde-distance 1. We then introduce and study some isometric subgraphs of the tilde-hypercube obtained by using special words called tilde-isometric words. The subgraphs keep only the vertices that avoid a given tilde-isometric word as a factor. An infinite family of tilde-isometric words is described; they are isometric with respect to the tilde-distance, but not to the Hamming distance. In the case of the word 11, the subgraph is called the tilde-Fibonacci cube, as a generalization of the classical Fibonacci cube. The tilde-hypercube and the tilde-Fibonacci cube can be defined recursively, and the same holds for the number of their edges. This allows an asymptotic estimate of the number of edges in the tilde-Fibonacci cube, in comparison to the total number in the tilde-hypercube.
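The following Python sketch (our own illustrative construction, not code from the paper) builds the tilde-hypercube for small n by linking words at tilde-distance 1, i.e. words differing by one replacement or by one swap of adjacent distinct symbols, and extracts the tilde-Fibonacci cube as the subgraph induced by the 11-free words; printing the edge counts for growing n gives a feel for the recursive growth mentioned above.

```python
# Tilde-hypercube and tilde-Fibonacci cube on binary words of length n.
from itertools import product

def tilde_neighbors(w: str):
    """Words reachable from w by one mismatch or one swap of adjacent distinct symbols."""
    out = set()
    for i in range(len(w)):
        out.add(w[:i] + ("1" if w[i] == "0" else "0") + w[i + 1:])   # replacement
    for i in range(len(w) - 1):
        if w[i] != w[i + 1]:
            out.add(w[:i] + w[i + 1] + w[i] + w[i + 2:])             # adjacent swap
    return out

def tilde_hypercube(n: int):
    """Adjacency dict of the tilde-hypercube of dimension n."""
    words = ["".join(p) for p in product("01", repeat=n)]
    return {w: tilde_neighbors(w) for w in words}

def tilde_fibonacci_cube(n: int):
    """Subgraph induced by the 11-free words (vertices avoiding the factor 11)."""
    cube = tilde_hypercube(n)
    keep = {w for w in cube if "11" not in w}
    return {w: {v for v in cube[w] if v in keep} for w in keep}

def edge_count(graph) -> int:
    return sum(len(nbrs) for nbrs in graph.values()) // 2

for n in range(1, 7):
    print(n, edge_count(tilde_hypercube(n)), edge_count(tilde_fibonacci_cube(n)))
```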




Isometric Words Based on Swap and Mismatch Distance

AUTHORS: M. Anselmo, G. Castiglione, M. Flores, D. Giammarresi, M. Madonia, S. Mantaci

URL: https://link.springer.com/chapter/10.1007/978-3-031-33264-7_3

WORK PACKAGE: WP 7 – REVER

Keywords: Swap and mismatch distance, Isometric words, Overlap with errors

Abstract
An edit distance is a metric between words that quantifies how two words differ by counting the number of edit operations needed to transform one word into the other. A word f is said to be isometric with respect to an edit distance if, for any pair of f-free words u and v, there exists a transformation of minimal length from u to v via the related edit operations such that all the intermediate words are also f-free. The adjective “isometric” comes from the fact that, if the Hamming distance is considered (i.e., only mismatches are allowed), then isometric words define some isometric subgraphs of hypercubes. We consider the case of the edit distance with swaps and mismatches. We compare it with the mismatch-only case and prove some properties of isometric words that are related to particular features of their overlaps.
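As a concrete companion to the definition, this Python sketch (assuming, as the hypercube construction in the previous entry suggests, that both a mismatch and a swap of adjacent distinct symbols have unit cost) computes the swap-and-mismatch distance between two words of equal length with a simple dynamic programme.

```python
# Dynamic programme for the swap-and-mismatch ("tilde") distance, unit costs.
def tilde_distance(u: str, v: str) -> int:
    if len(u) != len(v):
        raise ValueError("defined here on words of equal length")
    n = len(u)
    d = [0] * (n + 1)                 # d[i] = distance between u[:i] and v[:i]
    for i in range(1, n + 1):
        d[i] = d[i - 1] + (u[i - 1] != v[i - 1])          # replacement (mismatch)
        if (i >= 2 and u[i - 2] != u[i - 1]
                and u[i - 2] == v[i - 1] and u[i - 1] == v[i - 2]):
            d[i] = min(d[i], d[i - 2] + 1)                # swap of adjacent symbols
    return d[n]

print(tilde_distance("0110", "1010"))   # 1: a single adjacent swap suffices
print(tilde_distance("0110", "1001"))   # 2: two swaps, while the Hamming distance is 4
```

The second example shows a pair of words whose swap-and-mismatch distance is strictly smaller than their Hamming distance, which is why the two resulting notions of isometric word can differ.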




Measuring fairness under unawareness of sensitive attributes: A quantification-based approach

AUTHORS: A. Fabris, A. Esuli, A. Moreo, F. Sebastiani

URL: https://doi.org/10.1613/jair.1.14033

WORK PACKAGE: All ITSERR WPs using FAIR data

Keywords: Algorithms, Models, Decision Making, Group Fairness, Demographic Attributes, Data Minimisation, Privacy, Fairness Measurement, Sensitive Attributes, Quantification, Supervised Learning, Prevalence Estimates, Distribution Shifts, Demographic Parity, Classifier Fairness

Abstract
Algorithms and models are increasingly deployed to inform decisions about people, inevitably affecting their lives. As a consequence, those in charge of developing these models must carefully evaluate their impact on different groups of people and favour group fairness, that is, ensure that groups determined by sensitive demographic attributes, such as race or sex, are not treated unjustly. To achieve this goal, the availability (awareness) of these demographic attributes to those evaluating the impact of these models is fundamental. Unfortunately, collecting and storing these attributes is often in conflict with industry practices and legislation on data minimisation and privacy. For this reason, it can be hard to measure the group fairness of trained models, even from within the companies developing them. In this work, we tackle the problem of measuring group fairness under unawareness of sensitive attributes by using techniques from quantification, a supervised learning task concerned with directly providing group-level prevalence estimates (rather than individual-level class labels). We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem, as they are robust to inevitable distribution shifts while at the same time decoupling the (desirable) objective of measuring group fairness from the (undesirable) side effect of allowing the inference of sensitive attributes of individuals. In more detail, we show that fairness under unawareness can be cast as a quantification problem and solved with proven methods from the quantification literature. We show that these methods outperform previous approaches for measuring demographic parity in five experimental protocols, corresponding to important challenges that complicate the estimation of classifier fairness under unawareness.
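As an illustration of the quantification-based idea (a minimal sketch under our own simplifying assumptions; the paper evaluates several quantification methods and protocols), the snippet below uses Adjusted Classify and Count, a standard quantification method, to estimate the prevalence of the sensitive attribute inside the groups of accepted and rejected individuals, and then derives the two group acceptance rates via Bayes’ rule; their difference is an estimate of the demographic parity gap.

```python
# Estimating demographic parity of a classifier h when the sensitive attribute S
# is unavailable on the audit set, via an auxiliary attribute model g and the
# Adjusted Classify & Count (ACC) quantification method.
import numpy as np

def acc_prevalence(g_pred: np.ndarray, tpr: float, fpr: float) -> float:
    """ACC estimate of P(S=1) in a set, given g's hard predictions on that set
    and g's true/false positive rates measured on held-out data where S is known."""
    cc = g_pred.mean()                       # classify-and-count estimate
    if abs(tpr - fpr) < 1e-9:
        return float(cc)                     # degenerate g: fall back to CC
    return float(np.clip((cc - fpr) / (tpr - fpr), 0.0, 1.0))

def demographic_parity_estimate(h_pred, g_pred, tpr, fpr):
    """Estimates of P(h=1 | S=1) and P(h=1 | S=0), obtained by quantifying the
    prevalence of S=1 inside the accepted and rejected groups."""
    h_pred, g_pred = np.asarray(h_pred), np.asarray(g_pred)
    p_pos = h_pred.mean()                                        # P(h = 1)
    prev_in_pos = acc_prevalence(g_pred[h_pred == 1], tpr, fpr)  # P(S=1 | h=1)
    prev_in_neg = acc_prevalence(g_pred[h_pred == 0], tpr, fpr)  # P(S=1 | h=0)
    p_s1 = prev_in_pos * p_pos + prev_in_neg * (1 - p_pos)       # P(S=1)
    rate_s1 = prev_in_pos * p_pos / max(p_s1, 1e-9)              # P(h=1 | S=1)
    rate_s0 = (1 - prev_in_pos) * p_pos / max(1 - p_s1, 1e-9)    # P(h=1 | S=0)
    return rate_s1, rate_s0
    # The demographic parity gap can then be estimated as rate_s1 - rate_s0.
```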




Volumetric Fast Fourier Convolution for Detecting Ink on the Carbonized Herculaneum Papyri

AUTHORS: F. Quattrini, R. Cucchiara, S. Cascianelli, V. Pippi

URL: https://openaccess.thecvf.com/content/ICCV2023W/e-Heritage/papers/Quattrini_Volumetric_Fast_Fourier_Convolution_for_Detecting_Ink_on_the_Carbonized_ICCVW_2023_paper.pdf

WORK PACKAGE: All ITSERR WPs using Artificial Intelligence

Keywords: Digital Document Restoration, Virtual Unwrapping, Herculaneum Papyri, Ink Detection, Computer Vision, X-ray Micro-Computed Tomography, Artificial Intelligence, Volumetric Data, Fast Fourier Convolutions, Carbon-based Ink

Abstract
Recent advancements in Digital Document Restoration (DDR) have led to significant breakthroughs in analyzing highly damaged written artifacts. Among those, there has been an increasing interest in applying Artificial Intelligence techniques for virtually unwrapping and automatically detecting ink on the Herculaneum papyri collection. This collection consists of carbonized scrolls and fragments of documents, which have been digitized via X-ray tomography to allow the development of ad hoc deep learning-based DDR solutions. In this work, we propose a modification of the Fast Fourier Convolution operator for volumetric data and apply it in a segmentation architecture for ink detection on the challenging Herculaneum papyri, demonstrating its suitability via deep experimental analysis. To encourage research on this task and the application of the proposed operator to other tasks involving volumetric data, we will release our implementation (https://github.com/aimagelab/vffc).
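To convey the core mechanism (a minimal sketch under our own assumptions, not the operator released at the repository above), the following PyTorch module implements the spectral part of a Fast Fourier Convolution extended to volumetric data: the feature volume is moved to the 3D Fourier domain with a real FFT, real and imaginary parts are mixed by a pointwise convolution, and the result is transformed back, giving every output voxel a receptive field covering the whole volume.

```python
# Spectral branch of an FFC-like block, extended to 3D volumes (illustrative).
import torch
import torch.nn as nn

class VolumetricSpectralTransform(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Real and imaginary parts are stacked along the channel axis,
        # so the spectral convolution operates on 2 * channels feature maps.
        self.spectral_conv = nn.Sequential(
            nn.Conv3d(2 * channels, 2 * channels, kernel_size=1),
            nn.BatchNorm3d(2 * channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, depth, height, width)
        d, h, w = x.shape[-3:]
        freq = torch.fft.rfftn(x, dim=(-3, -2, -1), norm="ortho")
        freq = torch.cat([freq.real, freq.imag], dim=1)
        freq = self.spectral_conv(freq)
        real, imag = torch.chunk(freq, 2, dim=1)
        freq = torch.complex(real, imag)
        return torch.fft.irfftn(freq, s=(d, h, w), dim=(-3, -2, -1), norm="ortho")

# Example: a small random volume standing in for a micro-CT sub-block of a scroll.
x = torch.randn(1, 8, 16, 32, 32)
print(VolumetricSpectralTransform(8)(x).shape)   # torch.Size([1, 8, 16, 32, 32])
```

In the FFC design this global branch is paired with a conventional local convolution branch; the sketch isolates only the spectral step.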




How to Choose Pretrained Handwriting Recognition Models for Single Writer Fine-Tuning

AUTHORS: R. Cucchiara, S. Cascianelli, V. Pippi

URL: https://link.springer.com/chapter/10.1007/978-3-031-41679-8_19

WORK PACKAGE: All ITSERR WPs using Artificial Intelligence

Keywords: Document synthesis, Historical document analysis, Handwriting recognition, Synthetic data

Abstract
Recent advancements in Deep Learning-based Handwritten Text Recognition (HTR) have led to models with remarkable performance on both modern and historical manuscripts in large benchmark datasets. Nonetheless, those models struggle to obtain the same performance when applied to manuscripts with peculiar characteristics, such as language, paper support, ink, and author handwriting. This issue is very relevant for valuable but small collections of documents preserved in historical archives, for which obtaining sufficient annotated training data is costly or, in some cases, unfeasible. To overcome this challenge, a possible solution is to pretrain HTR models on large datasets and then fine-tune them on small single-author collections. In this paper, we take into account large, real benchmark datasets and synthetic ones obtained with a styled Handwritten Text Generation model. Through extensive experimental analysis, also considering the number of fine-tuning lines, we give a quantitative indication of the most relevant characteristics of such data for obtaining an HTR model able to effectively transcribe manuscripts in small collections with as few as five real fine-tuning lines.
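The fine-tuning setting described above can be sketched as follows in PyTorch (illustrative only: TrOCR from the Hugging Face transformers library stands in for the HTR architectures studied in the paper, and writer_lines is a hypothetical placeholder for the handful of annotated lines from the target writer).

```python
# Fine-tuning a pretrained HTR model on a few lines from a single writer (sketch).
import torch
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def fine_tune(writer_lines, epochs: int = 10):
    """writer_lines: up to five (PIL.Image, str) pairs from the target writer."""
    model.train()
    for _ in range(epochs):
        for image, transcription in writer_lines:
            pixel_values = processor(images=image, return_tensors="pt").pixel_values
            labels = processor.tokenizer(transcription, return_tensors="pt").input_ids
            loss = model(pixel_values=pixel_values, labels=labels).loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
```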




Handwritten Text Generation from Visual Archetypes

AUTHORS: R. Cucchiara, S. Cascianelli, V. Pippi

URL: https://ceur-ws.org/Vol-3536/03_paper.pdf

WORK PACKAGE: All ITSERR WPs using Artificial Intelligence

Keywords: HTG, Text Generation, Characters, Visual Archetypes, Transformer, Calligraphic, GANs, Encoding, Training, Synthetic

Abstract
Generating synthetic images of handwritten text in a writer-specific style is a challenging task, especially in the case of unseen styles and new words, and even more so when the latter contain characters that are rarely encountered during training. While emulating a writer’s style has recently been addressed by generative models, generalization towards rare characters has been disregarded. In this work, we devise a Transformer-based model for few-shot styled handwritten text generation and focus on obtaining a robust and informative representation of both the text and the style. In particular, we propose a novel representation of the textual content as a sequence of dense vectors obtained from images of symbols written as standard GNU Unifont glyphs, which can be considered their visual archetypes. This strategy is more suitable for generating characters that, despite having been seen rarely during training, possibly share visual details with the frequently observed ones. As for the style, we obtain a robust representation of unseen writers’ calligraphy by exploiting specific pre-training on a large synthetic dataset. Quantitative and qualitative results demonstrate the effectiveness of our proposal in generating words in unseen styles and with rare characters more faithfully than existing approaches relying on independent one-hot encodings of the characters.
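A minimal Python sketch of the visual-archetype idea (our own illustration; the font path "unifont.ttf" and the 16x16 glyph size are assumptions): each character of the textual content is rasterised as a Unifont-style glyph and flattened into a dense vector, so rare characters that share strokes with frequent ones also share parts of their representation.

```python
# Representing textual content as a sequence of flattened glyph bitmaps.
import numpy as np
from PIL import Image, ImageDraw, ImageFont

GLYPH_SIZE = 16
font = ImageFont.truetype("unifont.ttf", GLYPH_SIZE)   # font path is an assumption

def char_archetype(ch: str) -> np.ndarray:
    """Rasterise one character into a flattened greyscale glyph vector."""
    canvas = Image.new("L", (GLYPH_SIZE, GLYPH_SIZE), color=0)
    ImageDraw.Draw(canvas).text((0, 0), ch, fill=255, font=font)
    return (np.asarray(canvas, dtype=np.float32) / 255.0).reshape(-1)

def text_to_archetypes(text: str) -> np.ndarray:
    """One 256-dimensional visual-archetype vector per character."""
    return np.stack([char_archetype(ch) for ch in text])

# A word with a rarely seen character still shares glyph strokes with common ones.
print(text_to_archetypes("Ṡcriptoria").shape)   # (10, 256)
```

In this framing, such glyph-based vectors replace independent one-hot character encodings as the content input to the generator.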