It is our great pleasure to announce the 2024 Joint Chapter Meeting of CAA Netherlands/Flanders and CAA Germany which will be held in Groningen, The Netherlands, October 24th to 25th, 2024.
Registration
Registration is now open! Please go to this form to register. Payment is by bank transfer: please transfer your fee (€15 or €25, see Fees below) to the following IBAN:
The name on the account is S. Arnoldussen, please mention “JCM 2024” in the payment note, and your name if paying from an account that isn’t in your name.
Your registration is only valid if accompanied by a bank transfer. Payment at the registration desk is unfortunately not possible.
Fees
Standard fee: €25
Reduced student fee: €15
The registration fee includes admission to the conference and coffee/tea. Lunch and dinner are not included.
Timetable
- Registration deadline: 20 October
- Conference: 24-25 October
- 24th of October:
- 13.00 – 17.30: Presentations
- 18.00 – end: optional dinner (not included in registration)
- 25th of October:
- 09.30 – 16.00: Presentations
Programme
The full programme can be found below. Abstracts for the papers can be found in the last section on this page.
Thursday 24 October
12:30 – 12:50 Registration
12:50 – 13:00 Welcome, introduction and practicalities
13:00 – 15:00 3D Technologies
- Jitte Waagen – Virtual Past Places: VR for education, results of the SURF incentive scheme project at UvA
- Costas Papadopoulos & Alicia Walsh – Revisiting Scholarly Publication: Evaluation and Peer Review of 3D Scholarship
- Alexis Maldonado Ruiz – Digitisation and 3D Printing to improve accessibility of archaeological heritage for visually impaired people
- Lutz Schubert – Modelling the Maya Landscape
15:00 – 15:30 Coffee break
15:30 – 17:30 Open Science & Networks
- Ronald Visser – Developing the R package dendroNetwork for dendro-archaeology: lessons learned
- Bjørn Peare Bartholdy – RchaeoStats: Open, online teaching materials for learning statistics with R
- Dries Daems – Presenting NASSA: Network for Agent-Based Modelling of Socio-Ecological Systems in Archaeology
- Erik Kroon – Beyond Pots, Potters, and Peoples: Using Network Analysis, Probability Theory and Ceramic Chaînes Opératoires to Shed New Light on the Corded Ware Transition
18:00 Social dinner*
Friday 25 October
09:00 – 09:30 Registration
09:30 – 11:00 Remote Sensing & Predictive Modeling
- Agnes Schneider – From FAIR practices to FAIR principles in archaeological remote sensing: where are we?
- Simon Seyfried – Computing in Agricultural Remote Sensing for Aerial Archaeology
- Afke van Zijverden – Preserving Prehistory for the Future; Predictive modelling for protecting submerged and coastal Mesolithic sites
11:00 – 11:30 Coffee break
11:30 – 13:00 Statistics & Databases
- Roderick Geerts – From Pottery to Patterns. Correspondence Analysis of large Roman period datasets
- Eduardo Herrera Malatesta – A Framework for Quantifying Uncertainties in Spatial Analysis
- Mark R. Groenhuijzen – Constructing the limes site database: building a cross-boundary site database to study border systems
13:00 – 14:00 Lunch Break*
14:00 – 15:30 Artificial Intelligence
- Karsten Lambers – Multimodal AI for the Analysis of Historical Maps
- Lieven Verdonck – Interactive, machine-learning based semantic segmentation of geophysical data
- Alex Brandsen – OpenAI vs. Open Source – Evaluating the Performance of Large Language Models on Extracting Radiocarbon Dates from Archaeology Papers
15:30 – 16:00 Q&A, conclusions, and goodbye
Social Dinner
We will organise a social dinner at 18.00 on the first day; more detailed information will follow by email soon. Please note that the costs of the social dinner are not included in the registration fee. Each participant will be required to cover their own expenses.
Venue
The JCM will be held in the centre of Groningen, hosted by the University of Groningen. The address is Broerstraat 9, Groningen, Rooms A901 and A902.
Organising Committee
In alphabetical order:
- Stijn Arnoldussen, Groningen University
- Alex Brandsen, Leiden University
- Kai-Christian Bruhn, University of Applied Sciences Mainz
- Pir Hoebe, Groningen University
- Jürgen Landauer, Independent Researcher
- Manuela Ritondale, Groningen University
- Agnes Schneider, Leiden University
- Lutz Schubert, University of Ulm
- Devi Taelman, Vrije Universiteit Brussel / Ghent University
- Daniella Vos, Groningen University
- Ronald Visser, Saxion University of Applied Sciences
- Alicia Walsh, University of Amsterdam
Abstracts
Virtual Past Places: VR for education, results of the SURF incentive scheme project at UvA
Jitte Waagen (University of Amsterdam)
Virtual Reality as a learning environment has seen various experiments in the field of archaeological education. Whereas the advantage of allowing access to remote sites, reconstructed worlds, or inaccessible places appears obvious, there are also prospects of improved learning outcomes. These are expected to arise from increased spatial awareness, the stimulation of an active learning attitude, a more comprehensive structuring of learning materials, and similar factors. Although the often qualitative and small-scale evaluations are on average positive, more systematic evaluations are rare. In the context of the Virtual Past Places project at the University of Amsterdam, executed by the 4D Research Lab, a systematic evaluation programme has been developed and executed for tailor-made Virtual Reality environments for different courses in Humanities disciplines. In total, six environments were created for teaching in Archaeology, Ancient Studies and Art History, around which questionnaires, tests with control groups and focus group discussions have been organized. The aim has been to generate a solid empirical basis for such claims, and to advance good practice for embedding Virtual Reality as a learning environment in Humanities education, and in Archaeology specifically. In this presentation, the tentative outcomes of the Virtual Past Places project will be shared and discussed.
Revisiting Scholarly Publication: Evaluation and Peer Review of 3D Scholarship
Costas Papadopoulos, Alicia Walsh (Maastricht University)
Journals and publishers have been experimenting with conceptual and technical systems for the publication and peer review of multimodal interactive scholarship (e.g., Journal of Digital History, Melusina Press, Stanford Digital Projects, Manifold, Digital Scholar, Quarto). Such formats challenge the scholarly publication ecosystem in their FAIRness, recognition, and sustainability (Maryl et al. 2023). One such type of interactive scholarship is 3D visualisations, which have been widely used in the fields of heritage and archaeology, with several infrastructures available for improving accessibility to these models through online sharing. However, the challenges of publishing 3D outputs as autonomous works of scholarship that are recognised and rewarded similarly to more traditional paradigms have been insufficiently addressed. PURE3D (funded by Platform Digitale Infrastructuur – Social Sciences & Humanities) is an infrastructure designed to enable researchers to publish their 3D models alongside scholarly arguments in the form of 3D Scholarly Editions (Schreibman and Papadopoulos, 2019; Papadopoulos and Schreibman, 2019). Through the use of the Voyager platform, created by the Smithsonian, a space is provided for storytelling and meaningful associations with 3D visualisations, offering functionalities that surpass those of other 3D web viewers, as it contextualises research data with the 3D model at the center of the scholarly argument. The transparency and quality of this research are enhanced through an integrated peer-review process, critical for validating and refining these research outputs.
To consider the conceptual and technological complexities that the evaluation and publication of such scholarship brings, PURE3D – through the NWO Open Science grant for OPER3D – drew on a mixed-methods approach consisting of surveys, interviews, focus groups and use case analyses from those working with 3D-based research. This paper discusses the evaluation of the 3D Scholarly Editions by outlining best practices and a framework for 3D peer review.
References
Maryl, M., Błaszczyńska, M., Bonincontro, I., Immenhauser, B., Maróthy, S. Wandl-Vogt, E., van Zundert, J.J., & ALLEA Working Group E-Humanities (2023). Recognising Digital Scholarly Outputs in the Humanities – ALLEA Report. ALLEA | All European Academies. Berlin. DOI 10.26356/OUTPUTS-DH
Papadopoulos, C. and Schreibman, S (2019). Towards 3D Scholarly Editions: The Battle of Mount Street Bridge. Digital Humanities Quarterly, 13(1). http://www.digitalhumanities.org/dhq/vol/13/1/000415/000415.html
Schreibman, S., and Papadopoulos, C. (2019). Textuality in 3D: three-dimensional (re)constructions as digital scholarly editions. International Journal of Digital Humanities, 1, 221–233. https://doi.org/10.1007/s42803-019-00024-6
Digitisation and 3D Printing to improve accessibility of archaeological heritage for visually impaired people
Alexis Maldonado Ruiz (University of Santiago de Compostela / Leiden University)
Heritage education for the non-specialist public should be one of the main reasons why archaeological knowledge is generated from the academic framework. For this to happen, heritage needs to be made accessible to all audiences. However, despite the technological developments of recent years, accessibility is still an unresolved issue in the heritage sector. As a result, users with some form of functional diversity cannot fully access a cultural resource that should be universal.
Within this diverse group, visually impaired people are among the most affected. However, in a society dominated by an intangible digital framework, haptic perception remains a necessary tool. Although Typhlology and Typhlo-technology have been trying for years to remove or mitigate the main barriers or obstacles that blind and partially sighted people face, access is still limited.
In this sense, the incorporation and democratisation of digitisation technologies, 3D modelling and 3D printing represents a powerful revolution in bringing historical and archaeological heritage closer to all citizens. The right combination of these tools makes it possible to produce accurate replicas quickly and economically.
Modelling the Maya Landscape
Lutz Schubert (Universität zu Köln), Thomas Guderjan (University of Texas at Tyler), Daria Stefan (TU Wien)
Over recent years, our understanding of and view on the Mesoamerican archaeological landscape has changed substantially: from the original conception of a native jungle with sparse but vast cities, to a densely populated, highly managed landscape with cities and villages and major farmlands in between. The impact of this change can hardly be overstated, as it transforms the archaeological landscape significantly:
In a wild, uncontrolled landscape dominated by jungle overgrowth, monumental buildings would be hidden from the majority of the population. Sustenance would primarily rely on hunting and gathering for the majority of the population, with agricultural produce being reserved for the elite. Yucatán consists of microclimate areas, each better suited to different produce, from maize and squash to cacao, which means that no single polity can provide all necessary goods for its region, but instead has to rely on other regions.
Even if polities collaborated on some level, e.g. for managing farmland, they would still be separated by a wild landscape that would make any immediate control difficult. Instead, polities ruling over different areas would more likely have traded exotic goods and foodstuffs on regulated markets across regions. This necessitates a complex network of communication and transportation. Based on recent LIDAR scans in Belize, we investigate different means of prototyping the Maya landscape under different assumptions, from overgrown jungle to intensively maintained farmland.
In our approach, we use a mix of procedural and static generation of different building types on the basis of the LIDAR data. We examine different ways of identifying building types from LIDAR, using a mix of AI support and knowledge about how Maya space is organised. This allows us to generate quick visual impressions of how the Maya landscape could have been experienced, and thus how its experience may have influenced visual style, symbolism and organisation.
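As a loose illustration of structure detection in LiDAR-derived terrain (an assumption-laden sketch, not the project's actual pipeline; the terrain model, mound positions and threshold are all invented), candidate structures can be flagged by thresholding local relief above a smoothed trend surface:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical 100 x 100 m digital terrain model (metres):
# gently noisy flat ground with two artificial mounds added.
dem = rng.normal(0, 0.05, (100, 100))
yy, xx = np.mgrid[0:100, 0:100]
for cy, cx in [(30, 40), (70, 60)]:
    dem += 2.0 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 5.0 ** 2))

# Local relief: height above a smoothed trend surface. A simple box mean
# stands in for the trend; real workflows would use a proper DTM filter.
k = 15
pad = np.pad(dem, k, mode="edge")
smooth = np.zeros_like(dem)
for dy in range(-k, k + 1):
    for dx in range(-k, k + 1):
        smooth += pad[k + dy : k + dy + 100, k + dx : k + dx + 100]
smooth /= (2 * k + 1) ** 2

# Cells rising well above the local trend are candidate structures.
candidates = (dem - smooth) > 0.8
print(candidates.sum(), "cells flagged as possible structures")
```

The flagged cell clusters would then be handed to a classifier (or to the AI-supported typing the abstract mentions) rather than being treated as identifications in themselves.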
Developing the R package dendroNetwork for dendro-archaeology: lessons learned
Ronald Visser (Saxion University of Applied Sciences)
Open Science in Archaeology and beyond has been growing in recent years. More and more code related to publications has been made available (e.g. https://github.com/benmarwick/ctv-archaeology). I have also published papers (e.g. Visser 2021; Visser and Vorst 2022b) on network analyses of dendrochronological data from archaeological contexts in order to estimate the provenance of wood used in the Roman period. We found that the military timber supply was possibly different from the civilian supply, and that the wood for Roman barges was sometimes moved over long distances and combined with local wood. The code related to these papers was also published (e.g. Visser 2022; Visser and Vorst 2022a). Recently, I combined the code of those papers into a package for R (Visser 2024). Packages have many advantages, such as ease of use, enabling reproducibility and improving transparency. While creating this package (https://docs.ropensci.org/dendroNetwork/), I learned that I probably should have created a package in the first place (see also https://ropensci.org/blog/2024/06/06/from-scripts-to-package/). In this presentation, I will explain the lessons I learned in this process in relation to development, documentation, continuous integration and packaging, and how open peer review through rOpenSci has benefited the development of the package. This will hopefully enable others to contribute more easily to Open Science in archaeology.
References
Visser, RM. 2021 Dendrochronological Provenance Patterns. Network Analysis of Tree-Ring Material Reveals Spatial and Economic Relations of Roman Timber in the Continental North-Western Provinces. Journal of Computer Applications in Archaeology 4(1): 230–253. DOI: https://doi.org/10.5334/jcaa.79.
Visser, RM. 2022 Dendrochronological Provenance Patterns. Code and Data of Network Analysis of Tree-Ring Material. DOI: https://doi.org/10.5281/zenodo.10200361.
Visser, RM. 2024 dendroNetwork: a R-package to create dendrochronological networks. DOI: https://doi.org/10.5281/zenodo.10636310.
Visser, RM and Vorst, Y. 2022a Analyses, data and figures related to: ‘Connecting ships: using dendrochronological network analysis to determine the wood provenance of Roman-period river barges found in the Lower Rhine region and to visualise patterns of wood use’. DOI: https://doi.org/10.5281/zenodo.7243539.
Visser, RM and Vorst, Y. 2022b Connecting Ships: Using Dendrochronological Network Analysis to Determine the Wood Provenance of Roman-Period River Barges Found in the Lower Rhine Region and Visualise Wood Use Patterns. International Journal of Wood Culture 3(1–3): 123–151. DOI: https://doi.org/10.1163/27723194-bja10014.
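The basic move behind such dendrochronological networks can be sketched as follows. This toy (in Python/numpy for illustration, not the dendroNetwork R code itself, and with invented ring-width series) links tree-ring series whose growth patterns correlate above a chosen threshold; connected groups then hint at a shared provenance:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical tree-ring width series (rows = samples, columns = years).
# The first two share a common growth signal, the third is unrelated.
base = rng.normal(0, 1, 50)
series = np.array([
    base + rng.normal(0, 0.3, 50),   # same source region as 'base'
    base + rng.normal(0, 0.3, 50),
    rng.normal(0, 1, 50),            # unrelated wood
])

# Correlation between every pair of series.
corr = np.corrcoef(series)

# Keep an edge wherever similarity exceeds a (hypothetical) threshold.
threshold = 0.5
edges = [(i, j) for i in range(len(series))
         for j in range(i + 1, len(series))
         if corr[i, j] > threshold]
print(edges)
```

In practice dendrochronologists use specialised similarity measures (e.g. correlations on detrended, log-transformed series) rather than a raw Pearson correlation, and community-detection algorithms to find the provenance groups.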
RchaeoStats: Open, online teaching materials for learning statistics with R
Bjørn Peare Bartholdy (Leiden University)
In this talk I introduce RchaeoStats, openly available teaching materials on statistics and the R programming language, primarily aimed at archaeologists with little to no background in one or both of these topics. The materials address a wide variety of topics to make the R programming language more accessible to archaeologists, starting from the basics of R, to more advanced topics such as data visualisation and communicating results with Quarto. The materials are designed to be modular, allowing workshops to be customised both in terms of duration and content. The core module will provide an introduction to R and a recommended workflow and organisation of an R project in RStudio. Additional modules can be added to the core module to address the needs of researchers following the materials, whether self-paced or attending a workshop.
The primary goal is to make researchers more comfortable performing data analysis in R, rather than teaching them to become programmers. Other outcomes include increased computer literacy, promoting the adoption of open source alternatives to proprietary software, and increasing the reproducibility of archaeological research: both directly with a reproducible research module, and indirectly by encouraging the use of a scripting language. Contributions will be welcomed from the community, especially for developing archaeology-specific modules such as radiocarbon dating, dendrochronology, skeletal sex and age-at-death estimation, and more. The materials will be made available with a permissive license to maximise reuse and modification.
Presenting NASSA: Network for Agent-Based Modelling of Socio-Ecological Systems in Archaeology
Dries Daems (Vrije Universiteit Amsterdam)
Archaeologists are increasingly relying on computer simulations to reconstruct and understand past societies. They are successfully building and running simulations of agrarian production, trade, settlement development and movement, to name a few. The current state of the field, however, is still characterized by idiosyncratic efforts and limited communication and integration throughout the community, hampering the ability of modelers to cumulatively build on each other’s work. This is predominantly due to the lack of appropriate tools and platforms enabling closer integration.
To remedy this situation, the NASSA project (Network for Agent-based modelling of Socio-ecological Systems in Archaeology; https://archaeology-abm.github.io/NASA/) is developing an open library of model algorithms and code for modelling of socio-ecological systems in archaeology. It aims to redefine current practices in collaboration and synergy in modelling communities by developing an openly available and functional models library, offering a host of elements (modules, techniques, algorithms, how-to’s/wikis etc.) as modular building blocks for elaborate and case-driven models and research questions.
In this talk, we will present the first results of the project towards developing the necessary infrastructure and standards for the models library and our efforts towards creating a community of users within the field of archaeological modelling. Through our work, we aim to make simulation modelling accessible to a wider community of archaeologists.
Beyond Pots, Potters, and Peoples: Using Network Analysis, Probability Theory and Ceramic Chaînes Opératoires to Shed New Light on the Corded Ware Transition
Erik Kroon (Leiden University)
5,000 years ago, a migration fundamentally changed Europe. Migrating communities spread across Europe within two centuries, leaving lasting changes in connectivity, language, and genetics. Yet these migrating communities did not enter an empty continent. Across Europe, they encountered indigenous communities with millennia-old roots. What interactions between migrating and indigenous communities resulted in the changes seen in the archaeological record?
This paper sheds new light on this question with an innovative, computational approach to ceramics. Modern migration studies show learning and knowledge exchange are pivotal to the integration (or lack thereof) of migrants in host societies. Ceramics are ideal to study such learning processes in prehistory, because they preserve well in the archaeological record and bear traces of the production techniques which potters learned and applied to fashion them. The method outlined here combines the chaîne opératoire approach, network analysis, and probability theory to quantitatively assess the amount of shared knowledge between potters from these traces.
This approach is applied to ceramic assemblages from migrating Corded Ware and indigenous Funnel Beaker West communities in the Netherlands. The results demonstrate migrating communities repeatedly learned from the indigenous communities they encountered across Europe and applied this knowledge to produce Corded Ware vessels. This finding is a crucial complement to archaeogenetic studies of this period and underlines the importance of looking at learning for understanding migrations during the Stone Age, and beyond.
From FAIR practices to FAIR principles in archaeological remote sensing: where are we?
Agnes Schneider (Leiden University)
Large-scale data collection is an established practice in Archaeological Remote Sensing. Air- and space-borne sensors deliver an ever-growing proportion of data sets, and in the last decade geophysical prospection has started to collect data on a large scale as well. Evaluating the increasing amount and variety of sources requires analytical methods to handle this data deluge, such as (semi-)automated analysis methods. With regard to air- and space-borne imagery, well-developed workflows and established approaches already exist. At the same time, geophysical data sets lack these workflows because of the different character and complexity of these data sets.
The nearly two decades in which semi-automated methods have been applied to large-scale archaeological remote sensing data show a constant development of methods, loosely following developments in Remote Sensing and Computer Science. During this time, a wide variety of solutions have been developed for different questions, with recurring patterns, but rarely with published data, code or workflows (the latter often only in a very generalised way) that would facilitate transferability or even reproducibility.
FAIR principles were introduced in 2014, and recently research data management has also become an increasingly central focus of scientific work. Still, there is a lack of broadly used research standards on different levels, which leads to a lack of transparency, transferability and best practices. With initiatives on different levels, this landscape is starting to change and evolve. Together, we have to create common community standards and best practices that lead to FAIR and Open Data. On top of that, reproducible research practices and sustainability issues have to be discussed and addressed.
The following presentation proposes a set of desiderata which would facilitate FAIR and sustainable work with archaeo-geophysical data.
Computing in Agricultural Remote Sensing for Aerial Archaeology
Simon Seyfried (Independent researcher)
Agricultural remote sensing provides unique opportunities for crop mark-based aerial archaeology. This technology enhances irrigation, fertilization, and harvesting by early detection of stress factors like drought and disease, allowing for timely interventions. It supports sustainable land use and improves efficiency through precise mapping and monitoring. Various data and evaluation methods significantly aid aerial archaeology by analyzing plant growth, stress, and soil conditions. This contribution emphasizes the statistical presentation and contextualization of remote sensing data collected by Earth observation satellites alongside traditional aerial imagery of archaeological sites with crop marks. This approach deepens our understanding of crop marks in relation to plant species, soil type, precipitation, climate, and region. Additionally, this method improves the planning of aerial surveys, enhancing cost and time efficiency. Unlike previous studies that linked crop mark occurrences to soil conditions, this method allows for empirical observations without needing to determine exact causes. Choosing the right time for aerial surveys can lead to discovering previously unknown sites in areas where conditions are typically unfavorable for crop marks. Integrating agricultural remote sensing into aerial archaeology not only enhances the identification and documentation of archaeological sites but also aids in the conservation and protection of cultural heritage within active agricultural landscapes. This intersection of technology and heritage holds significant potential for advancing archaeological research while promoting sustainable practices in agriculture.
Preserving Prehistory for the Future; Predictive modelling for protecting submerged and coastal Mesolithic sites
Afke van Zijverden (APPARATUS Europe)
Over the past few years, more and more flint artefacts have surfaced on the beaches of northwestern Europe. The reason for this phenomenon is erosion. Many prehistoric sites are disappearing due to erosion, and choosing which sites should be prioritised for rescue excavation is difficult. This presentation will outline a study applying the theory of predictive modelling to submerged and coastal prehistoric sites. These models will combine recent research from marine biology and marine geology on, for example, changing currents and the chemical composition of sea water. A second important factor is human activity; therefore, archaeological research on the effects of present-day human activity on coastal areas and its impact on archaeological sites will also be used. Combined, both fields will make it possible to predict future erosion of prehistoric sites. To test this predictive model, a number of case studies will be selected, consisting of submerged and coastal Mesolithic sites in different types of environment. These case studies were chosen because they comprise sites with ample data from continued monitoring, collected in a systematic way. This will lead to a database that can predict how specific sites will change over the coming years. The final aim of this study is to create a database that can support the protection and better monitoring of prehistoric maritime heritage.
From Pottery to Patterns. Correspondence Analysis of large Roman period datasets
Roderick Geerts (ADC ArcheoProjecten / Leiden University)
Development-led archaeology in the Netherlands has yielded a lot of archaeological data in a relatively short time period. That data is normally published at site level, as is compulsory and dictated by site-specific research questions, with some synthesis and regional comparison. However, the dataset is extensive enough to warrant a different, broader approach. In my PhD research I have analysed the ceramic data from 30 Roman-period settlements in order to get a first understanding of the changes taking place during the Roman period. Several types of analysis were used to compare the data and arrive at usable results. On a practical level this posed several challenges. Firstly, not all (legacy) data is directly suitable for comparative research; some data needed to be reformatted in order for the databases to be comparable. The next step included more reformatting and filtering: for the Correspondence Analysis (CA) employed, not all data can be used as is, and databases need to be filtered and formatted for the CA. Furthermore, not all sites yielded the same quantities of material, which caused some discrepancies in the dataset to bear in mind. For this big dataset, CA was used to find patterns in the site assemblages on multiple levels. Each of these levels (site level, chronological period and context) has its own merits, which is why all three were used. This way both broad patterns and detailed information can be seen. The results and advantages of this analysis will be highlighted in the presentation. This method is already being employed on other ceramic datasets within the Roman Empire, yielding interesting new results.
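The CA computation itself can be sketched with the standard SVD-based formulation; the contingency table below (sites by ceramic categories) is invented for illustration and is not drawn from the thesis data:

```python
import numpy as np

# Hypothetical contingency table: rows = sites, columns = ceramic categories.
counts = np.array([
    [30, 10,  5],
    [10, 25, 15],
    [ 5, 15, 30],
], dtype=float)

P = counts / counts.sum()              # correspondence matrix
r = P.sum(axis=1)                      # row masses
c = P.sum(axis=0)                      # column masses

# Standardised residuals from independence; their SVD gives the CA axes.
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, sigma, Vt = np.linalg.svd(S, full_matrices=False)

# Principal coordinates of the rows (sites) on the first two axes.
row_coords = (U * sigma) / np.sqrt(r)[:, None]
print(row_coords[:, :2])
```

Sites plotted close together on the first axes have similar ceramic profiles, which is what allows patterning across sites, periods and contexts to be read off the biplot.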
A Framework for Quantifying Uncertainties in Spatial Analysis
Eduardo Herrera Malatesta (Leiden University)
Landscape research in archaeology has greatly benefited from the increasing application of computational methods over the last decades. Spatial statistical methods such as point patterns have been particularly revolutionary. Archaeologists have used point pattern analysis to explore spatial arrangements and relations between ‘points’ (e.g., the locations of artifacts or archaeological sites).
However, the results obtained from these techniques can be greatly affected by the uncertainty coming from the fragmentary nature of archaeological data, their irregular distribution in the landscape, and the working methods used to study them. The quantification of uncertainty in spatial data coming from non-systematic surveys has never been fully addressed, and neither have the challenges of applying spatial statistical methods to study databases with partial evidence. To overcome this challenge, archaeologists have increasingly relied on applying advanced methods from statistics, data science and geography. This comes with the implicit idea that advanced methods from formal sciences will provide robustness to past models. As with uncertainty, robustness must be assessed in relation to the case study, the regional context, and the methods used. These issues are essential when the models from advanced methods are directly used to create narratives about past landscapes. While there is a growing trend of researchers working on improving data and computational models, the quantification of uncertainty is still a challenging field.
This paper presents a framework for uncertainty quantification in archaeological point process modelling. This framework formalises existing best practices for assessing robustness and uncertainty in spatial statistical models, mainly focusing on one commonly used in the discipline, the Pair Correlation Function. This framework allows us to better understand how incomplete data affect a model, quantify the model uncertainties, and assess the robustness of the results achieved with spatial point processes.
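As a hypothetical illustration only (not the author's implementation), a bare-bones estimate of the pair correlation function for a simulated point pattern in a unit square could look like the following; real analyses would use edge-corrected kernel estimators such as those in spatstat:

```python
import numpy as np

rng = np.random.default_rng(42)
pts = rng.uniform(0, 1, size=(200, 2))   # simulated random pattern, unit square

def pcf_naive(points, r_vals, h=0.02, area=1.0):
    """Naive pair correlation function estimate with a box kernel of
    half-width h; edge correction is deliberately omitted for brevity."""
    n = len(points)
    lam = n / area                           # estimated intensity
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    d = d[np.triu_indices(n, k=1)]           # each unordered pair once
    g = []
    for r in r_vals:
        k = np.sum(np.abs(d - r) <= h)       # pairs with distance in [r-h, r+h]
        # factor 2 restores ordered pairs; denominator is lambda^2 * area
        # times the ring measure 2*pi*r*2h
        g.append(2 * k / (n * lam * 2 * np.pi * r * 2 * h))
    return np.asarray(g)

r_vals = np.linspace(0.05, 0.2, 4)
g_est = pcf_naive(pts, r_vals)
print(g_est)   # values near 1 are consistent with complete spatial randomness
```

Quantifying the uncertainty of such an estimate under incomplete, non-systematic survey data is exactly the kind of problem the proposed framework addresses.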
Constructing the limes site database: building a cross-boundary site database to study border systems
Mark R. Groenhuijzen (Utrecht University)
The “Constructing the limes”-project is interested in how the Roman border influenced the lives of people along both sides of it; how the frontier zone shaped interaction, trade and exchange, migration, and generally formed a cultural contact area. Taking a perspective across the boundaries of the modern nation states, our research area includes at least the entirety of the Netherlands and Flanders and extends into the neighbouring German states as well. In order to approach archaeological questions on this scale, there is a need for an extensive and reliable archaeological site dataset.
Creating archaeological site inventories in the modern age of archaeology does not stop at plotting points on a map. The site database needs to adhere to FAIR-principles: among other things, the data should be transparent, updatable, easy to query, and associated with their provenance. Our research questions further require our data to be analysable on the find level, taking into account uncertainties surrounding the find and site chronology as well as the uncertainties regarding their interpretation.
Building upon earlier research such as the “Finding the limits of the limes”-project, we are constructing a site database out of the existing wealth of find data in archives such as Archis and PAN in the Netherlands and the CAI in Vlaanderen. This paper will present the methodology used to construct this site database, the current state of the data, its current and future applications within the “Constructing the limes”-project and beyond.
Multimodal AI for the Analysis of Historical Maps
Karsten Lambers, Alex Brandsen, Wouter Verschoof-van der Vaart, Sietze van As, Leila Darabi (Leiden University)
Historical maps are a link between past, present, and future. In 19th century Europe, large-scale efforts led to the first comprehensive and accurate cartographic coverage of the landscape. Using standardised graphical and textual elements, these maps depict pre-industrial landscapes that contain many archaeological/historical elements of land-use, water/resource management, production, etc. not available from other sources. The use of these maps for heritage research and management is often limited since efficient methods for the systematic large-scale analysis of map contents are still lacking despite promising recent efforts.
In this pilot, we are developing automated multimodal (text and image) methods for the semantic analysis of historical maps, which can lead to a variety of applications. As a first step, we have generated a labelled dataset of map content large enough to train a Deep Learning algorithm and to evaluate its performance.
The dataset consists of map sheets from the Topografische Militaire Kaart of 1850 (scale 1:50,000). There are 62 sheets that cover almost the entire Netherlands, each of which is further subdivided into four submaps. The two submaps used here are sheets 32-3 and 39-2, both located in the provinces of Utrecht and Gelderland. The total area covered by the dataset is about 500 km².
The main outcome of the pilot is a labelled dataset marking recurring object classes relevant for landscape reconstruction on historical maps from the Netherlands (e.g., woodlands and toponyms). This dataset serves as input for training a multimodal classifier that is able to automatically detect further instances of those classes in comparable maps based on a combination of textual and graphical elements. In this paper we present the dataset and discuss the outcome of the trained multimodal classifier.
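A labelled patch dataset of the kind described might be assembled by tiling rasterised map sheets and attaching a class label per tile. The sketch below is an assumption about one plausible workflow, not the authors' pipeline; tile size, label scheme, and the majority-fraction threshold are all invented for illustration.

```python
import numpy as np

def tile_with_labels(sheet, mask, tile=64, threshold=0.5):
    """Cut a rasterised map sheet into square tiles and label each tile
    from a per-pixel class mask (e.g. 1 = woodland, 0 = background).

    A tile is labelled positive when more than `threshold` of its mask
    pixels belong to the class. Illustrative sketch only."""
    h, w = sheet.shape[:2]
    patches, labels = [], []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            patches.append(sheet[y:y + tile, x:x + tile])
            frac = mask[y:y + tile, x:x + tile].mean()
            labels.append(int(frac > threshold))
    return np.stack(patches), np.array(labels)
```

A multimodal variant would pair each image patch with any text (toponyms) recognised inside it, so the classifier sees both signals.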
Interactive, machine-learning based semantic segmentation of geophysical data
Lieven Verdonck (University of Cambridge)
Technological advancements in geophysical measurement lead to growing data volumes, which are processed and visualised by means of powerful computers and software, but interpretation remains a bottleneck. When performed manually, delineation and classification are time-consuming. Methods for (semi-)automatic analysis have been proposed, but have not yet been widely used in archaeological geophysics. They can roughly be divided into ‘hand-engineered’ algorithms, where the user formulates the rules, and machine-learning-based techniques, where an algorithm is trained so that it learns to analyse new data without human intervention. Deep convolutional neural networks show promising results, although to unlock their full potential in archaeological geophysics they rely on large quantities of annotated data, which are not currently available.
In recent years, powerful tools for medical image segmentation have been developed that can partially overcome the limitations of existing methods. They are based on shallow machine learning, need few user annotations, and do not require extensive programming expertise.
Such algorithms calculate predefined features of the input data (geophysical images). The user provides training data by drawing brush strokes on pixels representing at least two classes (archaeological objects and background). The classifier predicts the probability that a pixel belongs to a certain class, almost in real time. The user iteratively improves the classification by adding more labels. The probability maps are converted into a segmentation by applying a threshold. By merging segmented pixels, objects are created, which can be classified as archaeological structures (e.g. pits, walls, ditches) by training another classifier. Incorrect classifications need to be corrected manually, although the time savings are considerable compared to a completely manual interpretation, particularly for 3D data. We show the potential and limitations of these techniques by applying them to several archaeological datasets.
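The features-plus-shallow-classifier loop described above can be sketched with a deliberately minimal stand-in: two per-pixel features and a nearest-centroid model with a soft probability. Real tools use much richer feature banks and learners such as random forests; everything below is an invented toy, not the abstract's actual method.

```python
import numpy as np

def segment(image, labels, threshold=0.5):
    """Classify every pixel of a 2D image from sparse brush-stroke
    labels (labels: 1 = object stroke, 2 = background stroke, 0 = none).

    Features: raw intensity plus a 3x3 local mean. Classifier: soft
    nearest-centroid. Returns (probability map, boolean segmentation).
    Deliberately minimal sketch of the interactive workflow."""
    h, w = image.shape
    padded = np.pad(image, 1, mode="edge")
    # 3x3 local mean as a crude second feature
    local = sum(padded[dy:dy + h, dx:dx + w]
                for dy in range(3) for dx in range(3)) / 9.0
    feats = np.stack([image, local], axis=-1)
    c_obj = feats[labels == 1].mean(axis=0)  # centroid of object strokes
    c_bg = feats[labels == 2].mean(axis=0)   # centroid of background strokes
    d_obj = np.linalg.norm(feats - c_obj, axis=-1)
    d_bg = np.linalg.norm(feats - c_bg, axis=-1)
    prob_obj = d_bg / (d_obj + d_bg + 1e-9)  # closer to object => higher
    return prob_obj, prob_obj > threshold
```

Adding more strokes and re-running is the interactive refinement step; thresholding the probability map is the segmentation step the abstract describes.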
OpenAI vs. Open Source – Evaluating the Performance of Large Language Models on Extracting Radiocarbon Dates from Archaeology Papers
Alex Brandsen, Quentin Bourgeois (Leiden University)
The application of generative Artificial Intelligence (genAI) in archaeology has the potential to automate the extraction and analysis of data from academic texts. This study investigates the performance of ChatGPT compared to open-source large language models (LLMs) in extracting radiocarbon dates from scholarly papers focused on the Neolithic period in the Netherlands. Radiocarbon dating is crucial for establishing chronologies in archaeology, but manually extracting and verifying these dates from a multitude of research papers is both time-consuming and prone to error.
Our research involved evaluating two types of genAI models: OpenAI’s ChatGPT and a suite of open-source LLMs from the Hugging Face ecosystem. The evaluation was based on their ability to accurately identify and extract radiocarbon dates from a curated dataset of peer-reviewed archaeological papers. We defined performance metrics focusing on accuracy, precision, recall, and F1-score to assess the models’ performance in extracting relevant dates.
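The precision/recall/F1 metrics mentioned can be computed set-wise per paper, comparing a model's extracted dates against a manually verified gold set. A minimal sketch (the date strings below are invented examples, not data from the study):

```python
def prf1(extracted, gold):
    """Precision, recall and F1 for a set of extracted radiocarbon
    dates against a manually verified gold-standard set."""
    extracted, gold = set(extracted), set(gold)
    tp = len(extracted & gold)  # true positives: correctly extracted
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Invented example: uncalibrated BP dates a model pulled from one paper
p, r, f = prf1({"4520±35", "4410±40", "3990±30"},
               {"4520±35", "4410±40", "5110±45"})
```

Exact string matching is the simplest choice; in practice one would normalise notation (e.g. "±" vs "+/-") before comparing.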
Call for Papers (now closed!)
With the ever-increasing ubiquity of digital tools, practices, and data-science applications in archaeology, the organising committee expects a prolific event that critically focuses on the theory and practice of digital and quantitative methods in archaeology. This event will provide a platform for scholars and professionals to exchange innovative ideas on cutting-edge technology and analytical methods related to topics such as:
- Digital documentation / techniques
- 3D registration, modelling and reconstruction
- Drone applications
- Digital approaches to maritime and underwater archaeology
- Data Management, Analysis and Visualisation
- Database design and management for archaeological and heritage data
- Big data in archaeology
- Open science and reproducibility
- GIS and spatial data visualisation
- Online data collections and publication
- Mobile GIS and data recording strategies
- AI, machine learning and statistical methods
- Feature/pattern recognition and classification
- Predictive modelling
- Natural language processing
- Digital dissemination
- Archaeology and gaming
- Teaching (digital) archaeology
- Modelling and simulating archaeological phenomena
- Agent-based modelling (ABM)
- Archaeogaming
- Digital ethics and inclusion
- Little minions (short, 10-minute talks on little digital helpers)
- Semantic web and linked open data
- Museums and digital archaeology
- Quantitative approaches to archaeology
Abstract guidelines
We welcome proposals for 20-minute papers on any of the above topics, written in English. Proposals related to digital archaeology on a topic not listed above will also be considered.
Abstracts should be submitted at: https://forms.gle/1Ex77r5EeMRnP6gq5
Abstracts will be reviewed by the organising committee. Abstracts should include name and surname, university, institute or company (if applicable), e-mail, title, and abstract text (max. 300 words).
Deadline for abstract submission is the 15th of September.