ALL WORKSHOPS ARE ALREADY BOOKED OUT!
W4 Hands-on QGIS workshop (CANCELLED)
Especially for early-career researchers, presenting a paper at an international conference like CAA can be very challenging. Effectively communicating a piece of research in an oral presentation, even to interested peers, is never easy. Personal style, selection of subject matter, choice of presentation material, language barrier and differences in academic culture can all stand in the way of getting the message across. And unfortunately, constructive feedback is often very difficult to obtain.
In this workshop, the CAA SC wants to offer presenters at CAA the opportunity to practice their paper and receive targeted feedback on their presentation in a 30-minute session. Participants will first present their papers to each other and to a small group of experienced presenters. The latter will then give feedback on the quality of the presentation and, where necessary, provide guidelines for improvement, in a supportive atmosphere.
Participation is limited to 16 persons; preference will be given to first-time presenters and young researchers.
Philip Verhagen, Steve Stead
Survey2GIS is a light-weight FOSS tool for field documentation and surveying which functions both as a plugin within gvSIG-CE and as a standalone program, processing surveying data for any preferred desktop GIS. It has been under development since 2011/2012 and is available in version 1.4.2 at the time of writing (http://survey-tools.org/index.php/download). Survey2GIS is a fully developed, compact and flexible solution for handling topographic survey data. It processes 2D or 3D point measurements into geometrical objects, including multipart features and polygons with or without holes. Input data consist of one or more survey data files with coded coordinates. The output generated by Survey2GIS is ideal for direct use in GIS. The process can be fully steered by the user, allowing flexible adaptation to individual survey workflows and data structures. Input and output formats can be adapted to fit the requirements and constraints of virtually any project. During its development, high priority has been given to the generation of topologically correct, fully attributed output suitable for analysis in GIS. Survey2GIS is highly customizable and includes a number of features designed to boost productivity in the field. Starting from simple input of survey files and production of basic geodata in the Survey2GIS GUI, this workshop aims to take participants through the use of filters, switches and other utilities of Survey2GIS to control the output. The final module will deal with the use of Survey2GIS as an iterative command-line application to produce fully structured geodata for any project. We will also discuss how the structure of the input data, in combination with the configuration of Survey2GIS, can be used to tailor the system to individual projects.
David Bibby
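To give a flavour of the kind of processing described above (coded point measurements grouped into geometrical objects), here is a minimal sketch in Python. The input format, codes and classification rule are invented for this illustration and do not match Survey2GIS's actual parser syntax or configuration options.

```python
# Toy illustration of turning coded 3D measurements into simple features.
# The "code x y z" line format below is an assumption for the example,
# NOT the real Survey2GIS input syntax.

def parse_survey(lines):
    """Group coded 3D measurements into simple features."""
    features = {}
    for line in lines:
        code, x, y, z = line.split()
        features.setdefault(code, []).append((float(x), float(y), float(z)))
    result = []
    for code, pts in features.items():
        if len(pts) == 1:
            geom = "Point"
        elif pts[0] == pts[-1] and len(pts) >= 4:
            geom = "Polygon"   # closed ring: first vertex repeated at the end
        else:
            geom = "LineString"
        result.append({"code": code, "type": geom, "vertices": pts})
    return result

raw = [
    "FIND1 10.0 20.0 1.5",
    "WALL1 0.0 0.0 0.0",
    "WALL1 5.0 0.0 0.0",
    "WALL1 5.0 2.0 0.0",
    "WALL1 0.0 0.0 0.0",
]
for f in parse_survey(raw):
    print(f["code"], f["type"], len(f["vertices"]))
```

The real tool handles far more (multipart features, holes, attributes, topology checks); this only shows the basic idea of code-driven geometry building.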
Say goodbye to the evil ways of SPSS and Excel! Learn how to clean, analyse and plot your data using the fast, reliable and flexible environment of Python!
Analysing vast collections of data is archaeological bread and butter, but all too often this part of one’s research turns into a repetitive and tedious task. Although point-and-click data analysis packages such as Excel or SPSS are widely used, they are not open source, they make it difficult to automate common tasks, they leave few trails of the decisions taken during data manipulation, and surprisingly often they lack the key functionality that is needed for a particular research question. As a result, the reliability, efficiency, and reproducibility of research done using these tools is far from perfect.
Analysing your data using scripting languages such as Python (or R) deals with these problems and gives the researcher a much more flexible and reliable tool to tackle their particular research questions. Once a script is developed it can be used to clean, analyse and visualise any dataset removing the need to repeat the same sequence of tasks every time a new piece of data is changed or added. It also represents a lasting, detailed and reliable record of what operations have been performed on the data.
If you need to analyse the data you have been painstakingly collecting for years but you do not know how to start; if you do not want to spend a week copy-pasting data into an Excel spreadsheet ever again; if your graphs never look as crisp as you would like them to be; or if you are worried that a reviewer may ask how you cleaned your data and you will not be able to answer – this crash course on data analysis in Python is for you!
This course is designed by and for archaeologists so no previous experience of Python or any other scripting language is required. Equally, all examples we will be working with have been developed using real archaeological data and with the most common questions we ask in mind. We will cover:
- importing and exporting data,
- data manipulation (switching between long and wide data, subsetting data),
- cleaning datasets (checking for errors, dealing with missing values),
- running common statistical tests (plus a few pointers to more complex ones),
- creating publication-ready graphs (including some of archaeologists’ favourites such as artefact distributions), and
- automating common tasks.
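As a small taste of the scripted workflow the list above describes, here is a minimal data-cleaning sketch. It uses only the Python standard library so it runs anywhere; the workshop itself builds on the Anaconda scientific stack, and the dataset and column names below are invented for the example.

```python
# Minimal scripted data cleaning: coerce a messy column to numbers,
# count missing values and compute a summary statistic. The CSV data
# and column names are invented for illustration.
import csv, io, statistics

raw = """site,sherd_count
A,12
B,
C,7
D,not recorded
E,25
"""

rows = list(csv.DictReader(io.StringIO(raw)))

def to_count(value):
    """Parse a count, treating blanks and free-text notes as missing."""
    try:
        return int(value)
    except ValueError:
        return None

counts = [to_count(r["sherd_count"]) for r in rows]
valid = [c for c in counts if c is not None]

print("missing:", counts.count(None))   # blanks and notes are flagged, not silently dropped
print("mean count:", statistics.mean(valid))
```

Unlike manual spreadsheet edits, every cleaning decision (here: what counts as "missing") is recorded in the script and can be rerun when new data arrive.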
Please bring your own laptop and download and install the Anaconda distribution of Python 3.2 beforehand: https://www.continuum.io/.
This will be a hands-on introduction to Open Source GIS, specifically focusing on QGIS but also touching on other, related packages, including GRASS. The workshop can be either half or full day, and I will assist participants in loading the QGIS software (which runs on Linux, Windows and Macs). I will provide several tutorials and databases, which will allow the participants to continue the learning process on their own after the workshop. I recommend that people bring their own laptops, so that they can take it all home with them to continue to learn and use. The workshop will proceed in this order: introductions; loading the GIS, tutorials and data; a brief PowerPoint presentation on the benefits of Open Source geospatial tools for archaeologists, including several archaeological examples; and finally the hands-on exercises, covering vector, raster, database, GPS, web, and archaeology plugins.
I will require an instructional room with tables and chairs, power strips, a projector so that participants can see the .ppt and my screen to follow along, and internet access for all participants. A computer room with desktop machines for participants who do not have laptops would be welcome, but I need to be able to load QGIS on these computers BEFORE the workshop. I can handle up to 20-25 people; the more the better.
Scott Madry
The workshop is split into two parts:
- Hands-on experience with different close range 3D scanning methods
- Analysis and comparison of data using the GigaMesh software
For data acquisition, participants will be introduced to different scanning methods ranging from budget level (e.g. DAVID scanner, Agisoft) to industrial level (e.g. AICON scanners). Participants will work hands-on with each of the techniques and are invited to provide their own objects for scanning. Results of the methods will be compared and discussed based on real data, and we will highlight the advantages and disadvantages of all methods presented.
Data analysis will take place with the GigaMesh software. GigaMesh is open source software freely available at gigamesh.eu. The software allows basic mesh and point cloud processing as well as some special features for archaeological research, such as roll-ups or cuneiform tablet analysis.
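One basic way of comparing the scans produced by different devices is a cloud-to-cloud distance: for every point in one scan, how far is the nearest point in the other? The brute-force pure-Python sketch below, on invented toy coordinates, only illustrates that idea; dedicated tools such as GigaMesh work on full meshes with far more efficient data structures.

```python
# Sketch of a cloud-to-cloud comparison between two scans of the same
# object: for every point in scan A, find the distance to the nearest
# point in scan B. The coordinates are toy values for illustration.
import math

def nearest_distances(cloud_a, cloud_b):
    """Brute-force nearest-neighbour distances from cloud_a to cloud_b."""
    dists = []
    for p in cloud_a:
        best = min(math.dist(p, q) for q in cloud_b)
        dists.append(best)
    return dists

scan_a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
scan_b = [(0.0, 0.1, 0.0), (1.0, 0.0, 0.0), (2.0, -0.2, 0.0)]

d = nearest_distances(scan_a, scan_b)
print("max deviation:", max(d))   # largest local gap between the two scans
```

Summaries of these distances (maximum, mean, histogram) are what lets you say quantitatively how much a budget scanner deviates from an industrial one.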
Hubert Mara, Paul Bayer, Dirk Rieke-Zapp
Multispectral remote sensing data provide useful information about outcropping lithology and surface sediments, such as mineral composition and physical and chemical properties. These surface features are of interest for various geoarchaeological research questions, as they can correlate with archaeological sites and indicate geomorphological processes, paleo land-use, paleo lake extents, etc.
The workshop will consist of a theoretical introduction and a hands-on part with software tools. We will perform the first steps from image selection, pre-processing and visualization to simple analysis. One focus lies on the visual enhancement of the image and the discrimination of surface material using (false-colour) band combinations and Principal Component Analysis (PCA). The other focus lies on the application of a set of useful band ratios, which help to distinguish between mineral compositions.
To promote the FOSS (Free and Open Source Software) approach, all exercises of this workshop will be done using QGIS and the remote sensing plugin SCP (Semi-automatic Classification Plugin). The ASTER data, which we will use, are also free of charge for scientific applications.
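The band ratios mentioned above are conceptually simple: one band is divided by another, pixel by pixel, so that pixels where one material's spectral response dominates stand out. A tiny sketch with invented 2x2 "bands" (in the workshop this is done in QGIS/SCP on real ASTER bands, not by hand):

```python
# Pixel-wise band ratio, the core operation behind ratio images.
# The two 2x2 "bands" below are invented toy reflectance values.
band_a = [[0.42, 0.30], [0.55, 0.48]]
band_b = [[0.21, 0.30], [0.11, 0.24]]

ratio = [
    [a / b if b != 0 else 0.0 for a, b in zip(row_a, row_b)]
    for row_a, row_b in zip(band_a, band_b)
]
print(ratio)   # high values flag pixels where band A dominates band B
```

Because a ratio cancels out overall brightness differences (illumination, shadow), it isolates the spectral contrast between the two bands, which is exactly what makes it useful for discriminating mineral compositions.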
Felix Bachofer, Christian Sommer
W7 Linking Data from Archaeology, the Humanities, and Ecology: Testing Tools to Encourage Data-Driven Interdisciplinary Research
dataARC (www.data-arc.org) links information from the social sciences, natural sciences, and humanities in a data exploration tool explicitly designed for interdisciplinary, synthesis research. The aim is to facilitate and encourage research on long-term human ecodynamics in the North Atlantic that draws on data from multiple, normally disconnected, specialist sources. Project researchers are pursuing questions like how legal and social systems develop to manage resources in an environmentally fragmented landscape, and how we can see the effects of global changes in climate in local patterns of consumption and economic activities. By helping researchers find and contextualize specialist data from outside their own expertise, but related to their research, the dataARC tool will enable researchers to address questions like these more robustly.
The data discovery tool is currently in prototype and integrates data from archaeological survey, zooarchaeology, the Icelandic Sagas, historic land registers, and paleoenvironmental records to address broad-scale, multidisciplinary questions. Additional outreach tools and stories will be created throughout the project to engage the general public. This workshop will engage CAA professionals to gather feedback on dataARC progress and tool development. It will provide an overview of dataARC objectives and progress, including the introduction of the prototype data exploration tool for feedback from the wider environmental archaeology community. Attendees will learn how dataARC is approaching the linking of interdisciplinary data from multiple different approaches, and will be able to test the tool before public release.
Colleen Strawhacker, Rachel Opitz
GRAVITATE (Geometric Reconstruction and Novel Semantic Reunification of Cultural Heritage Objects) is a European funded project (grant: 665155) aimed at the world of Cultural Heritage and related sciences. It proposes an innovative approach to the study of heritage artifacts, which includes virtual restoration, classification and attribute analysis. GRAVITATE offers a set of software tools that allows archaeologists, curators and conservators to identify and re-unify cultural objects that were separated across collections, to re-associate cultural artefacts that have common features (e.g. same school, age, pattern), and eventually to re-assemble fragments belonging to the same broken artefact. The novelty of its approach lies in the combination and integration of semantic and geometric analysis and matching into a single decision support platform. The main features of the platform are queries based on semantics and geometry, analysis and comparison of 3D geometry, and semi-automatic re-assembly of fragments.
Attendees will learn about the different functionalities and tools of the platform and how to use them, as well as the archaeological problems that the project is addressing and its implications. Furthermore, possible applications and uses of the platform in the context of cultural heritage will be discussed.
The 3h workshop will be divided into three main parts:
- Introduction to the project and presentation of the platform and its functions
- Hands-on session with exercises
This workshop focuses on the ROCEEH Out of Africa Database (ROAD), an online source of information which brings together data from such varied fields as archaeology, anthropology, paleontology and paleobotany (http://www.roceeh.uni-tuebingen.de/roadweb). The purpose of this workshop is to learn about the structure and content of ROAD by entering data from selected publications into the database. An additional purpose of this workshop will be to learn how to perform a search using predefined queries on the database and then to visualize the search results on maps and in graphs using several tools developed especially for ROAD.
ROAD was conceived to support the interdisciplinary research project “The Role of Culture in Early Expansions of Humans (ROCEEH)”, which aims to study cultural aspects associated with the human expansions of the last 3 million years (http://www.roceeh.net). The database stores published descriptions of material culture, human fossils, animal and plant remains, as well as geographical information about the localities where these finds were discovered. The description of finds as they are entered in ROAD relies on a standardized vocabulary developed within the project and its subsequent publications. ROAD can be queried in a standardized way using SQL on its back-end. We hope that the participants will gain an overview of ROAD that will motivate them to use ROAD for their own research and to collaborate with the ROCEEH project in the future.
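Since the predefined queries mentioned above are ultimately SQL statements run against the database back-end, a miniature example may make the idea concrete. The schema and rows below are invented for illustration and do not reflect ROAD's actual table structure or vocabulary.

```python
# Miniature illustration of back-end SQL querying of the kind ROAD's
# predefined queries perform. Schema, table names and data are invented.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE locality (id INTEGER PRIMARY KEY, name TEXT, lat REAL, lon REAL)")
con.execute("CREATE TABLE find (id INTEGER, locality_id INTEGER, category TEXT)")
con.executemany("INSERT INTO locality VALUES (?, ?, ?, ?)",
                [(1, "Site A", -1.5, 36.0), (2, "Site B", 12.2, 24.5)])
con.executemany("INSERT INTO find VALUES (?, ?, ?)",
                [(1, 1, "lithics"), (2, 1, "fauna"), (3, 2, "lithics")])

# A "predefined query": localities that yielded lithic finds, with counts.
rows = con.execute(
    "SELECT l.name, COUNT(*) FROM locality l "
    "JOIN find f ON f.locality_id = l.id "
    "WHERE f.category = 'lithics' GROUP BY l.name ORDER BY l.name"
).fetchall()
print(rows)
```

Because the localities carry coordinates, the result of such a query can be handed straight to mapping tools, which is what ROAD's visualization tools build on.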
Zara Kanaeva, Andrew W. Kandel
W10 Unlocking the power of (linked) metadata: Documenting, managing and disseminating semantic relations for cultural heritage resources
Linked data and the semantic web are on the increase. So far, however, the huge potential contained in linked data for cultural heritage has only been unlocked to a limited extent. This workshop will address this by showcasing the application of semantic metadata to the cultural heritage field. The workshop will cover the entire lifecycle of metadata: birth/generation, management and dissemination. The application of semantic principles benefits all stages of the cycle, ranging from easier storage of information to better communication of research in a more engaging and intuitive form. The core of the workshop will be a case study regarding the semantic relations pertaining to a castle site in the Netherlands. Participants will be introduced to work already done on the project, after which they will be invited to do some hands-on work with us: adding some relations and narratives, using both standard software and some customized tools.
What attendees will learn:
- insight into the current state and possible benefits of semantic metadata for cultural heritage
- insight into metadata-driven storage, dissemination and visualization of cultural heritage data
- experience with some practical approaches and examples regarding the above
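Semantic relations of the kind the workshop deals with are commonly expressed as subject-predicate-object triples that can be queried by pattern. The tiny in-memory sketch below illustrates that model; its vocabulary and identifiers are invented for the example, whereas real projects would use RDF stores and standard ontologies such as CIDOC CRM.

```python
# A toy "triple store": semantic relations as subject-predicate-object
# tuples, queried by pattern matching. Vocabulary and IDs are invented.
triples = {
    ("castle:Keep", "ex:wasBuiltIn", "1250"),
    ("castle:Keep", "ex:isPartOf", "castle:Site"),
    ("castle:Moat", "ex:isPartOf", "castle:Site"),
}

def query(s=None, p=None, o=None):
    """Return all triples matching the given pattern (None = wildcard)."""
    return sorted(t for t in triples
                  if (s is None or t[0] == s)
                  and (p is None or t[1] == p)
                  and (o is None or t[2] == o))

# Everything recorded as part of the castle site:
print(query(p="ex:isPartOf", o="castle:Site"))
```

Adding a relation is just adding a triple, which is why semantic metadata is so well suited to the incremental, hands-on enrichment the workshop proposes.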
Martijn van der Kaaij
Dealing with 14C dates is one of THE essential activities that occupy many archaeologists today. OxCal is an impressive instrument for many daily routines involving 14C dates. But there are some aspects where a proper statistical tool comes in handy. In this workshop we will explore the possibilities of R for dealing with 14C dates.
We will briefly introduce several R packages that are specifically designed for this purpose: rcarbon, ArchaeoPhases and archSeries. Then we will turn to Bchron, a mature and widely-used package, and oxcAAR, our package to connect OxCal to R. We will conduct simple calibration as well as Bayesian calibration and visualise the results with R's powerful tools.
In the last part of the workshop we will use R as a simulation engine for OxCal, explore a Bayesian sequence and the confidence intervals of a sum calibration. Data for this bulk analysis can be obtained from openly accessible archives via our R package c14databases.
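For readers unfamiliar with what "simple calibration" actually computes, here is a language-neutral toy sketch (in Python, although the workshop itself works in R): for each candidate calendar year, the measured age is compared against the calibration curve under a normal error model, and the resulting densities are normalised. The "curve" below is a fabricated straight line, NOT IntCal; only the algorithm is meant to be illustrative.

```python
# What simple 14C calibration computes, on a fabricated linear curve.
# Real calibrations use the IntCal curves via packages such as rcarbon,
# Bchron or oxcAAR; this sketch only shows the underlying algorithm.
import math

# toy curve: calendar age BP -> (14C age BP, curve error)
curve = {t: (1.05 * t, 20.0) for t in range(2800, 3201)}

def calibrate(c14_age, c14_err):
    """Normal-likelihood calibration of a measurement onto the toy curve."""
    dens = {}
    for t, (mu, curve_err) in curve.items():
        var = c14_err ** 2 + curve_err ** 2   # measurement + curve error
        dens[t] = math.exp(-((c14_age - mu) ** 2) / (2 * var))
    total = sum(dens.values())
    return {t: d / total for t, d in dens.items()}

post = calibrate(3150, 30)
mode = max(post, key=post.get)
print("most likely calendar age BP:", mode)
```

With a real, wiggly calibration curve the same procedure yields the familiar multi-modal probability distributions, which is why proper statistical tooling for summarising them is so valuable.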
To fully take advantage of the practical part of the workshop you must bring your own laptop. Please install or update R and RStudio, and install the above mentioned packages (instructions available at https://isaakiel.github.io).
The workshop is primarily aimed at archaeologists who already have basic knowledge of R and radiocarbon data. Nevertheless, participants who do not yet have this experience are also welcome.
This workshop is organized by the ISAAKiel group (Initiative for Statistical Analysis in Archaeology Kiel) https://isaakiel.github.io
Martin Hinz, Clemens Schmid
Archaeologists, historians, and other researchers in the digital humanities and social sciences commonly deal with network data that documents diverse types of relationships. Many of these relationships are multivariate, are naturally embedded in space, and change through time. The main network software packages for analysis and visualisation are so far surprisingly limited and hard to use when it comes to dealing with multiple types of relationships, space and time.
This workshop introduces The Vistorian (http://vistorian.net): an intuitive, easy-to-learn online platform that provides interactive visualisations for different kinds of networks. It enables so-called multiplex networks (relationships of different types between the same set of nodes) to be represented as a series of overlapping layers, and the visualisation dynamically changes as these layers are ticked on or off. Nodes can be assigned geographical locations, such as archaeological sites or even locations within a single trench. The user can easily switch between different representation modes, including a geographical map, an adjacency matrix, a timeline visualisation and the common node-link visualisation.
Most crucially for archaeological and historical network research, a time-stamp can be assigned to each relationship documenting the time-range within which it existed. This allows for the changing structure of the network to be dynamically explored.
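Data with the properties just described (typed links between nodes, each with a time-stamp and optional locations) boils down to a simple table with one row per relationship. The sketch below writes such a table as CSV; the column names and example values are assumptions for illustration, not a fixed Vistorian schema, since the column mapping is configured when you import your data.

```python
# Sketch of a link table of the kind network tools like The Vistorian
# import: one row per relationship, with type, time and location columns.
# Column names and data are invented for the example.
import csv, io

links = [
    # source, target, relation type, date, location
    ("Site A", "Site B", "obsidian exchange", "2017-01-01", "trench 1"),
    ("Site A", "Site C", "ceramic similarity", "2017-06-01", "trench 2"),
    ("Site B", "Site C", "obsidian exchange", "2018-01-01", "trench 1"),
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["source", "target", "type", "time", "location"])
writer.writerows(links)
print(buf.getvalue())
```

Keeping the type, time and location information in their own columns is what later lets the tool offer layer toggles, timelines and map views without any reformatting.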
The workshop aims to promote The Vistorian as a simple-to-use tool for network exploration and guides through the individual steps involved in that process. The structure of the workshop will be as follows:
- We will start with a brief overview of the main network terminology and some basic concepts in network visualization. This aims to give everybody unfamiliar with network visualization a solid background.
- We will demonstrate The Vistorian: how to load data, which visualisations exist, how to interact with each visualisation, and when to choose which visualisation.
- We will walk participants through a simple archaeological demo data set: how to upload and how to explore it.
- We will give participants an overview of the possibilities and constraints of formatting their own archaeological data into the formats used by The Vistorian. No programming is required at any stage; rather, this is purely a tutorial on how best to organize one’s data into Excel sheets.
- Finally, we provide time and assistance for an individual exploration of participants’ own data.
The above steps are designed explicitly for an archaeological target audience, by highlighting those features that are most crucial for archaeological network research and illustrating these through an original archaeological case study with a real archaeological dataset.
Delegates should bring their own laptop and charger, and have Google Chrome installed. Delegates who wish to use The Vistorian for their own archaeological data set should bring this along in a structured digitised format (Excel spreadsheets, CSV, database). Delegates will need to be able to connect to the internet for most of the workshop. However, The Vistorian will not upload any data to our servers at any time; all participants’ data is stored completely locally.
Benjamin Bach, Tom Brughmans
W13 The basics of deep learning for archaeological site detection on remote sensor data
For the protection of archaeology it is important to know where sites are located. This can be done through manual analysis of remote sensor data, although the interpretation of this data is biased by the knowledge and interests of the expert and by their choice of remote sensor data and image enhancements. Therefore, this field would highly benefit from automation. Recently, case studies using deep learning have been praised for their accuracy, resilience and overall potential.
In this workshop we will introduce deep learning with a tutorial using a Convolutional Neural Network (CNN) on both multispectral aerial images and LiDAR-derived terrain models to detect barrow monuments. We will show how to successfully apply this to small datasets even though, traditionally, deep learning is known to require big datasets. This will be demonstrated by the application of data augmentation, optimised networks and transfer learning.
This tutorial should provide the basic tools to further apply this to your own dataset which could be, but is not limited to, remote sensor data. It could be argued that almost every image classification problem, or any data problem for that matter, could benefit from deep learning.
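Of the techniques named above, data augmentation is the easiest to show compactly: flips and 90-degree rotations turn one labelled image patch into eight training samples, which is one way small archaeological datasets are stretched to feed a CNN. The pure-Python sketch below is only an illustration of the idea, not the workshop's actual Keras/TensorFlow pipeline.

```python
# Toy image augmentation: the 8 dihedral variants (rotations + mirror
# images) of a small patch, here represented as a nested list. In the
# workshop setting this would be done with Keras/TensorFlow utilities.
def rot90(img):
    """Rotate a 2D grid 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def flip(img):
    """Mirror a 2D grid left-right."""
    return [row[::-1] for row in img]

def augment(img):
    """Return the 8 dihedral variants of an image patch."""
    variants = []
    current = img
    for _ in range(4):
        variants.append(current)
        variants.append(flip(current))
        current = rot90(current)
    return variants

patch = [[1, 0], [0, 0]]        # 2x2 "terrain" patch with one bright pixel
print(len(augment(patch)))      # 8 augmented samples from one input
```

Augmentation of this kind is well suited to detection on terrain models, because a barrow remains a barrow no matter which way up the patch is presented to the network.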
Required equipment: Participants can follow the tutorial on the screen. If they wish to actively participate in the tutorial, they should bring their own laptop and preferably have Python, Keras and TensorFlow installed.
Required skill: Basic algebra and coding experience would help, but beginners are also welcome.
Iris Kramer, Jonathon Hare