The 'digital revolution' is rapidly changing all aspects of our lives. The year 2002, when humanity for the first time stored more information digitally than in analogue form, can be seen as the birth of the 'digital age' (Hilbert & López 2011). Digitization has also revolutionized everyday life in science. Without it, it would not have been possible to decipher the human genome, develop complex climate models, or use diagnostic models in medicine (Drenth 2001).
In the face of global problems such as the loss of biodiversity, climate change and land-use change, large amounts of statistically validated global data with high temporal and spatial resolution are needed. DIG-IT! addresses these challenges in the ecological sciences. In particular, the development, services and stability of ecosystems and their response to climate and land-use change are questions of high importance to our society (Bonan & Doney 2018). To answer such questions, models need to be developed and then calibrated with ecological data. While the collection of ecological data has become much easier over time and can even be done in 'citizen science' projects (e.g. digital images of flowering phenology), the evaluation of vast quantities of image data remains a major challenge. Thus, the problem is not so much the amount of available data, but rather the expert time needed to evaluate it manually.
Automatic image analysis and recognition (machine learning) promises a 'quantum leap' in the respective disciplines by providing more high-resolution, pre-processed data. Since 2010, the performance of state-of-the-art image-analysis methods for the recognition and classification of objects in image and video data has been evaluated annually in a computer vision competition (ILSVRC). In 2012, a deep convolutional neural network (DCNN) won this competition for the first time (AlexNet, Krizhevsky et al. 2012). In contrast to conventional artificial neural networks, a DCNN has many more layers (hence 'deep'), and the structure of such networks partly mimics human vision. Just one year later, in 2013, only DCNN-based methods participated in the competition, and the general methodology has advanced much further since this breakthrough.
The fundamental difference between the new and the conventional methods is the autonomous learning of the networks. Previously, features for object recognition in images had to be defined manually by experts; now they are created automatically as part of the learning process. Ecological research can greatly benefit from this automation. Even now, well-defined object-detection tasks can be completed with relatively little training data, achieving comparable accuracy while accelerating the analysis by several orders of magnitude. The time saved facilitates the rapid processing of current topics in ecological research and also allows certain research questions to be tackled for the first time, for example the recognition of individuals in animal populations.
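The contrast between hand-crafted and learned features can be illustrated with the core operation of a DCNN layer. The following minimal NumPy sketch (an illustration, not part of the DIG-IT! codebase) applies a classic hand-designed Sobel filter to a toy image; in a DCNN, the numbers in such kernels are not fixed by an expert but are adjusted automatically during training.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation: the basic operation of a convolutional layer."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A hand-crafted vertical-edge filter (Sobel), the kind of feature that
# used to be designed manually; a DCNN learns such kernels from data.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

# Toy image: dark left half, bright right half, i.e. one vertical edge.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

response = conv2d(image, sobel_x)
print(response.shape)  # (4, 4)
print(response.max())  # 4.0 -- strongest response exactly at the edge
```

A trained DCNN stacks many such learned filters in sequence, with early layers responding to edges and textures and deeper layers to increasingly object-like patterns.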
Our DIG-IT! team approaches these challenges integratively by connecting mathematics, computer science and applied ecological research questions. Development and application thus interact with clearly defined goals, following the overall aim of harnessing digitization for the ecological sciences.
The project "DIG-IT!" is funded by the European Social Fund and the Ministry for Education, Science and Culture of the federal state of Mecklenburg-Vorpommern.