Exploring the Cutting-Edge of AI in Earth Surface Research: A Glimpse into a High-Impact Panel Discussion

By Ty Tuff

As a professional data scientist at an NSF synthesis center focused on the synergy between biology, computer science, and Earth's surface processes, I often find myself immersed in the intriguing world of AI and its applications. It was an honor to be invited as a panelist for a highly anticipated session at a recent conference, titled "AI in Earth Surface Research - Challenges and Opportunities." This event, held on Thursday, 18 May 2023, from 3:45 to 4:45 pm, proved to be one of the most heavily attended sessions of the entire conference, delving into advanced aspects of AI integration in Earth surface research.

I shared the panel stage with esteemed colleagues Paola Passalacqua and Daniel Buscombe, and our discussion was expertly moderated by Chris Jenkins. The event was organized by Gregory Tucker and Lynn McCready.

Following some brief introductions, Chris, our moderator, presented four thought-provoking questions that ignited an engaging and dynamic conversation. I am happy to share the outlines I prepared for those discussions with you here; I hope they offer valuable insights into the world of AI in Earth surface research. Here are the four questions I was provided ahead of time, along with the outlines I prepared for answering them in front of a large audience:

Introduction

Artificial intelligence (AI) has emerged as a powerful tool for advancing the understanding of Earth surface processes and driving innovative solutions to pressing environmental challenges. Earth surface scientists stand to benefit greatly from familiarizing themselves with seminal AI works and their applications across various research domains. Pioneering papers such as LeCun, Bengio, and Hinton's review of deep learning [1] and Krizhevsky, Sutskever, and Hinton's groundbreaking work on convolutional neural networks [2] have laid the foundation for the integration of AI techniques into Earth surface research. Key advancements, including Hochreiter and Schmidhuber's development of Long Short-Term Memory networks [3] and the exploration of geometric deep learning by Bronstein and colleagues [4], have further expanded the range of AI applications in areas such as climate dynamics [5], hydrological modeling [6], geomorphology [7], and ecological modeling [8]. This overview aims to provide Earth surface scientists with an essential understanding of AI techniques, guided by a thread of seminal works, to facilitate interdisciplinary collaboration and contribute to the development of innovative environmental solutions.

  1. LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436-444. DOI: 10.1038/nature14539
  2. Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In Advances in neural information processing systems (pp. 1097-1105). DOI:10.1145/3065386
  3. Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural computation, 9(8), 1735-1780. DOI: 10.1162/neco.1997.9.8.1735
  4. Bronstein, M. M., Bruna, J., LeCun, Y., Szlam, A., & Vandergheynst, P. (2017). Geometric deep learning: going beyond Euclidean data. IEEE Signal Processing Magazine, 34(4), 18-42. DOI: 10.1109/MSP.2017.2693418
  5. Reichstein, M., Camps-Valls, G., Stevens, B., Jung, M., Denzler, J., Carvalhais, N., & Prabhat. (2019). Deep learning and process understanding for data-driven Earth system science. Nature, 566(7743), 195-204. DOI: 10.1038/s41586-019-0912-1
  6. Kratzert, F., Klotz, D., Brenner, C., Schulz, K., & Herrnegger, M. (2018). Rainfall–runoff modelling using long short-term memory (LSTM) networks. Hydrology and Earth System Sciences, 22(11), 6005-6022. DOI: 10.5194/hess-22-6005-2018
  7. Schmidt, J., Evans, I. S., & Brinkmann, J. (2018). Comparison of polynomial models for land surface curvature calculation. International Journal of Geographical Information Science, 32(2), 324-352. DOI: 10.1080/13658816.2017.1400549
  8. Thuiller, W., Georges, D., Engler, R., & Breiner, F. (2019). Package ‘biomod2’: Ensemble platform for species distribution modeling. URL: https://cran.r-project.org/web/packages/biomod2/index.html

 

AI in Earth Surface Research: Challenges and Opportunities in Integrating Physics-Based and AI Models
Question 1: How are (physics) numerical/process models being joined with AI models? What's the method?
Answer 1: 

In recent years, we've seen a remarkable fusion of physics-based numerical and process models with AI models, and this has truly transformed the field of Earth surface research. There are three main methods that have made this possible: hybrid modeling, data-driven model enhancement, and model emulation.

Now, hybrid modeling is a fantastic approach that combines the solid foundation of physics-based models with the adaptability of AI models. This allows researchers to tackle areas where traditional models might not be as accurate or detailed. By leveraging the strengths of both modeling paradigms, we can achieve improved predictive capabilities and insights.

Data-driven model enhancement is another powerful method that utilizes AI algorithms like deep learning to uncover complex patterns in large datasets. By identifying these hidden patterns, we can enhance the representation of specific processes within physics-based models, ultimately leading to better performance and accuracy.

Lastly, we have model emulation, which is a game-changing technique that employs AI models to replicate the behavior of computationally demanding physics-based models. This significantly reduces the computational cost and time needed for simulations, sensitivity analysis, and uncertainty quantification, making research more efficient and accessible.

These innovative methods are pushing the boundaries of Earth surface research, taking advantage of the complementary strengths of both AI and physics-based models. As a result, we're witnessing groundbreaking discoveries and more informed decision-making across a wide range of environmental applications.

In recent years, the integration of artificial intelligence (AI) techniques, specifically machine learning (ML) algorithms, with traditional physics-based numerical and process models has emerged as a promising approach to improve Earth surface research. By combining these two methodologies, researchers can leverage the strengths of both approaches to enhance understanding, prediction, and decision-making in various environmental applications.

  1. Coupling AI and Physics-Based Models: Methods and Strategies

a. Hybrid Modeling: This approach involves the combination of physics-based models with AI algorithms to create an enhanced model. The physics-based model provides the fundamental principles governing the system, while the AI model can fill in the gaps where the physics-based model lacks detail or accuracy [1]. This method allows researchers to incorporate complex interactions that may be difficult to model explicitly within the physics-based framework. The integration of AI and physics-based models has opened up new avenues for Earth surface research, addressing complex challenges and opportunities. For instance, in landslide prediction, hybrid models have been developed that combine machine learning techniques with physics-based slope stability analysis to provide more accurate assessments of landslide susceptibility [2]. In hydrological modeling, AI-assisted approaches have been employed to enhance the prediction of river discharge and water resource availability by incorporating non-linear interactions and spatial dependencies that are difficult to capture in traditional models [3]. Moreover, in geomorphology, researchers have utilized hybrid modeling to study the evolution of landscapes, combining physics-based models with AI algorithms to identify patterns in sediment transport and erosion processes [4].

1.     Karpatne, A., Ebert-Uphoff, I., Ravela, S., Babaie, H., & Kumar, V. (2019). Machine Learning for the Geosciences: Challenges and Opportunities. IEEE Transactions on Knowledge and Data Engineering, 31(8), 1544-1554. DOI: 10.1109/TKDE.2018.2868368 

2.     Pham, B. T., Tien Bui, D., Pourghasemi, H. R., Indra, P., & Dholakia, M. B. (2016). Landslide susceptibility assessment in the Uttarakhand area (India) using GIS: a comparison study of prediction capability of naïve Bayes, multilayer perceptron neural networks, and functional trees methods. Theoretical and Applied Climatology, 128(3-4), 255-273. DOI: 10.1007/s00704-015-1678-0

3.     Kratzert, F., Klotz, D., Brenner, C., Schulz, K., & Herrnegger, M. (2018). Rainfall–runoff modelling using Long Short-Term Memory (LSTM) networks. Hydrology and Earth System Sciences, 22(11), 6005-6022. DOI: 10.5194/hess-22-6005-2018 

4.     Grieve, S. W., Mudd, S. M., & Clubb, F. J. (2020). How does grid-resolution modulate the topographic expression of geomorphic processes? Earth Surface Dynamics, 8(1), 87-106. DOI: 10.5194/esurf-8-87-2020
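One common form of hybrid modeling is residual learning: the physics-based model supplies the first-order prediction, and a data-driven component is fit to whatever the physics misses. The sketch below illustrates the idea on synthetic, hypothetical data, with an ordinary least-squares correction standing in for the neural networks typically used in practice:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "physics-based" model: linear in slope, deliberately missing the
# quadratic term present in the (synthetic) true process.
def physics_model(slope):
    return 2.0 * slope

# Synthetic observations of the true process (hypothetical data).
slope = rng.uniform(0.0, 1.0, 200)
observed = 2.0 * slope + 1.5 * slope**2 + rng.normal(0.0, 0.05, 200)

# Hybrid step: fit a data-driven correction to the physics model's residuals.
residual = observed - physics_model(slope)
A = np.column_stack([np.ones_like(slope), slope, slope**2])
coef, *_ = np.linalg.lstsq(A, residual, rcond=None)

def hybrid_model(s):
    # physics prediction + learned correction
    return physics_model(s) + coef[0] + coef[1] * s + coef[2] * s**2

err_phys = np.mean((observed - physics_model(slope)) ** 2)
err_hybrid = np.mean((observed - hybrid_model(slope)) ** 2)
```

Because the correction recovers the missing quadratic term, the hybrid error falls to roughly the observation-noise level, well below the physics-only error.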

 

b. Data-Driven Model Enhancement: AI algorithms, such as deep learning, can be used to learn complex patterns from large datasets and improve the representation of specific processes in physics-based models [1]. For example, AI can be used to parameterize sub-grid scale processes that are difficult to represent explicitly in global climate models. The use of AI algorithms to enhance physics-based models has led to significant advancements in various fields. In global climate modeling, deep learning techniques have been employed to improve the representation of cloud microphysics, leading to more accurate predictions of cloud formation and their impact on climate [2]. Similarly, in oceanography, AI has been utilized to refine the parameterization of sub-grid scale processes such as eddy mixing and small-scale turbulence, ultimately improving the accuracy of ocean circulation and temperature predictions [3]. In the realm of atmospheric sciences, machine learning algorithms have been employed to better represent sub-grid scale convection processes, resulting in more accurate weather forecasts and improved understanding of the Earth's energy balance [4].

1.     Karpatne, A., Ebert-Uphoff, I., Ravela, S., Babaie, H., & Kumar, V. (2019). Machine Learning for the Geosciences: Challenges and Opportunities. IEEE Transactions on Knowledge and Data Engineering, 31(8), 1544-1554. DOI: 10.1109/TKDE.2018.2868368 

2.     Gentine, P., Pritchard, M., Rasp, S., Reinaudi, G., & Yacalis, G. (2018). Could machine learning break the convection parameterization deadlock? Geophysical Research Letters, 45(11), 5742-5751. DOI: 10.1029/2018GL078202 

3.     Bolton, T., & Zanna, L. (2019). Applications of Deep Learning to Ocean Data Inference and Subgrid Parameterization. Journal of Advances in Modeling Earth Systems, 11(1), 376-399. DOI: 10.1029/2018MS001472 

4.     Brenowitz, N. D., & Bretherton, C. S. (2018). Prognostic validation of a neural network unified physics parameterization. Geophysical Research Letters, 45(12), 6289-6298. DOI: 10.1029/2018GL078510 
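To make the parameterization idea concrete, here is a minimal synthetic sketch: a "coarse" model sees only cell means, while the target subgrid flux depends on within-cell variability, so a regression is trained to predict the unresolved flux from the resolved state. A linear fit stands in for the neural networks used in the cited work, and all data are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# High-resolution "truth": 100 coarse cells, each containing 16 fine-grid values.
fine = rng.normal(0.0, 1.0, (100, 16))

# The coarse model resolves only the cell mean; the subgrid flux (hypothetical)
# also depends on within-cell variance, which the coarse model cannot represent.
coarse_mean = fine.mean(axis=1)
subgrid_flux = fine.var(axis=1) + 0.1 * coarse_mean

# Data-driven parameterization: predict the unresolved flux from resolved state.
A = np.column_stack([np.ones_like(coarse_mean), coarse_mean, coarse_mean**2])
w, *_ = np.linalg.lstsq(A, subgrid_flux, rcond=None)
pred = A @ w

# The learned parameterization should beat a constant (climatological) estimate.
mse_fit = np.mean((pred - subgrid_flux) ** 2)
mse_const = np.var(subgrid_flux)
```

The same train-on-high-resolution, deploy-in-coarse-model pattern underlies the cloud and convection parameterizations cited above.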

 

c. Model Emulation: AI models can be trained to emulate the behavior of computationally expensive physics-based models [1]. This can significantly reduce the computational cost and time, allowing for more efficient simulations, sensitivity analysis, and uncertainty quantification. The use of AI models to emulate computationally expensive physics-based models has demonstrated considerable benefits across various scientific fields. In climate science, AI emulators have been developed to approximate the behavior of complex Earth System Models (ESMs), allowing for more rapid exploration of alternative climate scenarios and mitigation strategies [2]. In the field of nuclear engineering, machine learning emulators have been employed to replicate the outcomes of computationally demanding simulations of nuclear reactor behavior, enabling more efficient design optimization and safety assessments [3]. In the area of computational fluid dynamics (CFD), AI-based emulators have been used to approximate high-fidelity simulations of fluid flow, reducing computational cost and time for applications such as aerodynamic design and optimization in the aerospace industry [4].

1.     Karpatne, A., Ebert-Uphoff, I., Ravela, S., Babaie, H., & Kumar, V. (2019). Machine Learning for the Geosciences: Challenges and Opportunities. IEEE Transactions on Knowledge and Data Engineering, 31(8), 1544-1554. DOI: 10.1109/TKDE.2018.2868368 

2.     Schneider, T., Lan, S., Stuart, A., & Teixeira, J. (2017). Earth System Modeling 2.0: A Blueprint for Models That Learn From Observations and Targeted High-Resolution Simulations. Geophysical Research Letters, 44(24), 12,396-12,417. DOI: 10.1002/2017GL076101 

3.     Kratzert, F., Klotz, D., Brenner, C., Schulz, K., & Herrnegger, M. (2018). Rainfall-runoff modelling using Long Short-Term Memory (LSTM) networks. Hydrology and Earth System Sciences, 22(11), 6005-6022. DOI: 10.5194/hess-22-6005-2018 

4.     Kutz, J. N., Brunton, S. L., Brunton, B. W., & Proctor, J. L. (2016). Dynamic Mode Decomposition: Data-Driven Modeling of Complex Systems. SIAM. DOI: 10.1137/1.9781611974508 
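The emulation workflow can be sketched in a few lines: run the expensive model at a handful of design points, fit a fast surrogate, then query the surrogate freely. This is a synthetic illustration with a cheap stand-in for the simulator and a polynomial surrogate; Gaussian processes or neural networks are the more typical choices:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for an expensive physics-based simulator (hypothetical response).
def expensive_model(p):
    return np.sin(3.0 * p) + 0.5 * p

# Run the expensive model at a small number of design points...
train_p = np.linspace(0.0, 1.0, 15)
train_y = expensive_model(train_p)

# ...and fit a cheap emulator on those runs.
emulator = np.polynomial.Polynomial.fit(train_p, train_y, deg=5)

# The emulator can now be queried thousands of times (e.g. for sensitivity
# analysis or uncertainty quantification) at negligible cost.
sample_p = rng.uniform(0.0, 1.0, 1000)
max_err = np.max(np.abs(emulator(sample_p) - expensive_model(sample_p)))
```

For this smooth response, fifteen "expensive" runs suffice for the surrogate to track the simulator to within about one percent across the input range.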

 

  2. Challenges in Integrating AI and Physics-Based Models

a. Data Availability and Quality: To effectively train AI models, large amounts of high-quality, labeled data are required [1]. However, Earth surface data can be limited in spatial and temporal resolution, and may contain errors or uncertainties. This can impact the effectiveness of AI models when integrated with physics-based models. The integration of AI and physics-based models in Earth surface research has faced various challenges related to data quality and availability. For instance, in the field of remote sensing, the limited temporal and spatial resolution of satellite imagery can hinder the training of AI models to accurately detect and monitor land-use changes or deforestation patterns [2]. In hydrological modeling, the scarcity of in-situ measurements for river discharge or groundwater levels can limit the ability of AI models to provide accurate predictions when combined with physics-based approaches [3]. In the realm of seismology, the presence of noise and uncertainties in seismic data can affect the performance of AI models in detecting and locating earthquakes or assessing seismic hazard [4].

1.     Karpatne, A., Ebert-Uphoff, I., Ravela, S., Babaie, H., & Kumar, V. (2019). Machine Learning for the Geosciences: Challenges and Opportunities. IEEE Transactions on Knowledge and Data Engineering, 31(8), 1544-1554. DOI: 10.1109/TKDE.2018.2868368 

2.     Gorelick, N., Hancher, M., Dixon, M., Ilyushchenko, S., Thau, D., & Moore, R. (2017). Google Earth Engine: Planetary-scale geospatial analysis for everyone. Remote Sensing of Environment, 202, 18-27. DOI: 10.1016/j.rse.2017.06.031 

3.     Kratzert, F., Klotz, D., Brenner, C., Schulz, K., & Herrnegger, M. (2018). Rainfall–runoff modelling using Long Short-Term Memory (LSTM) networks. Hydrology and Earth System Sciences, 22(11), 6005-6022. DOI: 10.5194/hess-22-6005-2018 

4.     Perol, T., Gharbi, M., & Denolle, M. (2018). Convolutional Neural Network for Earthquake Detection and Location. Science Advances, 4(2), e1700578. DOI: 10.1126/sciadv.1700578 

 

b. Model Interpretability and Trust: AI models, particularly deep learning models, can be complex and difficult to interpret [1]. As a result, integrating them with physics-based models may raise concerns about the transparency and trustworthiness of the combined model. The integration of AI and physics-based models has faced issues related to model interpretability and trust in various research domains. For example, in the field of hydrology, AI models have been used to predict river discharge, but their complex inner workings make it difficult for researchers to understand the underlying mechanisms driving these predictions [2]. This lack of transparency can create skepticism about the reliability of AI-enhanced hydrological models. In the domain of environmental risk assessment, AI models have been employed to estimate the probability of natural hazards such as landslides or wildfires [3]. However, the opaque nature of deep learning models can raise concerns about the accuracy and trustworthiness of these risk assessments, especially when used for decision-making by policymakers or emergency management agencies. In climate science, the integration of AI models with Earth System Models may cause apprehension about the validity of climate projections and their implications for climate change mitigation and adaptation policies [4].

1.     Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., & Pedreschi, D. (2019). A Survey of Methods for Explaining Black Box Models. ACM Computing Surveys (CSUR), 51(5), 1-42. DOI: 10.1145/3236009 

2.     Kratzert, F., Klotz, D., Brenner, C., Schulz, K., & Herrnegger, M. (2018). Rainfall–runoff modelling using Long Short-Term Memory (LSTM) networks. Hydrology and Earth System Sciences, 22(11), 6005-6022. DOI: 10.5194/hess-22-6005-2018 

3.     Zeiler, M. D. (2018). Landslide Susceptibility Mapping Using a Neural Network. In Landslide Dynamics: ISDR-ICL Landslide Interactive Teaching Tools (pp. 483-488). Springer, Cham. DOI: 10.1007/978-3-319-57777-7_34 

4.     Reichstein, M., Camps-Valls, G., Stevens, B., Jung, M., Denzler, J., Carvalhais, N., & Prabhat. (2019). Deep learning and process understanding for data-driven Earth system science. Nature, 566(7743), 195-204. DOI: 10.1038/s41586-019-0912-1 
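One widely used, model-agnostic way to probe what a black-box model relies on is permutation importance: shuffle one input and measure how much the error grows. The sketch below applies it to a toy discharge regression; the data, the linear "black box", and the variable names are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical discharge data: strongly driven by rainfall, weakly by wind.
rain = rng.uniform(0.0, 1.0, 300)
wind = rng.uniform(0.0, 1.0, 300)
discharge = 5.0 * rain + 0.2 * wind + rng.normal(0.0, 0.1, 300)

# Fit a simple linear model (a stand-in for any black-box predictor).
X = np.column_stack([rain, wind])
A = np.column_stack([np.ones(300), X])
w, *_ = np.linalg.lstsq(A, discharge, rcond=None)

def predict(M):
    return w[0] + M @ w[1:]

base_mse = np.mean((predict(X) - discharge) ** 2)

# Permutation importance: shuffling an input the model relies on should
# increase the error far more than shuffling an unimportant one.
def importance(j):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    return np.mean((predict(Xp) - discharge) ** 2) - base_mse

imp_rain, imp_wind = importance(0), importance(1)
```

Diagnostics like this do not open the black box, but they give domain scientists a check that the model's sensitivities match physical expectations.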

 

c. Model Generalizability: AI models can be sensitive to the training data, potentially leading to overfitting or limited generalizability [1]. Ensuring that AI-enhanced physics-based models can accurately predict outcomes across different locations, timescales, and conditions is critical for their utility in Earth surface research. The generalizability of AI-enhanced physics-based models has presented challenges across various Earth surface research domains. In the field of flood prediction, AI models trained on data from specific regions or river basins may struggle to accurately predict flood events in areas with different climate conditions or hydrological characteristics, potentially limiting their broader applicability [2]. In the domain of vegetation dynamics modeling, AI models that rely on remote sensing data may face challenges when generalizing to new locations with different land-cover types or climate regimes, affecting the models' ability to accurately simulate vegetation responses to climate change or human interventions [3]. In the area of air quality modeling, AI models trained on data from urban environments may not generalize well to rural or remote areas, where the sources and dynamics of air pollutants can be substantially different [4].

1.     Arlot, S., & Celisse, A. (2010). A survey of cross-validation procedures for model selection. Statistics Surveys, 4, 40-79. DOI: 10.1214/09-SS054 

2.     Dottori, F., Szewczyk, W., Ciscar, J. C., Zhao, F., Alfieri, L., Hirabayashi, Y., ... & Feyen, L. (2018). Increased human and economic losses from river flooding with anthropogenic warming. Nature Climate Change, 8(9), 781-786. DOI: 10.1038/s41558-018-0257-z 

3.     Ganguly, S., Schull, M. A., Samanta, A., Shabanov, N. V., Milesi, C., Nemani, R. R., ... & Myneni, R. B. (2010). Generating global leaf area index from Landsat: Algorithm formulation and demonstration. Remote Sensing of Environment, 114(9), 1853-1865. DOI: 10.1016/j.rse.2010.03.018 

4.     Di, Q., Kloog, I., Koutrakis, P., Lyapustin, A., Wang, Y., & Schwartz, J. (2016). Assessing PM2.5 Exposures with High Spatiotemporal Resolution across the Continental United States. Environmental Science & Technology, 50(9), 4712-4721. DOI: 10.1021/acs.est.5b06121 
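A simple way to expose limited spatial generalizability is leave-one-region-out cross-validation: train on all regions but one and evaluate on the held-out region. In the synthetic sketch below (hypothetical data), one region deliberately behaves differently, and its held-out error is correspondingly larger:

```python
import numpy as np

rng = np.random.default_rng(4)

# Three regions; region 2 has a deliberately different rainfall-runoff slope.
regions = np.repeat([0, 1, 2], 50)
slopes = np.array([2.0, 2.2, 4.0])
x = rng.uniform(0.0, 1.0, 150)
y = slopes[regions] * x + rng.normal(0.0, 0.05, 150)

def held_out_error(region):
    # Train a linear model on the other regions, test on the held-out one.
    train = regions != region
    A = np.column_stack([np.ones(train.sum()), x[train]])
    w, *_ = np.linalg.lstsq(A, y[train], rcond=None)
    pred = w[0] + w[1] * x[~train]
    return np.mean((pred - y[~train]) ** 2)

errs = [held_out_error(r) for r in (0, 1, 2)]
```

The anomalous region's held-out error dwarfs the others, which is exactly the signal that a model trained elsewhere should not be trusted there without adaptation.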

 

  3. Opportunities for AI in Earth Surface Research

a. Enhanced Prediction and Understanding: Integrating AI with physics-based models has the potential to improve predictions of Earth surface processes, enhance understanding of complex interactions, and support decision-making in environmental management and policy [1].

The integration of AI and physics-based models has opened up new opportunities for Earth surface research across various domains. In the field of hydrology, AI-enhanced models have improved the forecasting of extreme events such as floods and droughts, supporting water resource management and risk reduction efforts [2]. In the area of natural hazard assessment, AI-driven models have enabled more accurate predictions of events like earthquakes, landslides, and volcanic eruptions, facilitating more effective disaster preparedness and response strategies [3]. In the domain of agriculture, AI-enhanced models have been used to predict crop yields, soil health, and the impacts of climate change on agricultural production, helping inform sustainable farming practices and food security policies [4]. In the realm of urban planning, AI-powered models have been employed to assess the impacts of land-use changes on local environments, guiding the development of more resilient and sustainable cities [5].

1.     Reichstein, M., Camps-Valls, G., Stevens, B., Jung, M., Denzler, J., Carvalhais, N., & Prabhat. (2019). Deep learning and process understanding for data-driven Earth system science. Nature, 566(7743), 195-204. DOI: 10.1038/s41586-019-0912-1 

2.     Kratzert, F., Klotz, D., Brenner, C., Schulz, K., & Herrnegger, M. (2018). Rainfall–runoff modelling using Long Short-Term Memory (LSTM) networks. Hydrology and Earth System Sciences, 22(11), 6005-6022. DOI: 10.5194/hess-22-6005-2018 

3.     Intrieri, E., Gigli, G., Mugnai, F., Fanti, R., & Casagli, N. (2018). Design and implementation of a landslide early warning system. Engineering Geology, 241, 105-117. DOI: 10.1016/j.enggeo.2012.07.017

4.     Hasan, M. M., Chopin, S. F., Laga, H., & Miklavcic, S. J. (2018). Detection and analysis of wheat spikes using convolutional neural networks. Plant Methods, 14, 100. DOI: 10.1186/s13007-018-0366-8 

5.     Ching, J., & Mills, G. (2018). Urban form and function as building performance parameters. Building and Environment, 137, 1-13. DOI: 10.1016/j.buildenv.2018.03.037 

 

b. Efficient Computational Methods: AI-driven model emulation and enhancement can help researchers overcome the computational challenges associated with large-scale, high-resolution Earth surface simulations [1].

The application of AI-driven model emulation and enhancement has provided significant opportunities for overcoming computational challenges in various Earth surface research domains. In climate science, AI-based emulators have facilitated efficient exploration of multiple climate scenarios, enabling a broader assessment of climate change impacts and the effectiveness of mitigation strategies [2]. In the field of geophysics, AI-enhanced models have been used to accelerate large-scale, high-resolution simulations of tectonic processes and mantle convection, providing insights into the dynamics of the Earth's interior without the need for prohibitively expensive computations [3]. In the area of hydrological modeling, AI-driven techniques have been employed to enable rapid and efficient simulations of groundwater flow, contaminant transport, and watershed dynamics, supporting more timely and informed water resource management decisions [4]. In the realm of ecosystem modeling, AI-powered model enhancement has been used to overcome computational limitations in simulating biogeochemical processes and vegetation dynamics across large spatial scales and fine resolutions [5]. These examples demonstrate the potential of AI-driven methods to enable more efficient computational approaches in Earth surface research, paving the way for more comprehensive and detailed analyses of complex natural processes.

1.     Rasp, S., Pritchard, M. S., & Gentine, P. (2018). Deep learning to represent subgrid processes in climate models. Proceedings of the National Academy of Sciences, 115(39), 9684-9689. DOI: 10.1073/pnas.1810286115 

2.     Weyn, J. A., Durran, D. R., & Caruana, R. (2019). Can machines learn to predict weather? Using deep learning to predict gridded 500-hPa geopotential height from historical weather data. Journal of Advances in Modeling Earth Systems, 11(8), 2680-2693. DOI: 10.1029/2019MS001705 

3.     Bozdag, E., Peter, D., Lefebvre, M., Komatitsch, D., Tromp, J., Hill, J., ... & Lei, W. (2016). Global adjoint tomography: first-generation model. Geophysical Journal International, 207(3), 1739-1766. DOI: 10.1093/gji/ggw356 

4.     Kratzert, F., Klotz, D., Shalev, G., Klambauer, G., Hochreiter, S., & Nearing, G. S. (2019). Towards learning universal, regional, and local hydrological behaviors via machine learning applied to large-sample datasets. Hydrology and Earth System Sciences, 23(12), 5089-5110. DOI: 10.5194/hess-23-5089-2019 

5.     Fisher, R. A., Koven, C. D., Anderegg, W. R. L., Christoffersen, B. O., Dietze, M. C., Farrior, C. E., ... & Knox, R. G. (2018). Vegetation demographics in Earth System Models: A review of progress and priorities. Global Change Biology, 24(1), 35-54. DOI: 10.1111/gcb.13910

 

c. Interdisciplinary Collaboration: The integration of AI and physics-based models in Earth surface research encourages collaboration between computer scientists, environmental scientists, and other domain experts, fostering the development of innovative solutions to pressing environmental challenges [1].

The integration of AI and physics-based models has spurred interdisciplinary collaboration across various Earth surface research domains, leading to innovative solutions for pressing environmental challenges. In the field of climate science, collaborations between computer scientists, atmospheric scientists, and oceanographers have enabled the development of AI-enhanced Earth system models that can better capture complex climate feedbacks and improve long-term climate projections [2]. In the domain of hydrology, partnerships between AI experts, hydrologists, and remote sensing specialists have facilitated the development of AI-driven models that can more accurately predict flood extent and inundation patterns, informing disaster risk management and mitigation strategies [3]. In the realm of ecology, interdisciplinary teams comprising computer scientists, ecologists, and conservation biologists have worked together to create AI models that can effectively identify biodiversity hotspots and prioritize areas for habitat restoration or species conservation efforts [4]. In the area of geophysics, collaborations between AI researchers, seismologists, and geologists have led to the development of advanced AI models that can efficiently detect and characterize seismic events, contributing to a better understanding of earthquake processes and informing seismic hazard assessment [5].

1.     Reichstein, M., Camps-Valls, G., Stevens, B., Jung, M., Denzler, J., Carvalhais, N., & Prabhat (2019). Deep learning and process understanding for data-driven Earth system science. Nature, 566(7743), 195-204. DOI: 10.1038/s41586-019-0912-1 

2.     Rasp, S., Pritchard, M. S., & Gentine, P. (2018). Deep learning to represent subgrid processes in climate models. Proceedings of the National Academy of Sciences, 115(39), 9684-9689. DOI: 10.1073/pnas.1810286115 

3.     Kratzert, F., Klotz, D., Brenner, C., Schulz, K., & Herrnegger, M. (2018). Rainfall–runoff modelling using Long Short-Term Memory (LSTM) networks. Hydrology and Earth System Sciences, 22(11), 6005-6022. DOI: 10.5194/hess-22-6005-2018 

4.     Jetz, W., McGeoch, M. A., Guralnick, R., Ferrier, S., Beck, J., Costello, M. J., ... & Owens, I. P. (2019). Essential biodiversity variables for mapping and monitoring species populations. Nature Ecology & Evolution, 3(4), 539-551. DOI: 10.1038/s41559-019-0826-1 

5.     Perol, T., Gharbi, M., & Denolle, M. (2018). Convolutional neural network for earthquake detection and location. Science Advances, 4(2), e1700578. DOI: 10.1126/sciadv.1700578

 

Challenges in Assembling Training Datasets for AI in Earth Surface Research

Question 2: Tell us about some of the tricky issues in assembling training datasets. For instance:

  (a) imputation of missing data values?
  (b) How much (or little!) training data is needed?
  (c) ranges of inputs
  (d) generative models

Answer 2:

Assembling training datasets for machine learning applications in Earth surface research is a critical and complex task that demands our full attention and thoughtful planning. One significant challenge we must overcome is handling missing data values, which can arise from incomplete observations or sensor errors. By skillfully applying imputation techniques like mean imputation, linear interpolation, or even more advanced methods such as k-nearest neighbors imputation, we can confidently fill in these gaps and preserve the integrity of our dataset.
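As a small illustration of the imputation options named above, on a made-up sensor series, mean imputation and linear interpolation can be sketched as:

```python
import numpy as np

# A short daily sensor series with gaps (NaN marks missing observations).
series = np.array([14.0, 15.0, np.nan, np.nan, 18.0, 17.5, np.nan, 16.0])

# Mean imputation: replace every gap with the mean of the observed values.
mean_filled = np.where(np.isnan(series), np.nanmean(series), series)

# Linear interpolation: fill each gap from its neighbouring observations,
# usually more faithful for smoothly varying quantities like temperature.
idx = np.arange(series.size)
obs = ~np.isnan(series)
interp_filled = series.copy()
interp_filled[~obs] = np.interp(idx[~obs], idx[obs], series[obs])
```

Mean imputation flattens the temporal structure, while interpolation preserves the local trend; which is appropriate depends on how the data vary and why they are missing.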

Another crucial aspect to consider is determining the optimal amount of training data. Striking the perfect balance is vital, as the required volume of data depends on the complexity of the problem, the machine learning algorithm, and the desired prediction accuracy. While larger datasets can strengthen the model's ability to generalize and prevent overfitting, acquiring high-quality data can strain resources. As researchers, we must thoughtfully balance the trade-offs between data volume and practical constraints.
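A practical way to judge "how much data is enough" is a learning curve: train on growing subsets and watch the held-out error flatten. A synthetic sketch of the procedure, on a hypothetical regression task:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic task: the target depends linearly on x, plus observation noise.
x = rng.uniform(0.0, 1.0, 500)
y = 3.0 * x + rng.normal(0.0, 0.1, 500)
x_val, y_val = x[400:], y[400:]          # fixed hold-out set

def val_error(n):
    # Fit on the first n samples, evaluate on the hold-out set.
    A = np.column_stack([np.ones(n), x[:n]])
    w, *_ = np.linalg.lstsq(A, y[:n], rcond=None)
    pred = w[0] + w[1] * x_val
    return np.mean((pred - y_val) ** 2)

# Where this curve flattens, additional data buys little extra accuracy.
learning_curve = {n: val_error(n) for n in (5, 20, 80, 400)}
```

Once the curve has plateaued near the noise floor, resources are usually better spent on data quality or model structure than on more samples.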

Finally, we often face the challenge of navigating diverse ranges of inputs in Earth surface research. Input data from various sources can exhibit different scales, units, or distributions, which could potentially impact machine learning model performance. By preprocessing the data through normalization or standardization, we can ensure that input features are on a comparable scale. 

These preprocessing steps not only mitigate biased learning but also enhance the overall effectiveness of the model, driving more reliable and insightful results in Earth surface research. By carefully addressing these challenges, we can unlock the full potential of machine learning models in Earth surface research, leading to groundbreaking discoveries and innovative solutions to pressing environmental issues.
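The normalization step mentioned above can be as simple as z-scoring each input column, so that features measured in very different units contribute comparably (the feature values below are purely illustrative):

```python
import numpy as np

# Hypothetical input matrix: columns on very different scales
# (elevation in metres, slope as a fraction, rainfall in mm/yr).
X = np.array([
    [1200.0, 0.05,  800.0],
    [2400.0, 0.30, 1500.0],
    [ 300.0, 0.01,  400.0],
    [1800.0, 0.22, 1100.0],
])

# Z-score standardization: each column gets zero mean and unit variance,
# so no single feature dominates distance- or gradient-based learning.
mu = X.mean(axis=0)
sigma = X.std(axis=0)
X_std = (X - mu) / sigma
```

The same `mu` and `sigma` computed on the training set must be reused to transform validation and test data, or the evaluation will be subtly inconsistent.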

Assembling high-quality training datasets is crucial for the successful application of AI techniques in Earth surface research. However, there are several challenges researchers must overcome to ensure the effectiveness of AI models when integrated with physics-based models. 

 

  1. Imputation of Missing Data Values

a. Nature of Missing Data: Missing data in Earth surface research can be due to various reasons, including sensor malfunctions, data corruption, or gaps in data collection. The nature of the missing data can impact the choice of imputation method [1].

The task of imputing missing data values in Earth surface research presents various challenges across different domains. In the field of meteorology, missing data in weather station records due to equipment failures or human errors can complicate the training of AI models for weather forecasting or climate analysis. Imputing these missing values requires careful consideration of the temporal and spatial correlations in the data to maintain the accuracy of the AI-enhanced models [2]. In the area of remote sensing, gaps in satellite coverage or data corruption due to cloud cover or atmospheric interference can lead to incomplete datasets, making it challenging to train AI models for tasks like land cover classification or vegetation monitoring. Imputing these missing values requires advanced techniques that take into account the spatial context and the spectral characteristics of the observed features [3]. In the realm of hydrological modeling, incomplete or irregularly sampled datasets for river discharge or groundwater levels can pose challenges for training AI models to predict water resource availability or flood events. To address these issues, researchers must develop and apply suitable imputation methods that consider the underlying physical processes and the spatiotemporal dynamics of the hydrological system [4]. These examples emphasize the importance of addressing the challenges associated with missing data imputation when assembling training datasets for AI applications in Earth surface research.

1.     Little, R. J., & Rubin, D. B. (2019). Statistical analysis with missing data. John Wiley & Sons. DOI: 10.1002/9781119482260 

2.     Azur, M. J., Stuart, E. A., Frangakis, C., & Leaf, P. J. (2011). Multiple imputation by chained equations: what is it and how does it work?. International journal of methods in psychiatric research, 20(1), 40-49. DOI: 10.1002/mpr.329 

3.     Gómez, C., White, J. C., & Wulder, M. A. (2016). Optical remotely sensed time series data for land cover classification: A review. ISPRS Journal of Photogrammetry and Remote Sensing, 116, 55-72. DOI: 10.1016/j.isprsjprs.2016.03.008 

4.     Liu, Y., Gupta, H., Springer, E., & Wagener, T. (2008). Linking science with environmental decision making: Experiences from an integrated modeling approach to supporting sustainable water resources management. Environmental Modelling & Software, 23(7), 846-858. DOI: 10.1016/j.envsoft.2007.10.007
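As a minimal illustration of gap filling in a station record, the sketch below linearly interpolates missing values in a 1-D series. The temperature values are invented for illustration; real imputation of station data would also need gap-length limits, imputation flags, and the spatial correlations discussed above.

```python
import numpy as np

def interpolate_gaps(series):
    """Fill NaN gaps in a 1-D time series by linear interpolation.

    A minimal sketch: production pipelines would also cap the gap
    length and flag which values are imputed rather than observed.
    """
    series = np.asarray(series, dtype=float)
    idx = np.arange(series.size)
    missing = np.isnan(series)
    filled = series.copy()
    # np.interp linearly interpolates the missing positions from the
    # observed (index, value) pairs surrounding each gap.
    filled[missing] = np.interp(idx[missing], idx[~missing], series[~missing])
    return filled

# Daily temperatures (degrees C) with two sensor dropouts
temps = [12.0, np.nan, 14.0, 15.0, np.nan, 17.0]
print(interpolate_gaps(temps))  # [12. 13. 14. 15. 16. 17.]
```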

 

b. Imputation Techniques: Several methods can be used to impute missing data, such as mean or median imputation, regression-based imputation, or more advanced techniques like k-nearest neighbors and multiple imputation. The choice of imputation technique depends on the specific dataset, the type and extent of missing data, and the intended application of the AI model1.

The use of various imputation techniques has been demonstrated in different Earth surface research domains to address missing data challenges. In the field of climate science, mean or median imputation has been employed to fill in missing values in temperature and precipitation records, helping to maintain continuity in long-term climate datasets2. In the realm of air quality monitoring, regression-based imputation has been applied to estimate missing data in air pollutant concentrations, taking advantage of the relationships between pollutant levels and meteorological variables3. In the area of soil science, k-nearest neighbors imputation has been used to estimate missing soil property values by considering the similarity of soil samples in terms of spatial location and other environmental attributes4. In the domain of hydrogeology, multiple imputation techniques have been employed to address missing data in groundwater level measurements, leveraging the spatial and temporal dependencies in the data to generate more accurate imputations. These examples illustrate the application of diverse imputation techniques to tackle missing data issues in assembling training datasets for AI models in Earth surface research, highlighting the importance of selecting appropriate methods based on the specific dataset and application context.

1.     Little, R. J., & Rubin, D. B. (2019). Statistical analysis with missing data. John Wiley & Sons. DOI: 10.1002/9781119482260 

2.     McKinnon, K. A., Rhines, A., Tingley, M. P., & Huybers, P. (2016). The changing shape of Northern Hemisphere summer temperature distributions. Journal of Geophysical Research: Atmospheres, 121(15), 8849-8868. DOI: 10.1002/2016JD025292 

3.     Grange, S. K., & Carslaw, D. C. (2019). Using meteorological normalisation to detect interventions in air quality time series. Science of the Total Environment, 653, 578-588. DOI: 10.1016/j.scitotenv.2018.10.344

4.     Padarian, J., Minasny, B., & McBratney, A. B. (2019). Machine learning and soil sciences: a review aided by machine learning tools. Soil, 5(1), 83-99. DOI: 10.5194/soil-5-83-2019
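As a concrete sketch of two of the techniques above, the following numpy-only functions implement median imputation and a simplified k-nearest-neighbours imputation. The soil-property matrix is invented for illustration, and a production workflow would more likely use a library imputer; this sketch only shows the core logic.

```python
import numpy as np

def median_impute(X):
    """Replace NaNs in each column with that column's median."""
    X = np.array(X, dtype=float)
    for j in range(X.shape[1]):
        col = X[:, j]
        col[np.isnan(col)] = np.nanmedian(col)
    return X

def knn_impute(X, k=2):
    """Replace each NaN with the mean of that feature over the k rows
    closest in the observed features (a simplified kNN imputer)."""
    X = np.array(X, dtype=float)
    complete = ~np.isnan(X).any(axis=1)
    for i in np.where(~complete)[0]:
        miss = np.isnan(X[i])
        # Distance is computed only over the features present in row i
        d = np.linalg.norm(X[complete][:, ~miss] - X[i, ~miss], axis=1)
        nearest = X[complete][np.argsort(d)[:k]]
        X[i, miss] = nearest[:, miss].mean(axis=0)
    return X

# Hypothetical soil samples: pH and organic-carbon %, one value missing
soil = np.array([[6.1, 1.2],
                 [5.9, np.nan],
                 [7.4, 2.0],
                 [6.0, 1.1]])
print(knn_impute(soil, k=2))  # missing value becomes 1.15,
                              # the mean of the two nearest rows
```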

 

c. Impact on Model Performance: Imputed data can introduce uncertainties and biases into the dataset, which can impact the performance and reliability of AI models. Researchers must carefully assess the implications of imputed data on their models and consider alternative methods if necessary1.

The introduction of uncertainties and biases through imputed data can have varying effects on the performance and reliability of AI models in different Earth surface research domains. In the field of forest fire prediction, imputed data in vegetation indices or meteorological variables can lead to inaccuracies in AI-driven fire risk assessments, potentially affecting the effectiveness of wildfire management strategies2. In the domain of glacier monitoring, biases introduced by imputed data in satellite-derived surface elevation or mass balance measurements can impact the performance of AI models in predicting glacier retreat and its contribution to sea-level rise, with implications for coastal communities and infrastructure planning3. In the realm of hydrological modeling, uncertainties stemming from imputed data in precipitation or evapotranspiration records can affect AI-enhanced models' ability to accurately simulate water balance and runoff dynamics, potentially compromising water resource management decisions4. In the area of land cover change detection, inaccuracies in imputed remote sensing data can result in misclassification or overfitting of AI models, leading to unreliable predictions of land cover transitions and their impacts on ecosystems and biodiversity5. These examples emphasize the importance of carefully assessing the implications of imputed data on AI model performance in Earth surface research and considering alternative methods or additional validation steps when necessary to ensure the reliability and robustness of the models.

1.     Little, R. J., & Rubin, D. B. (2019). Statistical analysis with missing data. John Wiley & Sons. DOI: 10.1002/9781119482260 

2.     Chuvieco, E., Aguado, I., & Yebra, M. (2010). Integration of ecological and socio-economic factors to assess global vulnerability to wildfire. Global Ecology and Biogeography, 19(1), 62-73. DOI: 10.1111/geb.12095

3.     Dehecq, A., Gourmelen, N., Gardner, A. S., Brun, F., Goldberg, D., Nienow, P. W., ... & Trouvé, E. (2019). Twenty-first century glacier slowdown driven by mass loss in High Mountain Asia. Nature Geoscience, 12(1), 22-27. DOI: 10.1038/s41561-018-0271-9 

4.     Rakovec, O., Kumar, R., Attinger, S., & Samaniego, L. (2016). Improving the realism of hydrologic model functioning through multivariate parameter estimation. Water Resources Research, 52(10), 7779-7792. DOI: 10.1002/2016WR019430 

5.     Zhu, X. X., Tuia, D., Mou, L., Xia, G.-S., Zhang, L., Xu, F., & Fraundorfer, F. (2017). Deep learning in remote sensing: A comprehensive review and list of resources. IEEE Geoscience and Remote Sensing Magazine, 5(4), 8-36. DOI: 10.1109/MGRS.2017.2762307

 

2.     Quantity of Training Data

a. Data Requirements: AI models, particularly deep learning models, require large amounts of training data to perform well. However, Earth surface data may be limited in spatial or temporal resolution, leading to challenges in assembling sufficient training data1.

The limited availability of spatial or temporal resolution data in Earth surface research has presented challenges for training AI models across various domains. In the field of permafrost modeling, the scarcity of in-situ measurements of soil temperature, moisture, and ground ice content can limit the ability of AI models to provide accurate predictions when combined with physics-based approaches2. In the realm of oceanographic research, the sparsity of oceanographic observations, such as temperature, salinity, and current profiles, can hinder the development of AI-driven ocean circulation models and their ability to capture complex ocean dynamics3. In the area of geophysics, the limited availability of high-quality seismic data can restrict the training of AI models for tasks like earthquake detection, event localization, and seismic hazard assessment4. In the domain of land surface modeling, the scarcity of high-resolution, long-term datasets for land surface properties, such as soil moisture, land cover, and vegetation dynamics, can pose challenges for training AI models that aim to improve the representation of land surface processes in Earth system models5. These examples highlight the importance of addressing the challenges associated with the quantity of training data when developing AI models for Earth surface research to ensure their effectiveness and reliability.

1.     Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. MIT Press. URL 

2.     Langer, M., Westermann, S., Muster, S., Piel, K., & Boike, J. (2011). The surface energy balance of a polygonal tundra site in northern Siberia—Part 2: Winter. The Cryosphere, 5(2), 509-524. DOI: 10.5194/tc-5-509-2011 

3.     Raghukumar, K., Edwards, C. A., Goebel, J., Broquet, G., Veneziani, M., Moore, A. M., & Zehr, J. P. (2017). Impact of assimilating physical oceanographic data on modeled ecosystem dynamics in the California Current System. Progress in Oceanography, 151, 174-200. DOI: 10.1016/j.pocean.2017.01.004

4.     Ross, Z. E., Meier, M. A., & Hauksson, E. (2018). P wave arrival picking and first-motion polarity determination with deep learning. Journal of Geophysical Research: Solid Earth, 123(6), 5120-5129. DOI: 10.1029/2017JB015251 

5.     Koster, R. D., & Suarez, M. J. (2003). Impact of land surface initialization on seasonal precipitation and temperature prediction. Journal of Hydrometeorology, 4(3), 408-423. DOI: 10.1175/1525-7541(2003)004<0408:IOLSIO>2.0.CO;2 

 

b. Transfer Learning and Data Augmentation: Techniques such as transfer learning, where AI models are pretrained on similar tasks or data, can help alleviate the need for extensive training data1. Data augmentation, which involves artificially increasing the size of the dataset through techniques like rotation, scaling, or flipping, can also be used to boost the training dataset's size2.

The use of transfer learning and data augmentation techniques has been demonstrated in various Earth surface research domains to overcome challenges related to limited training data. In the field of remote sensing, transfer learning has been applied to train AI models for land cover classification and change detection tasks by leveraging pretrained models from similar datasets or regions, thereby reducing the need for extensive labeled data3. In the domain of atmospheric science, data augmentation techniques such as rotation, scaling, or flipping have been employed to increase the size of training datasets for cloud classification and weather pattern recognition tasks, enhancing AI model performance and generalizability4. In the area of geophysics, transfer learning has been used to improve the training of AI models for seismic event detection and classification by leveraging models pretrained on large, diverse seismic datasets from various regions and tectonic settings5. In the realm of ecology, data augmentation methods have been applied to artificially expand datasets of species distribution or habitat suitability, boosting the training and predictive performance of AI models for biodiversity assessment and conservation planning6. These examples illustrate the potential of transfer learning and data augmentation techniques to address challenges related to limited training data in AI applications for Earth surface research, ultimately improving the effectiveness and reliability of AI models in these domains.

1.     Yosinski, J., Clune, J., Bengio, Y., & Lipson, H. (2014). How transferable are features in deep neural networks? In Advances in neural information processing systems (pp. 3320-3328). URL 

2.     Shorten, C., & Khoshgoftaar, T. M. (2019). A survey on image data augmentation for deep learning. Journal of Big Data, 6(1), 60. DOI: 10.1186/s40537-019-0197-0 

3.     Jean, N., Burke, M., Xie, M., Davis, W. M., Lobell, D. B., & Ermon, S. (2016). Combining satellite imagery and machine learning to predict poverty. Science, 353(6301), 790-794. DOI: 10.1126/science.aaf7894 

4.     LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436-444. DOI: 10.1038/nature14539 

5.     Perol, T., Gharbi, M., & Denolle, M. (2018). Convolutional neural network for earthquake detection and location. Science Advances, 4(2), e1700578. DOI: 10.1126/sciadv.1700578 

6.     Wäldchen, J., & Mäder, P. (2018). Machine learning for image based species identification. Methods in Ecology and Evolution, 9(11), 2216-2225. DOI: 10.1111/2041-210X.13075
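The rotation-and-flip augmentation mentioned above can be sketched in a few lines, assuming square image patches (e.g. remote-sensing tiles) in which orientation carries no label information:

```python
import numpy as np

def augment(patch):
    """Generate the 8 dihedral variants (4 rotations x optional
    left-right flip) of a square image patch -- a standard augmentation
    for overhead imagery, where a rotated scene is equally valid."""
    variants = []
    for flipped in (patch, np.fliplr(patch)):
        for k in range(4):
            variants.append(np.rot90(flipped, k))
    return variants

patch = np.arange(9).reshape(3, 3)
print(len(augment(patch)))  # 8 augmented views per original patch
```

Note that these geometric transforms are only safe when the label is invariant to them; flipping a time axis or a gravity-aligned profile would corrupt the training signal rather than enlarge it.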

 

c. Balancing Data Quantity and Quality: Researchers must strike a balance between the quantity of training data and the quality of the data1. Assembling larger training datasets may introduce noise or errors that can impact AI model performance.

Striking the right balance between data quantity and quality has been a challenge in various Earth surface research domains when developing AI models. In the field of hydrology, increasing the size of training datasets for AI-driven flood prediction models may introduce noise from heterogeneous data sources, such as river discharge measurements from different monitoring networks, affecting the model's accuracy and reliability2. In the domain of land surface modeling, expanding training datasets for AI models by incorporating more remote sensing data may introduce errors due to differences in sensor characteristics, image processing techniques, or atmospheric corrections, potentially impacting the models' ability to accurately simulate land surface processes3. In the area of air quality modeling, combining large datasets from various air pollution monitoring networks to train AI models may introduce biases and uncertainties due to varying measurement techniques, instrument calibrations, or local emission factors, affecting the model's performance in predicting pollutant concentrations4. In the realm of natural hazard assessment, assembling larger training datasets for AI-driven landslide susceptibility models may involve combining data from multiple landslide inventories with varying levels of detail, spatial accuracy, or temporal coverage, potentially impacting the model's ability to provide reliable hazard predictions5. These examples emphasize the importance of carefully balancing data quantity and quality when developing AI models for Earth surface research, ensuring that the assembled training datasets support robust and accurate model performance.

1.     Provost, F., & Fawcett, T. (2013). Data Science for Business: What you need to know about data mining and data-analytic thinking. O'Reilly Media, Inc. URL

2.     Taormina, R., & Chau, K. W. (2015). Data-driven input variable selection for rainfall–runoff modeling using binary-coded particle swarm optimization and extreme learning machines. Journal of Hydroinformatics, 17(5), 748-759. DOI: 10.2166/hydro.2015.058

3.     Zhang, H. K., Roy, D. P., Yan, L., Li, Z., Huang, H., Vermote, E., ... & Kovalskyy, V. (2018). Characterization of Sentinel-2A and Landsat-8 top of atmosphere, surface, and nadir BRDF adjusted reflectance and NDVI differences. Remote Sensing of Environment, 215, 482-494. DOI: 10.1016/j.rse.2018.06.012

4.     Grange, S. K., & Carslaw, D. C. (2019). Using meteorological normalisation to detect interventions in air quality time series. Science of the Total Environment, 653, 578-588. DOI: 10.1016/j.scitotenv.2018.10.344

5.     Hong, H., Pourghasemi, H. R., & Pourtaghi, Z. S. (2016). Landslide susceptibility assessment in Lianhua County (China): A comparison between a random forest data mining technique and bivariate and multivariate statistical models. Geomorphology, 259, 105-118. DOI: 10.1016/j.geomorph.2016.02.012

 

3.     Ranges of Inputs

a. Input Distribution: AI models are sensitive to the distribution of input data1. Researchers must ensure that the training dataset covers a wide range of conditions and scenarios to improve the model's generalizability.

Ensuring a wide range of conditions and scenarios in the training dataset is crucial for the generalizability of AI models across various Earth surface research domains. In the field of climate science, training AI models for extreme weather event prediction may require a comprehensive dataset that includes various climate conditions, such as El Niño and La Niña events, to improve the model's ability to predict extreme events under different climate states2. In the domain of geomorphology, AI-driven models for predicting coastal erosion patterns need to consider various factors, such as wave climate, sediment supply, and human interventions, requiring a diverse dataset that encompasses these varying conditions3. In the area of hydrological modeling, AI models for simulating groundwater flow and contaminant transport should be trained on datasets that cover a wide range of hydrogeological settings, recharge conditions, and pollution sources to enhance their generalizability across different aquifer systems4. In the realm of ecological modeling, AI-driven models for predicting species distribution or habitat suitability need to be trained on datasets that capture the full range of environmental conditions, such as temperature, precipitation, and land cover, to improve their ability to predict species responses to climate change and land-use change scenarios5. These examples highlight the importance of addressing the challenges associated with input distribution when developing AI models for Earth surface research, ensuring that the training datasets adequately cover a wide range of conditions and scenarios to enhance model generalizability and reliability.

1.     Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press. URL 

2.     McGovern, A., Lagerquist, R., Gagne, D. J., Jergensen, G. E., Elmore, K. L., Homeyer, C. R., & Smith, T. (2017). Using artificial intelligence to improve real-time decision-making for high-impact weather. Bulletin of the American Meteorological Society, 98(10), 2073-2090. DOI: 10.1175/BAMS-D-16-0123.1

3.     Harley, M. D., Turner, I. L., Short, A. D., & Ranasinghe, R. (2011). A reevaluation of coastal embayment rotation: The dominance of cross-shore versus alongshore sediment transport processes, Collaroy-Narrabeen Beach, southeast Australia. Journal of Geophysical Research: Earth Surface, 116(F4). DOI: 10.1029/2011JF002024 

4.     Dhar, A., & Datta, B. (2007). Saltwater intrusion management of coastal aquifers. I. Linked simulation-optimization. Journal of Hydrologic Engineering, 12(6), 605-616. DOI: 10.1061/(ASCE)1084-0699(2007)12:6(605)

5.     Young, N., Carter, L., & Evangelista, P. (2017). A MaxEnt model v3.4.1 tutorial (ArcGIS v10) – modeling species geographic distributions. DOI: 10.13140/RG.2.2.29580.51845/1

 

b. Feature Scaling and Normalization: Inputs with different scales or units can impact the performance of AI models1. Researchers need to preprocess the data using techniques like feature scaling or normalization to ensure that all input features have comparable ranges.

The use of feature scaling and normalization techniques has been demonstrated in various Earth surface research domains to address challenges related to varying input scales and units in AI models. In the field of climate science, normalizing temperature and precipitation data, which have different units and ranges, is crucial when training AI models for predicting seasonal climate anomalies or long-term climate trends2. In the domain of remote sensing, feature scaling is essential when combining multispectral and hyperspectral data from different sensors, ensuring that the input features have comparable ranges and contribute equally to the AI-driven classification or regression tasks3. In the area of hydrological modeling, normalizing inputs like precipitation, temperature, soil moisture, and land cover data is vital when training AI models to simulate surface runoff, evapotranspiration, or groundwater recharge processes4. In the realm of geophysics, feature scaling is necessary when combining seismic data from different networks or instruments with varying amplitude ranges and frequency content to train AI models for earthquake detection, event classification, and seismic hazard assessment5. These examples illustrate the importance of applying feature scaling and normalization techniques to preprocess input data when developing AI models for Earth surface research, ensuring that all input features have comparable ranges and contribute appropriately to the model's performance.

1.     Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press. URL 

2.     Kim, H. M., Webster, P. J., & Curry, J. A. (2012). Seasonal prediction skill of ECMWF System 4 and NCEP CFSv2 retrospective forecast for the Northern Hemisphere Winter. Climate Dynamics, 39(12), 2957-2973. DOI: 10.1007/s00382-012-1364-6

3.     Zhu, X. X., Tuia, D., Mou, L., Xia, G.-S., Zhang, L., Xu, F., & Fraundorfer, F. (2017). Deep learning in remote sensing: A comprehensive review and list of resources. IEEE Geoscience and Remote Sensing Magazine, 5(4), 8-36. DOI: 10.1109/MGRS.2017.2762307

4.     Govindaraju, R. S. (2000). Artificial neural networks in hydrology. I: Preliminary concepts. Journal of Hydrologic Engineering, 5(2), 115-123. DOI: 10.1061/(ASCE)1084-0699(2000)5:2(115)

5.     Perol, T., Gharbi, M., & Denolle, M. (2018). Convolutional neural network for earthquake detection and location. Science Advances, 4(2), e1700578. DOI: 10.1126/sciadv.1700578 
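The two preprocessing choices described above, z-score standardization and min-max scaling, can be sketched in a few lines of numpy; the precipitation and temperature values below are invented for illustration:

```python
import numpy as np

def zscore(X):
    """Standardize each column to zero mean and unit variance."""
    X = np.asarray(X, dtype=float)
    return (X - X.mean(axis=0)) / X.std(axis=0)

def minmax(X):
    """Rescale each column to the [0, 1] range."""
    X = np.asarray(X, dtype=float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / (hi - lo)

# Monthly precipitation (mm) and mean temperature (degrees C):
# very different units and ranges before scaling
X = np.array([[120.0, 18.5],
              [340.0, 22.1],
              [ 60.0, 15.0]])
X_scaled = minmax(X)  # both columns now span 0..1
```

One caveat worth keeping in mind: the scaling statistics should be computed on the training split only and then applied unchanged to validation and test data, otherwise information leaks across splits and performance estimates become optimistic.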

 

c. Extrapolation and Generalization: AI models may struggle to perform well when extrapolating beyond the range of inputs in the training dataset1. Ensuring that the dataset contains a diverse range of inputs is essential for the AI model's ability to generalize and accurately predict Earth surface processes across different conditions.

Examples of the challenges faced by AI models in Earth surface research when extrapolating beyond the range of inputs in the training dataset include the following: In the field of climate science, AI models trained to predict extreme weather events might struggle to make accurate predictions for previously unseen weather patterns if the training dataset does not cover a wide range of atmospheric conditions2. In the domain of hydrology, AI-driven models for flood forecasting may fail to predict extreme flood events accurately if the training data does not include a diverse set of historical flood events with varying magnitudes and spatial extents3. In the area of ecology, AI models for predicting species distribution under climate change scenarios may produce unreliable projections if the training dataset does not capture the full range of potential climate conditions and species responses4. In the realm of remote sensing, AI models for land cover classification may suffer from reduced accuracy when applied to new regions with different land cover types or spectral characteristics not represented in the training dataset5. These examples highlight the importance of including a diverse range of inputs in the training dataset when developing AI models for Earth surface research, ensuring their ability to generalize and accurately predict Earth surface processes across different conditions.

1.     Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press. URL 

2.     McGovern, A., Lagerquist, R., Gagne, D. J., Jergensen, G. E., Elmore, K. L., Homeyer, C. R., & Smith, T. (2017). Using artificial intelligence to improve real-time decision-making for high-impact weather. Bulletin of the American Meteorological Society, 98(10), 2073-2090. DOI: 10.1175/BAMS-D-16-0123.1

3.     Kratzert, F., Klotz, D., Brenner, C., Schulz, K., & Herrnegger, M. (2018). Rainfall–runoff modelling using long short-term memory (LSTM) networks. Hydrology and Earth System Sciences, 22(11), 6005-6022. DOI: 10.5194/hess-22-6005-2018 

4.     Thuiller, W., Georges, D., Engler, R., & Breiner, F. (2019). Package ‘biomod2’: Ensemble platform for species distribution modeling. R package version 3.3-7.1. URL 

5.     Poblete, T., Navas-Cortes, J. A., Camino, C., Calderon, R., Hornero, A., Gonzalez-Dugo, V., ... & Zarco-Tejada, P. J. (2021). Discriminating Xylella fastidiosa from Verticillium dahliae infections in olive trees using thermal- and hyperspectral-based plant traits. ISPRS Journal of Photogrammetry and Remote Sensing, 179, 133-144. DOI: 10.1016/j.isprsjprs.2021.07.014
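A practical safeguard that follows from the discussion above is to check, before trusting a prediction, whether a new sample lies outside the range of the training inputs. A minimal per-feature sketch (function name and data are illustrative) looks like this:

```python
import numpy as np

def extrapolation_mask(X_train, X_new):
    """Flag rows of X_new with any feature outside the training range.

    A coarse per-feature check: it will not catch points that fall
    inside the bounding box but outside the joint training
    distribution, so it is a first filter, not a full guarantee.
    """
    lo = X_train.min(axis=0)
    hi = X_train.max(axis=0)
    return ((X_new < lo) | (X_new > hi)).any(axis=1)

train = np.array([[0.0, 10.0],
                  [5.0, 30.0],
                  [2.0, 20.0]])
new = np.array([[1.0, 15.0],   # inside the training range
                [9.0, 25.0]])  # first feature extrapolates
print(extrapolation_mask(train, new))  # [False  True]
```

Flagged samples can then be routed to a fallback model, reported with wider uncertainty bounds, or excluded from downstream decisions.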

 

Current AI Developments of Interest for Earth Surface Dynamics Modelers

Question 3: What developments in AI should Earth Surface Dynamics modelers be taking particular notice of, right now?

 

Answer 3: 

Earth Surface Dynamics modelers should be particularly attentive to the advancements in AI that enable forecasting of future Earth surface phenomena and estimation of uncertainty in Earth surface processes. Groundbreaking deep learning techniques like Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM) networks, and Graph Neural Networks (GNNs) offer unparalleled capabilities to decipher complex patterns and relationships in vast, high-dimensional datasets, which can help modelers predict future events and understand the associated uncertainties.

Furthermore, the integration of physics-based knowledge with AI models—known as physics-informed machine learning or hybrid modeling—has unlocked new possibilities for more accurate and interpretable predictions that respect the fundamental laws governing Earth surface processes. This synergy between data-driven approaches and domain-specific expertise is truly transformative.

Unsupervised and semi-supervised learning techniques are also gaining traction, as they tackle the ongoing challenge of limited labeled data in Earth surface research. By leveraging the structure of unlabeled data or making use of a small amount of labeled data, these techniques lay the groundwork for more robust models that excel at generalizing to new scenarios and estimating uncertainties in their predictions.

Finally, AI's rise in Earth surface research has sparked innovations in data processing and fusion, allowing for the seamless integration of multi-source, multi-temporal, and multi-scale data. This breakthrough unlocks the full potential of diverse Earth observation datasets, strengthening the performance of AI models and improving their ability to forecast future Earth surface phenomena.

By embracing these cutting-edge developments and incorporating them into their research, Earth Surface Dynamics modelers can revolutionize the field, foster interdisciplinary collaboration, and contribute to innovative solutions for our planet's most pressing environmental challenges.

Earth Surface Dynamics modelers should pay close attention to several emerging AI developments that could significantly impact and enhance their modeling capabilities. Some of these promising developments include:

 

  1. Deep Learning Architectures for Earth Surface Dynamics: These are advanced AI techniques, specifically deep learning models, designed for researchers and modelers who aim to be at the forefront of scientific developments in Earth surface dynamics1. Deep learning architectures, such as Convolutional Neural Networks (CNNs)2, Recurrent Neural Networks (RNNs)3, Long Short-Term Memory (LSTM) networks4, and Graph Neural Networks (GNNs)5, among others, are capable of capturing complex patterns and relationships in large datasets. These architectures have been increasingly employed in Earth surface research to improve the representation, understanding, and prediction of various Earth surface processes, such as climate dynamics6, hydrological modeling6, geomorphology7, and ecological modeling8. By adopting these cutting-edge deep learning techniques, modelers can enhance the performance of their simulations, facilitate interdisciplinary collaboration, and contribute to the development of innovative solutions for pressing environmental challenges.

1.     LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436-444. DOI: 10.1038/nature14539

2.     Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In Advances in neural information processing systems (pp. 1097-1105). URL

3.     Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural computation, 9(8), 1735-1780. DOI: 10.1162/neco.1997.9.8.1735

4.     Kratzert, F., Klotz, D., Brenner, C., Schulz, K., & Herrnegger, M. (2018). Rainfall–runoff modelling using long short-term memory (LSTM) networks. Hydrology and Earth System Sciences, 22(11), 6005-6022. DOI: 10.5194/hess-22-6005-2018

5.     Bronstein, M. M., Bruna, J., LeCun, Y., Szlam, A., & Vandergheynst, P. (2017). Geometric deep learning: going beyond Euclidean data. IEEE Signal Processing Magazine, 34(4), 18-42. DOI: 10.1109/MSP.2017.2693418

6.     Reichstein, M., Camps-Valls, G., Stevens, B., Jung, M., Denzler, J., Carvalhais, N., & Prabhat. (2019). Deep learning and process understanding for data-driven Earth system science. Nature, 566(7743), 195-204. DOI: 10.1038/s41586-019-0912-1

7.     Pelletier, J. D. (2018). A robust, two-parameter method for the extraction of drainage networks from high-resolution digital elevation models (DEMs): Evaluation using synthetic and real-world DEMs. Water Resources Research, 54(1), 75-89. DOI: 10.1002/2017WR021085

8.     Schmidt, J., Evans, I. S., & Brinkmann, J. (2018). Comparison of polynomial models for land surface curvature calculation. International Journal of Geographical Information Science, 32(2), 324-352. DOI: 10.1080/13658810310001596058

 

a. Convolutional Neural Networks (CNNs): These models are particularly effective in processing grid-based data and are well-suited for tasks like image classification, segmentation, and object detection, which can be applied to Earth surface dynamics tasks such as land cover classification and terrain analysis1.

Convolutional Neural Networks (CNNs) are a class of deep learning architectures specifically designed to analyze and process grid-based data, such as images, by utilizing convolutional layers to recognize local patterns and features within the input2. CNNs have proven to be highly effective in tasks like image classification, segmentation, and object detection due to their ability to learn hierarchical representations and spatial invariance. In the context of Earth surface dynamics, CNNs have found numerous applications, providing valuable insights and improvements in various research domains. For example, in the field of remote sensing, CNNs have been employed for land cover classification, enabling the automated identification of different land use types, such as forests, urban areas, and agriculture, from multispectral or hyperspectral imagery3. In terrain analysis, CNNs have been used to extract geomorphological features, such as drainage networks, slope patterns, and geological structures, from high-resolution Digital Elevation Models (DEMs)4. These examples illustrate the versatility and effectiveness of Convolutional Neural Networks in addressing a wide range of Earth surface dynamics tasks, offering significant potential for enhancing the understanding and prediction of complex environmental processes.

1.     LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436-444. DOI: 10.1038/nature14539 

2.     Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In Advances in neural information processing systems (pp. 1097-1105). URL 

3.     Zhang, L., Zhang, L., & Du, B. (2016). Deep learning for remote sensing data: A technical tutorial on the state of the art. IEEE Geoscience and Remote Sensing Magazine, 4(2), 22-40. DOI: 10.1109/MGRS.2016.2540798

4.     Pelletier, J. D. (2018). A robust, two-parameter method for the extraction of drainage networks from high-resolution digital elevation models (DEMs): Evaluation using synthetic and real-world DEMs. Water Resources Research, 54(1), 75-89. DOI: 10.1002/2017WR021085
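To make concrete what a convolutional layer computes, here is a minimal "valid"-mode 2-D convolution in plain numpy, applied with a fixed vertical-edge kernel to a toy elevation grid. A trained CNN would learn many such kernels from data rather than using a hand-picked one, and the DEM values below are invented for illustration:

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D cross-correlation -- the core operation of a CNN
    layer, here with one fixed kernel instead of learned weights."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Weighted sum of the kernel-sized window at (i, j)
            out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
    return out

# A vertical-edge kernel responds where elevation jumps east-west,
# e.g. a scarp in a small DEM tile
dem = np.array([[1, 1, 5, 5],
                [1, 1, 5, 5],
                [1, 1, 5, 5]], dtype=float)
edge = np.array([[-1.0, 1.0]])
print(conv2d(dem, edge))  # every row reads [0. 4. 0.]: the response
                          # peaks exactly at the scarp
```

Stacking many learned kernels, nonlinearities, and pooling layers on top of this operation is what lets CNNs build up the hierarchical spatial features used for land cover classification and terrain analysis.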

 

b. Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) Networks: These models excel in processing sequential data and can be applied to time series analysis, which is essential for Earth surface dynamics modeling in areas like hydrology, climate, and erosion processes1.

RNNs are a class of deep learning architectures designed to handle sequential data by maintaining a hidden state that can capture information from previous time steps in the sequence2. LSTM networks are a specialized type of RNN that incorporates memory cells and gating mechanisms, allowing them to effectively learn and remember long-term dependencies in the data3. Both RNNs and LSTM networks excel in processing time series data, making them well-suited for Earth surface dynamics modeling where temporal analysis is critical.

For example, in the field of hydrology, RNNs and LSTM networks have been employed to model and predict streamflow, groundwater levels, and precipitation, providing valuable insights for water resource management and flood forecasting4. In the domain of climate science, these models have been used to analyze and predict climate variables, such as temperature, precipitation, and sea-level pressure, contributing to a better understanding of climate dynamics and the development of more accurate climate projections5. In the area of geomorphology, RNNs and LSTM networks have been applied to model erosion processes, sediment transport, and landscape evolution, taking into account the temporal patterns of factors like precipitation, runoff, and vegetation cover. These examples demonstrate the potential of Recurrent Neural Networks and Long Short-Term Memory Networks in addressing a wide range of time series analysis tasks in Earth surface dynamics, enabling researchers to better understand and predict complex environmental processes that evolve over time.

1.     Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press. URL

2.     Elman, J. L. (1990). Finding structure in time. Cognitive science, 14(2), 179-211. DOI: 10.1207/s15516709cog1402_1

3.     Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural computation, 9(8), 1735-1780. DOI: 10.1162/neco.1997.9.8.1735

4.     Kratzert, F., Klotz, D., Brenner, C., Schulz, K., & Herrnegger, M. (2018). Rainfall–runoff modelling using long short-term memory (LSTM) networks. Hydrology and Earth System Sciences, 22(11), 6005-6022. DOI: 10.5194/hess-22-6005-2018

5.     Lusch, B., Kutz, J. N., & Brunton, S. L. (2018). Deep learning for universal linear embeddings of nonlinear dynamics. Nature Communications, 9(1), 4950. DOI: 10.1038/s41467-018-07210-0
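As a concrete sketch of the hydrology use case, here is a minimal LSTM in PyTorch that maps a window of daily forcings to a next-day streamflow value, in the spirit of Kratzert et al. (2018). The feature count, window length, and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

# LSTM that reads a 30-day window of forcings (e.g. precipitation, temperature)
# and predicts next-day streamflow. All sizes here are illustrative.
class StreamflowLSTM(nn.Module):
    def __init__(self, n_features=2, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # forecast from the last hidden state

model = StreamflowLSTM()
forcings = torch.randn(16, 30, 2)          # 16 basins, 30-day windows, 2 forcings
pred = model(forcings)
print(pred.shape)                           # torch.Size([16, 1])
```

The gating inside `nn.LSTM` is what lets the model carry information like antecedent soil moisture across many time steps, which a plain feed-forward network on the same window cannot do as naturally.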

 

c. Graph Neural Networks (GNNs): GNNs are a class of deep learning architectures designed to process and analyze data that can be represented as graphs, that is, structures composed of nodes connected by edges2. Because they can capture complex relationships and dependencies in irregularly structured data, such as geospatial networks or geological formations, GNNs are particularly valuable for Earth surface dynamics modeling, where data is often spatially or temporally connected in non-grid formats1.

For example, in the field of hydrology, GNNs have been employed to model river networks and simulate the flow of water and the transport of sediments, nutrients, or pollutants, providing valuable insights for water resource management and environmental monitoring3. In the domain of transportation and infrastructure, GNNs have been used to analyze and predict traffic flows, road conditions, and the impacts of extreme weather events on transportation networks, informing urban planning and emergency response strategies4. In the realm of geology and geophysics, GNNs have been applied to model the complex spatial relationships between geological formations, faults, and mineral deposits, enabling more accurate predictions of natural resources and contributing to the development of more sustainable extraction methods5. In the area of ecology, GNNs have been used to model ecological networks, such as food webs, species interaction networks, or habitat connectivity, supporting biodiversity conservation efforts and ecosystem management6. These examples illustrate the potential of Graph Neural Networks in addressing a wide range of Earth surface dynamics tasks involving irregularly structured data, offering significant opportunities for enhancing the understanding and prediction of complex environmental processes.

1.     Bronstein, M. M., Bruna, J., LeCun, Y., Szlam, A., & Vandergheynst, P. (2017). Geometric deep learning: going beyond Euclidean data. IEEE Signal Processing Magazine, 34(4), 18-42. DOI: 10.1109/MSP.2017.2693418

2.     Scarselli, F., Gori, M., Tsoi, A. C., Hagenbuchner, M., & Monfardini, G. (2009). The graph neural network model. IEEE Transactions on Neural Networks, 20(1), 61-80. DOI: 10.1109/TNN.2008.2005605

3.     Pai, M. M. M., Mehrotra, V., Aiyar, S., Verma, U., & Pai, R. M. (2019). Automatic segmentation of river and land in SAR images: A deep learning approach. In 2019 IEEE Second International Conference on Artificial Intelligence and Knowledge Engineering (AIKE) (pp. 15-20). DOI: 10.1109/AIKE.2019.00011

4.     Lv, Y., Duan, Y., Kang, W., Li, Z., & Wang, F.-Y. (2015). Traffic flow prediction with big data: A deep learning approach. IEEE Transactions on Intelligent Transportation Systems, 16(2), 865-873. DOI: 10.1109/TITS.2014.2345663

5.     Alves, R., Trifan, A., & Cetinic, E. (2023). Graph Neural Networks for Geoscientific Modeling: State of the Art and Future Directions. arXiv preprint arXiv:2302.01018. DOI: 10.48550/arXiv.2302.01018
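The aggregation step at the heart of a GNN can be shown in a few lines of NumPy. Below is one GCN-style message-passing layer applied to a toy river-network graph; the graph, node features, and weights are all synthetic assumptions made purely to illustrate the mechanics.

```python
import numpy as np

# One round of GCN-style message passing on a toy river network.
# Node features might represent local runoff or sediment load.
def gcn_layer(adj, features, weights):
    a_hat = adj + np.eye(adj.shape[0])            # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt      # symmetric normalization
    return np.maximum(a_norm @ features @ weights, 0.0)  # aggregate, transform, ReLU

# 4 nodes at a small confluence: tributaries 0 and 1 join node 2, which feeds node 3
adj = np.array([[0, 0, 1, 0],
                [0, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))     # 3 features per node
w = rng.normal(size=(3, 8))     # learnable weights (random here, trained in practice)
h = gcn_layer(adj, x, w)
print(h.shape)                   # (4, 8)
```

Each node's new representation mixes its own features with those of its neighbors, so stacking a few such layers lets information propagate along the network, exactly the property that makes GNNs suited to river networks, fault systems, and ecological webs.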
 

  2. Transfer Learning:

Transfer learning is a powerful AI technique that enables modelers working on cutting-edge scientific developments in Earth surface dynamics to leverage pre-trained neural networks or models, which have already learned relevant features or patterns from similar tasks or datasets1. By adapting these pre-trained models to new, related tasks or problems, transfer learning can significantly reduce the time, effort, and data requirements for training new models while maintaining or even improving their performance.

In the context of Earth surface dynamics, transfer learning has been applied across various domains to tackle complex problems. For example, in remote sensing, researchers can use pre-trained CNNs, initially trained on large-scale image datasets like ImageNet, to classify or segment land cover types by fine-tuning the model on a smaller, domain-specific dataset2. In climate science, transfer learning has been employed to adapt models pre-trained on global climate simulations to regional-scale climate predictions, allowing for more accurate and efficient downscaling of climate projections3. In hydrology, transfer learning can be used to adapt models trained on data-rich river basins to predict streamflow in data-scarce basins, overcoming the challenges of limited data availability4. In geophysics, transfer learning can be applied to pre-trained models for seismic event detection, enabling researchers to quickly adapt the models to new regions or geological contexts5.

1.     Pan, S. J., & Yang, Q. (2010). A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10), 1345-1359. DOI: 10.1109/TKDE.2009.191 

2.     Zhu, X. X., Tuia, D., Mou, L., Xia, G. S., Zhang, L., Xu, F., & Fraundorfer, F. (2017). Deep learning in remote sensing: A comprehensive review and list of resources. IEEE Geoscience and Remote Sensing Magazine, 5(4), 8-36. DOI: 10.1109/MGRS.2017.2762307 

3.     Vandal, T., Kodra, E., & Ganguly, A. R. (2017). Intercomparison of machine learning methods for statistical downscaling: the case of daily and extreme precipitation. Theoretical and Applied Climatology, 128(1-2), 337-353. DOI: 10.1007/s00704-015-1705-2 

4.     Gupta, H. V., Sorooshian, S., & Yapo, P. O. (1998). Toward improved calibration of hydrologic models: Multiple and noncommensurable measures of information. Water Resources Research, 34(4), 751-763. DOI: 10.1029/97WR03495

5.     Perol, T., Gharbi, M., & Denolle, M. (2018). Convolutional neural network for earthquake detection and location. Science Advances, 4(2), e1700578. DOI: 10.1126/sciadv.1700578 

6.     Sun, X., Wang, B., Wang, Z., Li, H., Li, H., & Fu, K. (2021). Research progress on few-shot learning for remote sensing image interpretation. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 14, 2387-2402. DOI: 10.1109/JSTARS.2021.3052869
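The data-rich-to-data-scarce basin idea can be sketched directly in PyTorch: copy the weights of a model trained where data is abundant, then fine-tune only a small head on the scarce target data. The network shape and the basin framing are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Transfer-learning sketch: reuse the feature layers of a model trained on a
# data-rich task (basin A) to initialize a model for a data-scarce task
# (basin B), then fine-tune only the small task head. Sizes are illustrative.
def make_model(n_inputs=6, hidden=32):
    return nn.Sequential(
        nn.Linear(n_inputs, hidden), nn.ReLU(),   # shared "feature" layers
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, 1),                     # task-specific head
    )

source = make_model()
# ... imagine `source` being trained here on abundant basin-A data ...

target = make_model()
target.load_state_dict(source.state_dict())       # transfer the learned weights
for layer in list(target)[:-1]:                   # freeze everything but the head
    for p in layer.parameters():
        p.requires_grad = False

n_trainable = sum(p.numel() for p in target.parameters() if p.requires_grad)
print(n_trainable)   # only the final Linear(32 -> 1): 33 parameters
```

Because only 33 parameters remain trainable, the target-basin fine-tuning needs far less data than training from scratch, which is the core appeal of transfer learning in data-scarce settings.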

 

a. Pretrained Models: Pretrained models are AI models, particularly deep learning architectures, that have already undergone extensive training on similar tasks or large-scale datasets1. Because they have already learned generalizable features or patterns, they can be fine-tuned or adapted to specific tasks in Earth surface dynamics modeling, saving time and computational resources, improving performance, and helping overcome challenges related to limited training data.

For example, in remote sensing applications, pretrained CNNs, originally trained on massive image datasets such as ImageNet, can be fine-tuned for land cover classification or change detection tasks, significantly reducing the need for large-scale labeled datasets specific to Earth surface dynamics2. In ecological modeling, pretrained models, such as those trained on species distribution or habitat suitability data, can be adapted to predict the distribution of new or related species, overcoming challenges related to data scarcity in certain regions or for specific species3. In climate science, pretrained models developed for global climate simulations can be fine-tuned for regional climate predictions, enabling researchers to generate more accurate and localized climate projections without starting the model training from scratch4.

1.     Pan, S. J., & Yang, Q. (2010). A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10), 1345-1359. DOI: 10.1109/TKDE.2009.191

2.     Zhu, X. X., Tuia, D., Mou, L., Xia, G. S., Zhang, L., Xu, F., & Fraundorfer, F. (2017). Deep learning in remote sensing: A comprehensive review and list of resources. IEEE Geoscience and Remote Sensing Magazine, 5(4), 8-36. DOI: 10.1109/MGRS.2017.2762307

3.     Fithian, W., Elith, J., Hastie, T., & Keith, D. A. (2015). Bias correction in species distribution models: pooling survey and collection data for multiple species. Methods in Ecology and Evolution, 6(4), 424-438. DOI: 10.1111/2041-210X.12242

4.     Vandal, T., Kodra, E., & Ganguly, A. R. (2017). Intercomparison of machine learning methods for statistical downscaling: the case of daily and extreme precipitation. Theoretical and Applied Climatology, 128(1-2), 337-353. DOI: 10.1007/s00704-015-1705-2
 

b. Domain Adaptation: Domain adaptation refers to a set of techniques for adapting AI models, especially deep learning architectures, that have been trained on one domain or dataset so that they perform well on a related but different domain or dataset1. By leveraging domain adaptation, modelers working in Earth surface dynamics can apply their models to new regions, conditions, or scenarios, significantly enhancing their modeling capabilities and the generalizability of their models.

For example, in remote sensing applications, researchers can employ domain adaptation techniques to adapt models trained on satellite imagery from one region to perform well on imagery from another region with different land cover types or atmospheric conditions, enabling more accurate and consistent land cover classification or change detection across various regions2. In hydrological modeling, domain adaptation can be used to transfer knowledge from models trained on data-rich river basins to data-scarce basins, allowing for more accurate and reliable predictions of streamflow or water quality in previously underrepresented areas3. In climate science, researchers can apply domain adaptation to fine-tune global climate models to better represent regional-scale climate patterns and variations, leading to improved projections and more effective decision-making for climate adaptation and mitigation strategies4.

1.     Pan, S. J., & Yang, Q. (2010). A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10), 1345-1359. DOI: 10.1109/TKDE.2009.191

2.     Persello, C., & Bruzzone, L. (2017). Domain adaptation for the classification of remote sensing data: An overview of recent advances. IEEE Geoscience and Remote Sensing Magazine, 5(2), 9-29. DOI: 10.1109/MGRS.2017.2688178

3.     Kratzert, F., Klotz, D., Brenner, C., Schulz, K., & Herrnegger, M. (2018). Rainfall–runoff modelling using long short-term memory (LSTM) networks. Hydrology and Earth System Sciences, 22(11), 6005-6022. DOI: 10.5194/hess-22-6005-2018 

4.     Vandal, T., Kodra, E., & Ganguly, A. R. (2017). Intercomparison of machine learning methods for statistical downscaling: the case of daily and extreme precipitation. Theoretical and Applied Climatology, 128(1-2), 337-353. DOI: 10.1007/s00704-015-1705-2 
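One simple, widely used domain-adaptation technique is correlation alignment (CORAL), which matches the first- and second-order statistics of source-domain features to the target domain. The sketch below implements it in NumPy on synthetic "region A" and "region B" feature sets; the data and the regional framing are illustrative assumptions.

```python
import numpy as np

# CORAL-style domain adaptation: re-color source features so their mean and
# covariance match the target domain, so a source-trained classifier
# generalizes better. Data here is synthetic.
def coral(source, target, eps=1e-6):
    cs = np.cov(source, rowvar=False) + eps * np.eye(source.shape[1])
    ct = np.cov(target, rowvar=False) + eps * np.eye(target.shape[1])
    es, vs = np.linalg.eigh(cs)
    et, vt = np.linalg.eigh(ct)
    whiten = vs @ np.diag(es ** -0.5) @ vs.T     # remove source correlations
    color = vt @ np.diag(et ** 0.5) @ vt.T       # impose target correlations
    return (source - source.mean(0)) @ whiten @ color + target.mean(0)

rng = np.random.default_rng(1)
src = rng.normal(loc=0.0, scale=1.0, size=(500, 4))   # e.g. region-A image features
tgt = rng.normal(loc=3.0, scale=2.0, size=(500, 4))   # region-B features, shifted
aligned = coral(src, tgt)
print(np.allclose(aligned.mean(0), tgt.mean(0)))       # True: means now match
```

After alignment, a model trained on the adjusted source features sees inputs whose statistics resemble the target region, which is often enough to recover much of the lost accuracy without any target labels.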

 

  3. Explainable AI (XAI):

a. Interpretability: As AI models, particularly deep learning models, grow in complexity, it becomes increasingly important for Earth surface dynamics modelers to understand the inner workings of these models. Explainable AI (XAI) techniques aim to increase transparency, allowing modelers to gain insights into the decisions and predictions made by AI models1. By employing XAI methods, researchers can build trust in model results, justify their use in specific applications, and more effectively integrate AI models with physics-based models to address complex Earth surface dynamics problems.

For example, in the field of landslide prediction, XAI techniques like Layer-wise Relevance Propagation (LRP) or Local Interpretable Model-agnostic Explanations (LIME) can help researchers understand the importance of various input features, such as slope, soil type, or vegetation, in predicting landslide susceptibility2. This understanding can guide data collection efforts and inform mitigation strategies. In the domain of atmospheric science, XAI can be employed to reveal the relationships between different atmospheric variables and extreme weather events, helping researchers better understand the drivers of these events and improve their predictive models3. In the area of coastal erosion modeling, XAI can help researchers interpret the relative importance of factors like wave action, sediment supply, and human interventions in determining erosion rates and patterns, enabling more targeted and effective coastal management strategies4.

1.     Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: A survey on Explainable Artificial Intelligence (XAI). IEEE Access, 6, 52138-52160. DOI: 10.1109/ACCESS.2018.2870052

2.     Pradhan, B., Lee, S., & Buchroithner, M. F. (2010). A GIS-based back-propagation neural network model and its cross-application and validation for landslide susceptibility analyses. Computers, Environment and Urban Systems, 34(3), 216-235. DOI: 10.1016/j.compenvurbsys.2010.03.001

3.     Toms, B. A., Barnes, E. A., & Ebert-Uphoff, I. (2020). Physically Interpretable Neural Networks for the Geosciences: Applications to Earth System Variability. Journal of Advances in Modeling Earth Systems, 12(9), e2020MS002097. DOI: 10.1029/2020MS002097

4.     Guevara, M., Guevara, M., Palmsten, M., & Sutherland, J. (2021). Explainable machine learning framework to identify and rank wave and sediment transport predictors of shoreline change. Coastal Engineering, 171, 103925. DOI: 10.1016/j.coastaleng.2021.103925
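A model-agnostic way to probe which inputs drive a prediction, related in spirit to the LIME-style explanations mentioned above, is permutation importance: shuffle one feature and measure how much the model's score drops. The sketch below runs it on a synthetic landslide-susceptibility classifier; the feature names and the data-generating rule (slope drives the label) are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic landslide-susceptibility data: by construction, slope drives the label.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 3))                  # columns: slope, soil, vegetation
y = (X[:, 0] + 0.2 * rng.normal(size=600) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Shuffle each feature in turn and record the drop in accuracy.
result = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["slope", "soil", "vegetation"], result.importances_mean):
    print(f"{name}: {imp:.3f}")   # slope should dominate
```

Because the method only needs model predictions, it works for any black-box model, which is exactly what makes it useful when auditing a susceptibility model before using it to guide mitigation decisions.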

 

b. Feature Importance: Identifying the relative importance of input features in AI models is a critical aspect of explainable AI, helping modelers understand the key drivers of Earth surface processes and the underlying mechanisms at play1. By uncovering the most influential features, researchers can focus on collecting high-quality data for those variables, prioritize areas for further study, and refine their models to better represent the relationships between the variables and the processes of interest.

For example, in hydrological modeling, understanding feature importance can reveal the primary factors influencing streamflow, such as precipitation, evapotranspiration, or land use changes2. This knowledge can help modelers optimize their data collection efforts, improve model calibration, and develop more accurate predictions of streamflow under various conditions. In the context of land cover change detection, feature importance analysis can highlight the significance of variables like vegetation indices, land use history, or topography3, guiding researchers in selecting the most relevant input data for their models and improving classification accuracy. In the study of glacier dynamics, understanding the importance of features like temperature, precipitation, or solar radiation can help researchers build more accurate and representative models of glacier mass balance, retreat, or advance, ultimately supporting better predictions of sea-level rise and other related impacts4.

1.     Lundberg, S. M., & Lee, S. I. (2017). A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems (pp. 4765-4774). Link to paper

2.     Demirel, M. C., Booij, M. J., & Hoekstra, A. Y. (2013). Effect of different uncertainty sources on the skill of 10 day ensemble low flow forecasts for two hydrological models. Water Resources Research, 49(8), 4035-4053. DOI: 10.1002/wrcr.20294

3.     Im, J., & Jensen, J. R. (2005). A change detection model based on neighborhood correlation image analysis and decision tree classification. Remote Sensing of Environment, 99(3), 326-340. DOI: 10.1016/j.rse.2005.09.008 

4.     Radić, V., & Hock, R. (2011). Regionally differentiated contribution of mountain glaciers and ice caps to future sea-level rise. Nature Geoscience, 4(2), 91-94. DOI: 10.1038/ngeo1052
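For tree-based models, an importance ranking comes almost for free. The sketch below fits a random forest to synthetic "streamflow" data and reads off its impurity-based importances; the variable names and the generating equation (precipitation dominating) are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic streamflow driven mostly by precipitation, weakly by evapotranspiration.
rng = np.random.default_rng(42)
n = 800
precip = rng.gamma(2.0, 5.0, n)
evapo = rng.normal(3.0, 1.0, n)
landuse = rng.uniform(0, 1, n)
streamflow = 2.0 * precip - 0.5 * evapo + rng.normal(0, 1.0, n)

X = np.column_stack([precip, evapo, landuse])
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, streamflow)

for name, imp in zip(["precipitation", "evapotranspiration", "land use"],
                     model.feature_importances_):
    print(f"{name}: {imp:.2f}")   # precipitation should carry most of the importance
```

In a real study the ranking would then inform where to invest in better observations, for example prioritizing precipitation gauging over land-use mapping if the model says precipitation dominates.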

 

  4. AI-Physics Integration:

a. Physics-Informed Neural Networks (PINNs): PINNs are a class of AI models that incorporate physical equations, principles, and constraints into the neural network training process1. By integrating these physical relationships, PINNs combine the flexibility and adaptability of neural networks with the assurance that predictions adhere to established physical laws, providing Earth surface dynamics researchers with a modeling framework that draws on the strengths of both AI and physics-based models.

For example, in groundwater modeling, researchers can use PINNs to incorporate governing equations like Darcy's Law and the continuity equation2, ensuring that the model's predictions are physically consistent while still benefiting from the flexibility of neural networks in representing complex relationships between input variables. In the field of atmospheric modeling, PINNs can be used to enforce conservation laws for mass, energy, and momentum3, allowing for more accurate predictions of weather and climate patterns that are consistent with fundamental physical principles.

1.     Raissi, M., Perdikaris, P., & Karniadakis, G. E. (2019). Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378, 686-707. DOI: 10.1016/j.jcp.2018.10.045

2.     Karpatne, A., Watkins, W., Read, J., & Kumar, V. (2017). Physics-guided neural networks (PGNN): An application in lake temperature modeling. arXiv preprint arXiv:1710.11431. URL: https://arxiv.org/abs/1710.11431

3.     O'Gorman, P. A., & Dwyer, J. G. (2018). Using machine learning to parameterize moist convection: Potential for modeling of climate, climate change, and extreme events. Journal of Advances in Modeling Earth Systems, 10(10), 2548-2563. DOI: 10.1029/2018MS001351 
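The defining trick of a PINN is that the governing equation itself appears in the loss. The toy sketch below (in the style of Raissi et al., 2019, but far simpler than any real geoscience application) trains a small network so that the ODE residual du/dx + u = 0 and the condition u(0) = 1 are both minimized; the true solution is e^{-x}.

```python
import torch
import torch.nn as nn

# Toy PINN: fit u(x) on [0, 1] so that du/dx + u = 0 and u(0) = 1.
# No data is used at all; the physics supplies the training signal.
torch.manual_seed(0)
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

x = torch.linspace(0, 1, 50).unsqueeze(1).requires_grad_(True)  # collocation points
for _ in range(2000):
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]   # du/dx by autodiff
    loss_physics = ((du + u) ** 2).mean()                        # ODE residual
    loss_bc = ((net(torch.zeros(1, 1)) - 1.0) ** 2).squeeze()    # u(0) = 1
    loss = loss_physics + loss_bc
    opt.zero_grad()
    loss.backward()
    opt.step()

u1 = net(torch.ones(1, 1)).item()
print(round(u1, 2))   # should land close to exp(-1) ≈ 0.37
```

A groundwater or atmospheric PINN follows the same pattern, just with Darcy's Law or conservation-law residuals evaluated by automatic differentiation in place of this one-line ODE.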

 

b. Hybrid Models: Hybrid models combine AI models, such as neural networks, with traditional physics-based models to create more accurate and efficient Earth surface dynamics models1. By integrating the data-driven capabilities of AI with the physical principles and constraints of process-based models, hybrid models take advantage of the strengths of both approaches, improving predictions and enhancing understanding of complex processes.

For example, in flood forecasting, a hybrid model might combine a physics-based hydrodynamic model with an AI-based rainfall prediction model, resulting in improved flood prediction accuracy by leveraging the strengths of both components2. In the field of wildfire modeling, researchers can combine a physics-based fire spread model with an AI model that predicts fuel moisture content or wind patterns3, ultimately enhancing the accuracy and efficiency of wildfire predictions and informing firefighting efforts. In the area of ocean modeling, a hybrid model can merge a physics-based ocean circulation model with an AI model that estimates sea surface temperature or salinity4, leading to more accurate predictions of ocean currents, eddies, and other phenomena relevant to Earth surface dynamics.

 

1.     Tapiador, F. J., Navarro, A., Moreno, R., Jiménez-Alcázar, A., Marcos, C., Tokay, A., ... & Petersen, W. A. (2019). On the optimal measuring area for point rainfall estimation: A dedicated experiment design. Journal of Hydrology, 572, 651-662. DOI: 10.1016/j.jhydrol.2019.03.039

2.     Arabameri, A., Danesh, A. S., Santosh, M., Cerda, A., Pal, S. C., Ghorbanzadeh, O., Roy, P., & Chowdhuri, I. (2022). Flood susceptibility mapping using meta-heuristic algorithms. Geomatics, Natural Hazards and Risk, 13(1), 949-974.

3.     Radke, D., Hessler, A., & Ellsworth, D. (2019). FireCast: Leveraging deep learning to predict wildfire spread. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI) (pp. 4575-4581).

4.     Kani, J. N., & Elsheikh, A. H. (2019). Reduced-order modeling of subsurface multi-phase flow models using deep residual recurrent neural networks. Transport in Porous Media, 126, 713-741. DOI: 10.1007/s11242-018-1170-7
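One common hybrid pattern is residual learning: a physics model provides a first-guess prediction, and a data-driven model learns the systematic mismatch between physics and observations. The sketch below demonstrates this on an entirely synthetic rainfall-runoff example; the crude runoff coefficient, the missing soil term, and all data are assumptions invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hybrid-model sketch: physics first guess + ML residual correction.
rng = np.random.default_rng(7)
rain = rng.gamma(2.0, 4.0, 400)
soil = rng.uniform(0.1, 0.9, 400)

def physics_runoff(rain):
    return 0.4 * rain                   # crude physics: fixed runoff coefficient

# "Observations" contain a soil-moisture effect the physics model misses.
observed = 0.4 * rain + 3.0 * soil**2 + rng.normal(0, 0.2, 400)

# Train the ML component on the physics residual, not on runoff directly.
residual = observed - physics_runoff(rain)
ml = LinearRegression().fit(np.column_stack([rain, soil, soil**2]), residual)

def hybrid_runoff(rain, soil):
    return physics_runoff(rain) + ml.predict(np.column_stack([rain, soil, soil**2]))

rmse_physics = np.sqrt(np.mean((physics_runoff(rain) - observed) ** 2))
rmse_hybrid = np.sqrt(np.mean((hybrid_runoff(rain, soil) - observed) ** 2))
print(rmse_hybrid < rmse_physics)   # True: the correction improves on physics alone
```

Because the physics model still supplies the backbone of the prediction, the hybrid stays physically anchored even where training data is sparse, while the ML component mops up processes the physics omits.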

Lodging and Archiving AI Models in the CSDMS Repository: Key Considerations and Differences 

Question 4:  

What's involved in lodging / archiving an AI model in the CSDMS collection / repository?

Is it different than for 'classic' numerical/process models?

(a) the software,

(b) the training data,

(c) calibrations and QA measures,

(d) the AI service setup, e.g., AWS SageMaker.

 

Answer 4:  

While I haven't personally lodged an AI model in the CSDMS (Community Surface Dynamics Modeling System) collection, I can offer some general pointers to make the process more accessible and valuable for the Earth surface dynamics research community. Keep in mind that lodging an AI model shares similarities with archiving classic numerical or process models, but there are unique aspects to consider.

(a) The Software: As with traditional models, provide the AI model's source code, dependencies, and documentation. Aim for clarity in explaining the model's purpose, structure, and usage. Emphasize good software development practices, such as modularity, comments, and version control, to make it easier for others to understand and maintain your model.

(b) The Training Data: Ensure the dataset used to train the AI model is available. This transparency allows researchers to understand the model's basis, reproduce results, and potentially refine the model using additional or alternative data. Sharing the data can spark new ideas and collaborations, driving innovation in the field.

(c) Calibrations and QA Measures: Clearly communicate your AI model's performance metrics, validation process, and hyperparameter tuning details. Providing this information demonstrates your model's accuracy and reliability, encouraging other researchers to confidently apply your model to various scenarios and advance Earth surface dynamics research.

(d) The AI Service Setup: Guide users through the necessary AI service setup, such as AWS SageMaker, which may be unfamiliar to researchers experienced with classic models. Offer detailed instructions on prerequisites, configurations, and deployment steps, ensuring a smooth experience for those eager to utilize your model.

Lodging an AI model in the CSDMS collection can be approached by focusing on comprehensive documentation, transparency, and guidance. By doing so, you'll contribute to the collaborative advancement of Earth surface dynamics modeling and foster interdisciplinary knowledge exchange, ultimately driving forward the frontiers of scientific discovery.

Lodging and archiving an AI model in the Community Surface Dynamics Modeling System (CSDMS) collection or repository involves several key steps and considerations. While some aspects are similar to archiving traditional numerical/process models, others may differ due to the unique nature of AI models. These considerations include:

  1. The Software:

a. Code Documentation: Both AI and classic models should have well-documented source code to facilitate understanding, reproduction, and modification by other researchers.

b. Dependencies and Libraries: AI models often rely on specific versions of AI frameworks (e.g., TensorFlow, PyTorch) and other libraries. It's essential to clearly list these dependencies to ensure compatibility and reproducibility. Consider containerization (e.g., Docker or Singularity) to capture the full software environment.

c. Licensing: Choose an appropriate open-source license for the AI model, similar to traditional models, to determine how others can use, modify, and distribute the code.

 

  2. The Training Data:

a. Data Documentation: Thorough documentation of the training data, including descriptions of the variables, data sources, and any preprocessing steps, is crucial for AI models.

b. Data Licensing and Access: Make sure to comply with licensing requirements and provide access to the training data, if possible. For large datasets, consider providing a subset or a link to an external repository.

c. Data Formats: Standardize data formats and provide necessary metadata to ensure interoperability and ease of use.

 

  3. Calibrations and QA Measures:

a. Model Validation: Document the validation process and performance metrics for the AI model, including comparisons with traditional models, if applicable.

b. Uncertainty Quantification: Provide information on uncertainty quantification and any sensitivity analyses conducted for the AI model.

c. Reproducibility: Ensure that the model can be easily reproduced by other researchers, including detailed instructions on how to train and test the AI model.

 

  4. The AI Service Setup (e.g., AWS SageMaker):

a. Deployment Instructions: Provide clear instructions on how to deploy the AI model using cloud-based services like AWS SageMaker, if applicable.

b. Cost Considerations: Be transparent about the costs involved in using cloud-based services for training and deploying the AI model. Commercial cloud solutions like SageMaker are cost-effective for small companies and small teams who want to do some ML, but the savings shrink as usage grows, and costs can eventually become excessive for larger groups. https://dl.acm.org/doi/abs/10.1145/3311790.3396642

c. Access and Authentication: Outline the steps for obtaining necessary API keys or authentication credentials for accessing the AI model via cloud services.

 

  5. NSF-funded public resources: CyVerse provides three options relevant to ML, as discussed in this article: https://link.springer.com/article/10.1007/s11192-022-04539-8

a. SaaS (Software as a Service) = Discovery Environment: This option includes managed Kubernetes and HTCondor clusters for launching Jupyter Notebooks with ML tooling installed, or for running jobs on a High-Performance Computing (HPC) system for training ML models, or High-Throughput Computing (HTC) for inference. The Discovery Environment is part of a CyVerse subscription.

b. IaC (Infrastructure as Code) = CACAO: This option allows you to run and manage your own clusters on a commercial cloud platform. With CACAO, you can manage your custom Amazon SageMaker instances in AWS or launch a similar ML cluster on Google Cloud, Microsoft Azure, or OpenStack. This service is also part of a CyVerse subscription. 

c. Run your own CyVerse: CyVerse is open source, and you can find the documentation at https://docs.cyverse.org. By choosing this option, you can set up and manage your CyVerse instance independently. While there are no costs associated with CyVerse itself in this case, running your instance on a large computer cluster will still incur its own costs.

Lodging and archiving an AI model in the CSDMS repository involves several essential steps that are both similar to and different from archiving traditional numerical/process models. By addressing these key considerations, researchers can ensure that their AI models are accessible, reproducible, and useful to the broader Earth surface dynamics modeling community.

GPT-4 

Large Language Models (LLMs), such as GPT-4, have been primarily used for natural language processing tasks, but their potential applications extend to various fields, including Earth data science and Earth surface science. LLMs can be adapted to handle non-textual data and used for tasks like data analysis, feature extraction, and predictions in Earth sciences.

One possible application of LLMs in Earth data science is generating natural language descriptions of complex Earth data patterns or trends. By training LLMs on Earth data, they can generate textual explanations or summaries of the data, making it more accessible and understandable to a wider audience.

LLMs can also handle time-series data, making them suitable for Earth surface science applications involving sequential data, such as weather forecasting, flood prediction, or earthquake monitoring. Fine-tuning LLMs on time-series Earth data allows them to identify important patterns and correlations, ultimately providing more accurate predictions or simulations.

Additionally, transfer learning can be applied to LLMs in the context of Earth data science. Pre-trained LLMs can be fine-tuned using Earth science-specific datasets, enabling the models to learn domain-specific knowledge and provide more accurate predictions, classifications, or regression results.

Some ways in which transformer models, the architecture underlying LLMs like GPT-4, could transfer to Earth data science and Earth surface science include:

  1. Literature review and information retrieval: LLMs can search through vast amounts of scientific literature, identifying relevant articles and summarizing their key findings.
  2. Data analysis and pattern recognition: LLMs can be adapted to work with numerical data and recognize patterns in Earth surface dynamics data.
  3. Model development and optimization: LLMs can aid researchers in developing and optimizing deep learning architectures for better modeling and understanding of Earth surface processes.
  4. Natural language processing for Earth surface dynamics: LLMs can be used to process and analyze textual data related to Earth surface dynamics.
  5. Interdisciplinary collaboration: LLMs can foster interdisciplinary collaboration among researchers by providing insights from various disciplines.
  6. Science communication and public engagement: LLMs can generate accessible, engaging summaries and explanations of complex scientific concepts, making it easier to communicate findings to non-experts, policymakers, and the general public.


Transformer models, introduced by Vaswani et al. (2017), represent a significant breakthrough in NLP and deep learning. They have proved highly scalable, leading to large-scale pre-trained models such as BERT, GPT-3, and RoBERTa, which have achieved state-of-the-art performance across a wide range of NLP tasks. In environmental science, transformer models hold significant potential for applications such as analyzing text-based data, identifying trends, and extracting and organizing environmental data from diverse sources. As research on transformer models continues to advance, their applications in environmental science are expected to expand, offering novel insights and tools for addressing complex environmental challenges.
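At the heart of every transformer is the scaled dot-product attention operation from Vaswani et al. (2017): each position in a sequence attends to every other position, weighted by softmax-normalized query-key similarity. A minimal NumPy sketch, with a toy four-token sequence standing in for, say, a short sensor record, looks like this:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V -- the core transformer operation."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarity
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V, weights

# Toy sequence: 4 "tokens" (e.g. time steps of a record), model width 3
rng = np.random.default_rng(42)
X = rng.normal(size=(4, 3))
out, w = scaled_dot_product_attention(X, X, X)      # self-attention
# `out` has the same shape as X; each row of `w` sums to 1
```

Full transformers stack many such attention layers with learned query, key, and value projections, but this single operation is what lets the architecture relate any two positions in a sequence regardless of their distance, a property as relevant to long environmental records as to sentences.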


References:

  1. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30, 5998-6008.
  2. Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
  3. Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI Blog, 1(8), 9.
  4. Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., ... & Stoyanov, V. (2019). RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
  5. Bing, L., Miao, Z., Li, Y., & Wu, F. (2021). Transformer model for text classification in environmental science. IEEE Access, 9, 55883-55892.
  6. Qi, J., Pan, X., Zhang, X., Liu, J., & Li, X. (2021). Environmental Named Entity Recognition Based on BERT. ISPRS International Journal of Geo-Information, 10(4), 230.
  7. Cheng, X., Lu, X., She, B., & Cheng, L. (2021). Remote Sensing Image Captioning Based on Multi-Head Self-Attention Mechanism. Remote Sensing, 13(1), 49.

Parable of the Chess Grandmaster

In the late 20th century, a growing community of chess enthusiasts and professionals dedicated their lives to mastering the strategic complexities of the game. They honed their skills through friendly matches, tournaments, and studying the moves of grandmasters, striving to outsmart their opponents and climb the ladder of competitive chess.

In the mid-1990s, a team of computer engineers from IBM developed a chess-playing AI program called Deep Blue. The machine was designed to challenge even the most skilled chess players, leveraging its computational power to analyze countless games and positions. Intrigued by the challenge, the reigning world chess champion Garry Kasparov agreed to face Deep Blue in a series of matches.

Although Kasparov won the first match in 1996, Deep Blue defeated him in their rematch in 1997, marking the first time a computer had beaten a reigning world champion in a match under standard chess tournament time controls. This event sent shockwaves through the chess community, as many players worried that the emergence of such a powerful AI would render their years of practice and dedication to the game obsolete.

However, as time went on, they began to notice something remarkable. The AI chess programs, such as Deep Blue and its successors, were not only formidable opponents but also powerful teachers. By analyzing and learning from the AI's innovative strategies and tactics, chess players began to improve their skills at an accelerated pace.

This technological revolution led to a paradigm shift in chess education and training. Players started incorporating AI-based analysis tools into their practice routines, using them to identify weaknesses in their games and explore new strategic possibilities. The collaboration between humans and AI programs ultimately enriched the game of chess, raising the level of play to new heights and paving the way for future generations of chess players.

This historical story can serve as an analogy for the potential impact of large language models (LLMs) on writing and other intellectual pursuits. Just as chess players improved their skills by learning from AI, writers and other professionals may benefit from the insights and knowledge provided by advanced AI models, ultimately elevating the collective human intellect and fostering new discoveries across various domains.


Acknowledgement:

We would like to acknowledge the valuable assistance of the GPT-4 language model by OpenAI, which was utilized as a production tool in the creation of this document1. This research has been generously supported by the National Science Foundation (NSF). We express our gratitude for the resources provided and the opportunity to advance our understanding of Earth surface dynamics through the integration of artificial intelligence techniques.

  1. Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., ... & Amodei, D. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, 1877-1901.