Data analysis techniques for the coastal zone


Here we introduce a series of Coastal Wiki articles dealing with data analysis techniques. The general aim of data analysis methods is to find a small number of functions that describe, with sufficient accuracy, the spatial and temporal structure of the data, ideally in terms of external forcing factors. The data analysis techniques presented in the Coastal Wiki are:

  • Linear regression
  • Principal component analysis, empirical orthogonal functions and singular spectrum analysis
  • Wavelets
  • Artificial neural networks
  • Kriging
  • Random Forest Regression
  • Support Vector Regression

Each technique has advantages and disadvantages. The most suitable technique depends on the problem at hand and on the quantity and quality of the available data. Table 1 below provides some guidance for choosing the most appropriate technique for the analysis of data on coastal processes; a minimal illustrative code sketch follows each entry.


Table 1. Comparison of data analysis techniques. For each technique the strengths, limitations and a typical application example are listed.

Linear regression analysis
  Strengths:
    • Trend detection (linear, nonlinear) from data records
    • Robust, cheap, easy to implement
  Limitations:
    • Data errors must be uncorrelated and Gaussian distributed
    • Error margins of interpolations and extrapolations are underestimated
    • Trend functions are arbitrarily chosen
  Application example: Trend analysis
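
A minimal sketch of a linear trend fit with numpy, assuming a short, hypothetical record of annual mean shoreline positions (all values are illustrative only):

    import numpy as np

    # Hypothetical annual mean shoreline positions (m) relative to a baseline
    years = np.arange(2010, 2020)
    position = np.array([12.1, 11.8, 11.5, 11.9, 11.2, 10.8, 10.9, 10.4, 10.1, 9.8])

    # Ordinary least-squares fit of a linear trend: position ~ slope * year + intercept
    slope, intercept = np.polyfit(years, position, deg=1)
    print(f"trend: {slope:.3f} m/year")

A nonlinear trend can be fitted in the same way by raising the polynomial degree, at the cost of the arbitrariness in the choice of trend function noted above.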

Principal component analysis, empirical orthogonal functions and singular spectrum analysis
  Strengths:
    • Techniques are basically the same
    • Can handle large data sets
    • Identification of 'hidden' spatial (1D, 2D) or temporal patterns
    • Guides interpretation towards underlying processes
    • Enables data reduction and noise removal
  Limitations:
    • Bias towards variables with high variance
    • Less suited than wavelets in case of phase-shifted patterns
  Application example: Identification of patterns in large datasets
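
A minimal sketch of an EOF/PCA decomposition with scikit-learn, assuming a hypothetical data matrix in which rows are survey times and columns are alongshore positions (the synthetic data are illustrative only):

    import numpy as np
    from sklearn.decomposition import PCA

    # Synthetic data matrix: rows = survey times, columns = alongshore positions
    rng = np.random.default_rng(0)
    data = rng.normal(size=(50, 20))

    pca = PCA(n_components=3)
    scores = pca.fit_transform(data)   # temporal amplitudes (principal components)
    eofs = pca.components_             # spatial patterns (empirical orthogonal functions)
    print(pca.explained_variance_ratio_)

Standardizing the columns beforehand reduces the bias towards high-variance variables mentioned above.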

Wavelets
  Strengths:
    • Analysis of irregular, non-cyclic and nonlinear processes
    • Can handle large data sets
    • Enables data reduction and noise removal
    • Guides interpretation towards underlying processes
  Limitations:
    • Requires equidistant data
    • Not suited for small data records
    • Less performant than Fourier or harmonic analysis in case of regular cyclic processes
  Application example: Analysis of phenomena with strong spatial and temporal variation
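
A minimal sketch of a continuous wavelet transform with the PyWavelets package, assuming an equidistant, synthetic water-level record containing a transient oscillation (signal and scales are illustrative only):

    import numpy as np
    import pywt

    # Equidistant synthetic record with a transient oscillation around t = 5
    t = np.linspace(0, 10, 1000)
    signal = np.sin(2 * np.pi * t) * np.exp(-((t - 5) ** 2))

    # Continuous wavelet transform with a Morlet wavelet over a range of scales
    scales = np.arange(1, 64)
    coefficients, frequencies = pywt.cwt(signal, scales, 'morl')
    print(coefficients.shape)   # (number of scales, number of samples)

The squared coefficients show how the energy of the oscillation is localized in both time and scale.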

Artificial Neural Networks
  Strengths:
    • Prediction tool based on machine learning from training data
    • Can handle complex nonlinear systems
    • Identification of major influencing factors
  Limitations:
    • Predictions only within the range of the trained situations
    • Black box prediction tool
    • Requires large datasets
    • No general prescription for optimal network design
    • Possibly unreliable results due to overfitting
    • No guarantee of convergence to the optimal solution
  Application example: Prediction of features driven by multiple external factors
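
A minimal sketch of a feed-forward neural network regression with scikit-learn, assuming hypothetical forcing factors (e.g. wave height, wave period, surge level) and a synthetic response variable (data and network settings are illustrative only):

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor

    # Synthetic forcing factors and response (e.g. dune erosion volume)
    rng = np.random.default_rng(1)
    X = rng.uniform(size=(500, 3))
    y = 2.0 * X[:, 0] + np.sin(3.0 * X[:, 1]) + 0.1 * rng.normal(size=500)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = MLPRegressor(hidden_layer_sizes=(20, 10), max_iter=2000, random_state=0)
    model.fit(X_train, y_train)
    print("test R^2:", model.score(X_test, y_test))

The hold-out test set gives a first check against overfitting; the choice of hidden-layer sizes remains a matter of trial and error, as noted above.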

Kriging
  Strengths:
    • Optimal interpolation method if errors in the data are spatially or temporally correlated
    • Provides uncertainty estimate
    • Can handle non-uniform sampling
  Limitations:
    • Assumption that correlations of data deviations from the interpolated function decrease with distance
    • Data records must be either in the space domain or in the time domain
  Application example: Data records with variability at a wide range of scales
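
A minimal sketch of kriging-type interpolation using Gaussian process regression in scikit-learn, which is closely related to kriging; the depth samples along a transect are hypothetical and the kernel choice is an assumption:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Irregularly spaced synthetic depth samples along a cross-shore transect
    rng = np.random.default_rng(2)
    x_obs = np.sort(rng.uniform(0, 100, size=30)).reshape(-1, 1)
    depth = -5.0 - 0.05 * x_obs.ravel() + 0.05 * rng.normal(size=30)

    # RBF kernel: correlation between observations decays with distance
    kernel = RBF(length_scale=10.0) + WhiteKernel(noise_level=0.01)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(x_obs, depth)

    x_grid = np.linspace(0, 100, 200).reshape(-1, 1)
    depth_mean, depth_std = gp.predict(x_grid, return_std=True)  # estimate + uncertainty

Dedicated kriging packages (e.g. PyKrige) additionally expose the variogram model explicitly.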

Support Vector Regression
  Strengths:
    • Prediction tool based on machine learning from training data
    • Handles unstructured data and nonlinear relationships in high-dimensional spaces
    • Performs both classification and regression
    • Robust method based on sound mathematical principles
    • Efficient for small datasets
    • Overfitting can be easily avoided
  Limitations:
    • Black box; no easy interpretation of results, no probability estimates
    • Sensitive to noise and outliers
    • Less efficient for large datasets
    • Not reliable outside the range of trained situations
    • Results influenced by the choice of the kernel transformation
  Application example: Pattern recognition from images, e.g. interpretation of remote sensing images
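
A minimal sketch of support vector regression with scikit-learn, assuming hypothetical wave parameters as predictors and a synthetic target variable; kernel and regularization settings are illustrative only:

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    # Synthetic predictors (e.g. wave height, wave period) and target (e.g. runup height)
    rng = np.random.default_rng(3)
    X = rng.uniform(size=(200, 2))
    y = np.exp(X[:, 0]) * X[:, 1] + 0.05 * rng.normal(size=200)

    # Scaling matters for SVR; the RBF kernel choice strongly influences the result
    model = make_pipeline(StandardScaler(), SVR(kernel='rbf', C=10.0, epsilon=0.01))
    model.fit(X, y)
    print(model.predict(X[:5]))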

Random Forest Regression
  Strengths:
    • Prediction tool based on machine learning from training data
    • Handles nonlinear relationships
    • Performs both classification and regression
    • Resilient to data noise and data gaps
    • Computationally efficient
    • Low overfitting risk
  Limitations:
    • Black box; no easy interpretation of results, no probability estimates
    • Less efficient when many trees are used
    • Not reliable outside the range of trained situations
  Application example: Time series forecasting; pattern recognition from images, e.g. interpretation of remote sensing images
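
A minimal sketch of random forest regression with scikit-learn, assuming hypothetical forcing factors and a synthetic response (all data and settings are illustrative only):

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Synthetic forcing factors and response (e.g. suspended sediment concentration)
    rng = np.random.default_rng(4)
    X = rng.uniform(size=(300, 4))
    y = X[:, 0] ** 2 + X[:, 1] * X[:, 2] + 0.1 * rng.normal(size=300)

    forest = RandomForestRegressor(n_estimators=200, random_state=0)
    forest.fit(X, y)
    print("feature importances:", forest.feature_importances_)

The feature importances give a rough indication of the major influencing factors, although they do not open the black box entirely.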


Related articles

Linear regression analysis of coastal processes
Analysis of coastal processes with Empirical Orthogonal Functions
Wavelet analysis of coastal processes
Artificial Neural Networks and coastal applications
Data interpolation with Kriging
Random Forest Regression
Support Vector Regression


The main authors of this article are Job Dronkers, Grzegorz Rozynski, Vanessa Magar and James Sutherland.
Please note that others may also have edited the contents of this article.