Abstracts

Keynote Address

Dynamic Multi-Resolution Spatial Models  
Noel Cressie
 
Director, Program in Spatial Statistics and Environmental Sciences, The Ohio State University, Columbus, OH
 

The problem of spatial-temporal prediction of global processes, using a model that recognizes multiple resolutions in the spatial domain, is considered. Here, optimal spatial-prediction procedures can be shown to be extremely fast. Similar ideas can be used in the spatial-temporal domain; a vector autoregressive model is assumed at the coarsest resolution and, at each time-point, a multi-resolution spatial structure is modeled. Then the idea is to use Bayesian updating to make the prior distribution of the coarse-resolution process more informative as time proceeds. Our spatial-temporal methodology will be compared to the spatial-only methodology on data from the Total Ozone Mapping Spectrometer (TOMS) instrument on the Nimbus-7 satellite. The material presented in this talk is the result of joint research with Gardar Johannesson (Lawrence Livermore National Labs) and Hsin-Cheng Huang (Academia Sinica).  

Invited Talks

FFT Regression and Cross-Noise Reduction for Comparing Images in Remote Sensing

Gerald L. Anderson1 and Kalman Peleg2
1United States Department of Agriculture, Agricultural Research Service, Sidney, MT
2Agricultural Engineering Department, Technion - Israel Institute of Technology, Haifa 32000, Israel
 

In many remote sensing studies it is desired to quantify the functional relationship between images of a given target that were acquired by different sensors. Such comparisons are problematic because when the pixel values of one image are plotted versus the other, the "cross-noise" is quite high. Typically, the correlation coefficient is quite low, even when the compared images look very much alike. Nevertheless, we can try to quantify the functional relationship between two images by a suitable regression model function Y=f(X), while choosing one of them as "the reference" Y and using the other one as a "predictor" X. The underlying assumption of classical regression is that Y is absolutely correct while X is erroneous. Thus, the objective is to fit X to Y by choosing the parameters of Y=f(X) which minimize the "residuals" (Y - f(X)). When comparing images in remote sensing this objective is not valid because Y itself is error prone. The alternative FFT regression method presented herein comprises a two-stage sensor fusion approach, whereby the initially low correlation between X and Y is increased and the residuals are drastically decreased. First, pairwise image transforms are applied to X and Y whereby the correlation coefficient is increased, e.g. from roughly 0.4 to about 0.8-0.85. A predicted image Yfft is then derived by least squares minimization between the amplitude matrices of X and Y, via the 2D FFT. In the second stage, there are two options. For one-time predictions, the phase matrix of Y is combined with the amplitude matrix of Yfft, whereby an improved predicted image Yplock is formed. Usually, the residuals of Yplock versus Y are about half of the values of Yfft versus Y. For long-term predictions, the phase matrix of a "field mask" is combined with the amplitude matrices of the reference image Y and the predicted image Yfft. The field mask is a binary image of a pre-selected region of interest in X and Y. The resultant images Ypref and Ypred are modified versions of Y and Yfft, respectively. The residuals of Ypred versus Ypref are even lower than the residuals of Yplock versus Y. Images Ypref and Ypred represent a close consensus of two independent imaging methods which view the same target. The practical utility of FFT regression is demonstrated by examples wherein remotely sensed NDVI images X are used for predicting yield distributions in agricultural fields. Reference yield maps Y were derived by combine yield monitors, which measure the flow rate of the crop while it is being harvested. The 2D FFT transforms, as well as all other mathematical operations in this paper, were performed in the MATLAB environment. 
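As an illustration of the amplitude/phase recombination idea described above, the following NumPy sketch scales the amplitude spectrum of a predictor image toward that of a reference image and recombines it with the reference phase. It is a minimal analogue only: the function name, the simulated images, and the simple one-coefficient amplitude fit are assumptions, and the original computations were carried out in MATLAB.

import numpy as np

def fft_regression_predict(X, Y):
    """Stage 1: scale the amplitude spectrum of X toward that of Y by least
    squares; Stage 2 ('one-time prediction'): recombine the fitted amplitudes
    with the phase of the reference image Y."""
    FX, FY = np.fft.fft2(X), np.fft.fft2(Y)
    AX, AY = np.abs(FX), np.abs(FY)
    # Crude stand-in for the amplitude-matrix regression: AY ~ b * AX.
    b = np.sum(AX * AY) / np.sum(AX * AX)
    # Recombine fitted amplitudes with the phase of the reference image.
    return np.real(np.fft.ifft2(b * AX * np.exp(1j * np.angle(FY))))

rng = np.random.default_rng(0)
truth = np.cumsum(np.cumsum(rng.normal(size=(64, 64)), 0), 1)  # smooth field
noise = truth.std()
X = truth + rng.normal(scale=noise, size=truth.shape)        # "predictor" image
Y = 2.0 * truth + rng.normal(scale=noise, size=truth.shape)  # "reference" image
Y_hat = fft_regression_predict(X, Y)
print("corr(X, Y)     =", np.corrcoef(X.ravel(), Y.ravel())[0, 1])
print("corr(Y_hat, Y) =", np.corrcoef(Y_hat.ravel(), Y.ravel())[0, 1])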

Bayesian Wombling: Estimating Spatial Gradients  
Sudipto Banerjee
 
Division of Biostatistics, University of Minnesota 

Spatial process models are now widely used for inference in many areas of application. In such contexts interest is often in the rate of change of a spatial surface at a given location in a given direction. Examples include temperature or rainfall gradients in meteorology, pollution gradients for environmental data, and surface roughness assessment for digital elevation models. Because the spatial surface is viewed as a random realization, all such rates of change are random as well. This talk presents the notion of directional derivative processes, building upon the concept of mean square differentiability. We discuss distribution theory results under the assumption of a stationary Gaussian process model either for the data or for spatial random effects. We present statistical inference under a Bayesian framework which, in this setting, offers several advantages, and we illustrate the methods with a simulated dataset and with a real-estate dataset consisting of selling prices of individual homes. 
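The following toy sketch is only loosely related to the analytical directional derivative processes of the talk: it simulates a stationary Gaussian process (with an assumed squared-exponential covariance) at a location and at a small offset in a chosen direction, and looks at the Monte Carlo distribution of the finite-difference slope. All choices (kernel, range, step size) are assumptions for illustration.

import numpy as np

def sq_exp_cov(locs, sigma2=1.0, phi=0.5):
    """Squared-exponential covariance matrix for a set of 2-D locations."""
    d = np.linalg.norm(locs[:, None, :] - locs[None, :, :], axis=-1)
    return sigma2 * np.exp(-(d / phi) ** 2)

rng = np.random.default_rng(1)
s0 = np.array([0.5, 0.5])                    # location of interest
u = np.array([1.0, 1.0]) / np.sqrt(2.0)      # unit direction
h = 1e-3                                     # finite-difference step
locs = np.vstack([s0, s0 + h * u])

L = np.linalg.cholesky(sq_exp_cov(locs) + 1e-10 * np.eye(2))
Z = L @ rng.normal(size=(2, 5000))           # realizations at the two sites
D = (Z[1] - Z[0]) / h                        # finite-difference slope along u
print("Monte Carlo mean and sd of the directional slope:", D.mean(), D.std())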


Spatial Scaling of Extremes in Climate Models

Dan Cooley1, Philippe Naveau2 and Paul Poncet3
1National Center for Atmospheric Research, Geophysical Statistics Project and CU-Boulder Applied Mathematics Department
2CU-Boulder Applied Mathematics Department and Laboratoire des Sciences du Climat et de l'Environnement, IPSL-CNRS, Gif-sur-Yvette, France
3Ecole Nationale Superieure des Mines de Paris, France

Climate data are recorded at many different scales; global climate models yield data for a grid cell, while weather stations record data at point locations. The extremes of climate data are of interest as they have significant impacts, but little work has been done relating the extreme values at the different scales. We propose a one-parameter model which relates the annual maximum at a point location to the annual maximum on the grid cell, after both have been rescaled to have standard Frechet marginals. Our model preserves the desired property of max-stability and is flexible enough to accommodate spatial structure. The parameter of the model can be understood intuitively and can be shown to be related to the extremal index of the random variables. Finally, we propose an estimator of the extremal index (and thus of the model's parameter) which is based on the madogram.
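Two of the ingredients mentioned above, rescaling annual maxima to standard Frechet margins and computing a madogram-type statistic between paired point and grid-cell maxima, can be sketched as follows. The simulated maxima and the particular empirical madogram used here are illustrative assumptions and are not the estimator proposed in the talk.

import numpy as np
from scipy.stats import genextreme

def to_unit_frechet(x):
    """Fit a GEV to block maxima and map them to standard Frechet margins."""
    c, loc, scale = genextreme.fit(x)
    F = np.clip(genextreme.cdf(x, c, loc=loc, scale=scale), 1e-12, 1 - 1e-12)
    return -1.0 / np.log(F)

rng = np.random.default_rng(2)
common = genextreme.rvs(c=-0.1, size=200, random_state=rng)   # shared extremes
point_max = np.maximum(common, genextreme.rvs(c=-0.1, size=200, random_state=rng))
cell_max = np.maximum(common, genextreme.rvs(c=-0.1, size=200, random_state=rng))

z1, z2 = to_unit_frechet(point_max), to_unit_frechet(cell_max)
# Empirical madogram on the uniform scale: 0.5 * E|F(Z1) - F(Z2)|,
# where F is the unit Frechet distribution function exp(-1/z).
nu = 0.5 * np.mean(np.abs(np.exp(-1.0 / z1) - np.exp(-1.0 / z2)))
print("empirical madogram:", nu)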


Spatial Hierarchical Bayes Model for AOGCM Climate Projections
 
Reinhard Furrer1, Stephan Sain2, Tom Wigley1, and Doug Nychka3
1University Corporation for Atmospheric Research, 2University of Colorado - Denver, and 3National Center for Atmospheric Research

Numerical experiments based on atmosphere-ocean general circulation models (AOGCMs) are one of the primary tools for deriving projections of future climate change. However, each model has its strengths and weaknesses at local and global scales. This motivates climate projections synthesized from the output of several AOGCMs, weighted according to model bias and convergence. We combine present-day observations with present-day and future climate projections in a single hierarchical Bayes model. The challenging aspect is the modeling of a meaningful covariance structure for the spatial processes, and we propose several approaches. The posterior distributions (in this case, of the individual model biases) are obtained with computer-intensive MCMC simulations. The novelty of our approach is that we use gridded, high-resolution data within a spatial framework. The primary data source is provided by the MAGICC/SCENGEN program (Wigley, T.M.L., 2003) and consists of 17 AOGCMs on a 5 by 5 degree grid under several different emission scenarios. We consider variables such as precipitation, temperature, and their minima/maxima. Extensions such as a multivariate approach and heavy-tailed error distributions are discussed. 

Spatial Cluster Detection Using Bayes Factors from Overparameterized Models  
Ronald Gangnon
 
Department of Biostatistics and Medical Informatics, University of Wisconsin - Madison 

We consider a partition model for estimation of regional disease rates and for detection of spatial clusters. Formal inference regarding the number of partitions (or clusters) can be obtained using a reversible jump Markov chain Monte Carlo (RJMCMC) algorithm. As an alternative, we consider models with a fixed, but overly large, number of partitions. We explore the ability of these models to provide informal inferences about the number and locations of clusters using localized Bayes factors. We illustrate these two approaches using the well-known New York leukemia data and data on breast cancer incidence in Wisconsin. 

Spatial Models for the Distribution of Extremes
Eric Gilleland, Doug Nychka and Uli Schneider
Geophysical Statistics Project, National Center for Atmospheric Research, Boulder, CO

Although many statistical methods focus on representing the mean tendencies of a process or population, often the connection to a scientific context is best served by considering extreme values of a distribution. This talk will motivate statistics for extremes using an example based on high values of ozone pollution. The ozone pollution case study will contrast extremes derived from a space-time model with an alternative approach using spatial models for the tail of the distributions. Thus, one benefit from this case study is contrasting complex hierarchical models developed from different perspectives.

Chain Graphs for Spatial Dependence in Ecological Data
Alix Gitelman  
Department of Statistics, Oregon State University, Corvallis, OR 

Graphical models (alternatively, Bayesian belief networks, path analysis models) are increasingly used for modeling complex ecological systems (Varis & Kuikka 1997; Lee 2000; Borsuk, Stow & Reckhow 2003). Their implementation in this context leverages their utility in modeling interrelationships in multivariate systems and, in a Bayesian implementation, their intuitive appeal of yielding easily interpretable posterior probability estimates. Methods for incorporating correlational structure to account for observations collected through time and/or space (features of most ecological data) have not been widely studied, however (Haas 1992 is one exception). In this talk, an "isomorphic" chain graph (ICG) model is introduced to account for correlation between samples by linking site- (time-) specific Bayes network models. Several results show that the ICG preserves many of the Markov properties (conditional and marginal dependencies) of the site- (time-) specific models. The ICG model is compared with a model that does not account for spatial correlation using data from several stream networks in the Willamette Valley, Oregon. 

Modeling Spatial-Temporal Binary Data Using Markov Random Fields  
Hsin-Cheng Huang1, Jun Zhu2, and Jungpin Wu3
1Institute of Statistical Science, Academia Sinica, Taiwan
2Department of Statistics, University of Wisconsin - Madison
3Department of Statistics, Feng Chia University, Taiwan
 

An autologistic regression model consists of a logistic regression of a response variable on explanatory variables and an auto-regression on responses at neighboring locations on a lattice. It is a Markov random field with pairwise spatial dependence and is a popular tool for modeling spatial binary responses. In this article, we add a temporal component to the autologistic model for spatial-temporal binary data. The spatial-temporal autologistic model captures both spatial dependence and temporal dependence simultaneously by a space-time Markov random field. We estimate the model parameters by maximum pseudo-likelihood and obtain optimal prediction of future responses on the lattice by a Gibbs sampler. For illustration, the method is applied to study the outbreaks of southern pine beetle in North Carolina. We also discuss the generality of our approach for modeling other types of spatial-temporal lattice data. 
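The pseudo-likelihood idea can be sketched as follows: each binary response is modeled by a logistic regression on a covariate, the sum of its current spatial neighbours, and its own value at the previous time, and the product of these conditional likelihoods is maximized. The simulated lattice, the particular neighbourhood, and the exact form of the temporal term are assumptions for illustration, not the authors' specification.

import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(3)
n, T = 10, 6                                  # n x n lattice, T time points
x = rng.normal(size=(T, n, n))                # covariate
y = (rng.random(size=(T, n, n)) < 0.4).astype(float)  # toy binary responses

def neighbour_sum(field):
    """Sum of the four nearest lattice neighbours (zeros off the edge)."""
    s = np.zeros_like(field)
    s[1:, :] += field[:-1, :]; s[:-1, :] += field[1:, :]
    s[:, 1:] += field[:, :-1]; s[:, :-1] += field[:, 1:]
    return s

def neg_log_pseudo_lik(theta):
    b0, b1, eta, gamma = theta
    nll = 0.0
    for t in range(1, T):
        lin = b0 + b1 * x[t] + eta * neighbour_sum(y[t]) + gamma * y[t - 1]
        p = np.clip(expit(lin), 1e-12, 1 - 1e-12)
        nll -= np.sum(y[t] * np.log(p) + (1 - y[t]) * np.log(1 - p))
    return nll

fit = minimize(neg_log_pseudo_lik, x0=np.zeros(4), method="BFGS")
print("pseudo-likelihood estimates (b0, b1, eta, gamma):", np.round(fit.x, 3))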

Centering the Effects of Neighbors in Markov Random Field Models  
Mark S. Kaiser, Petruta Caragea, and Kyoji Furukawa
 
Department of Statistics, Iowa State University, Ames, IA 

Models for Markov random fields may be specified on the basis of any number of conditional distributions, but models having Gaussian conditionals are by far the most common in applications. One of the reasons for this is that a Gaussian conditionals model is among the few for which the corresponding joint distribution can be derived in closed form. But models with Gaussian conditionals also possess other characteristics that make them useful. Among these is the expression of conditional expectations in a form that contains a sum of neighboring effects, each of which is taken as a discrepancy of the value from its own marginal mean. A primary consequence of this is that the sum of neighboring effects does not necessarily change as the number of neighbors varies. This is in contrast with general exponential family conditional distributions, which are usually parameterized with sums of "un-centered" neighboring effects contributing to the natural parameter. Locations with many neighbors can have natural parameters that are driven to extreme values, or for which this possibility must be offset by small dependence parameters. As a result, models are often difficult to fit and interpretation of parameters is clouded. We demonstrate that parameterizations exist for exponential family conditional distributions that can allow approximately the same type of "centering" of neighboring values as is used in the Gaussian case. This yields greater interpretability for parameters, greater stability in estimation across models with varying neighborhood size, and can help alleviate edge effects. 

Multi-resolution (Wavelet) Based Non-stationary Covariance Modeling for Incomplete Data: the EM Algorithm  
Tomoko Matsuo and Douglas W. Nychka
 
Geophysical Statistics Project, National Center for Atmospheric Research 

Observational data encountered in most geophysical applications of spatial statistics often consist of a large volume of spatially and temporally incomplete measurements. Furthermore, geophysical spatial processes are often highly non-stationary, and it is important that non-stationary stochastic properties are well represented by the modeled covariance functions. Wavelets are versatile multi-resolution bases for characterizing the stochastic features of a non-stationary spatial field. In this work we augment a method of multi-resolution (wavelet) based non-stationary covariance modeling to handle irregularly distributed observational data. The Expectation-Maximization (EM) algorithm is used to estimate the wavelet-based covariance model parameters, taking advantage of the efficiency of the discrete wavelet transform. 

A Case Study of the Implications of Atmospheric Data Pre-processing Using "Correction Factors"  
Wendy Meiring
 
University of California, Santa Barbara 

Atmospheric processes frequently are measured by several instruments, corresponding to a variety of space-time resolutions and sampling schemes. Types of measurement instruments include surface-based, satellite, balloon-based, and airplane-based instruments. Each instrument exhibits its own measurement error processes, with biases that may depend on atmospheric conditions. Atmospheric observations frequently are pre-processed prior to further analysis. For some atmospheric measurements, this pre-processing may involve "correcting" the data from certain instruments to improve agreement with observations from other measuring instruments. We use functional data analysis methods (in the context of a stratospheric ozone case study based on balloon-based ozonesonde data), to illustrate some of the resulting statistical challenges. We discuss the vital role that analyses of "correction factors" may play in studying data quality, specifically through this case study of balloon-based ozonesonde data which have been "corrected" based on other instruments. Data pre-processing of this type also has implications for attempts to combine data from different measurement instruments. 

Two-Phase Sampling Approach for Augmenting Fixed Grid Designs to Improve Local Estimation for Mapping Aquatic Resources  
Kerry J. Ritter1, Molly Leecaster2, and N. Scott Urquhart3
1Southern California Coastal Water Research Project, Westminster, CA
2Idaho National Engineering & Environmental Laboratory, Idaho Falls, ID
3Department of Statistics, Colorado State University, Fort Collins, CO

Maps are useful tools for understanding, managing, and protecting our marine environment. Despite the benefits, there has been little success in developing useful and statistically defensible maps of environmental quality and aquatic resources in coastal regions. Heterogeneous oceanic conditions often make extrapolation to non-sampled locations questionable. Kriging is a commonly used statistical approach that uses information observed at sampled locations to improve predictions at non-sampled locations. The precision and accuracy of those predictions rely entirely on our ability to capture the spatial variability of the response. Knowing how many samples to collect and how far apart sampling points should be spaced is crucial to our ability to model the spatial variability, or variogram, accurately and hence improve the accuracy of our predictions. We investigate several design strategies for modeling the variogram, where the goal is to provide general guidelines for coastal water monitoring agencies and dischargers to map aquatic resources and contaminants. We also discuss a two-phase sampling approach currently being developed for the San Diego Sanitation District for the purpose of mapping chemical contaminants around their sewage outfall. 
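For reference, the classical empirical (Matheron) semivariogram that underlies these design questions can be computed in a few lines; the simulated sites, response, and lag bins below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(4)
locs = rng.uniform(0, 10, size=(150, 2))              # sampled sites
z = np.sin(locs[:, 0]) + 0.3 * rng.normal(size=150)   # toy response

# All pairwise distances and half squared differences.
d = np.linalg.norm(locs[:, None, :] - locs[None, :, :], axis=-1)
g = 0.5 * (z[:, None] - z[None, :]) ** 2
iu = np.triu_indices_from(d, k=1)
d, g = d[iu], g[iu]

# Bin by distance to get the classical semivariogram estimate.
bins = np.linspace(0, 5, 11)
idx = np.digitize(d, bins)
for k in range(1, len(bins)):
    if np.any(idx == k):
        print(f"lag up to {bins[k]:.1f}: gamma = {g[idx == k].mean():.3f}")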

Practical Issues and Tools for Modeling Spatio-temporal Trends in Atmospheric Pollutant Monitoring Data  
Paul Sampson  
University of Washington, Seattle, WA

There is a substantial literature on methods for modeling trends in space-time monitoring data of atmospheric pollutants. The choice of approach will reasonably depend on the spatio-temporal scales of the monitoring data as well as the scientific aims of the analysis, such as testing for long-term regional trends, computing various metrics of long-term exposure for chronic health effect models, or utilizing spatio-temporal trends in the computation of spatial estimates of exposure for both long-term and acute health effects analyses. Any such modeling and analysis must consider the common issues of temporal and spatial correlation. Here I focus on the modeling of spatially varying seasonal structure for purposes of spatial estimation. I introduce a simple but flexible approach to modeling spatio-temporally varying seasonality and long-term trend in terms of basis functions derived from a singular value decomposition of the space-by-time data matrix. Demonstrations are provided for analyses of data from ozone and particulate matter monitoring networks.
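A minimal sketch of the SVD step is given below: the space-by-time data matrix is decomposed and the leading right singular vectors serve as temporal basis functions for seasonality and trend, with site-specific coefficients given by the scaled left singular vectors. The simulated seasonal data and the choice of two components are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(5)
n_sites, n_times = 30, 365
t = np.arange(n_times)
seasonal = np.sin(2 * np.pi * t / 365.0)
trend = t / 365.0
# Each site gets its own amplitude for the seasonal cycle and long-term trend.
D = (rng.uniform(0.5, 2.0, (n_sites, 1)) * seasonal
     + rng.uniform(-0.5, 0.5, (n_sites, 1)) * trend
     + 0.2 * rng.normal(size=(n_sites, n_times)))

U, s, Vt = np.linalg.svd(D - D.mean(axis=1, keepdims=True), full_matrices=False)
basis = Vt[:2]                    # leading temporal basis functions
coefs = U[:, :2] * s[:2]          # site-specific coefficients
print("variance explained by two components:", (s[:2] ** 2).sum() / (s ** 2).sum())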

 

Characterization of Spatial Variability in Soil and Crop Properties in Agricultural Fields Using Spatial Statistical Methods
John F. Shanahan and James S. Schepers
 
USDA-ARS, Univ. of Nebraska, Lincoln, NE 

Currently, most crop production inputs like irrigation, fertilizers, and pesticides are applied at uniform rates across agricultural landscapes. However, because of inherent spatial variability in most landscapes, not all field areas require the same level of inputs, resulting in either under- or over-application. Hence, crop yields and economic returns may be limited in some areas due to suboptimal input levels, while environmental contamination may occur in over-application areas, especially for nitrogen fertilizer. The goal of precision agriculture is to evaluate existing and develop new geospatial technologies that may provide a means of varying input application rates based on the spatial variation present in landscapes. One recent approach in precision agriculture has focused on the use of management zones (MZ) as a means to characterize landscape variability and provide a basis for more efficient input application. Management zones are defined as field areas possessing homogeneous soil conditions, resulting in similar crop yield potential, input-use efficiency, and environmental impact. The objectives of our work were to determine: 1) whether landscape attributes of topography, soil color, and apparent electrical conductivity (ECa) could be used to delineate MZ that characterize spatial variation in soil chemical properties as well as corn yields, and 2) whether temporal variability affects the expression of yield spatial variability. The work was conducted on an irrigated cornfield near Gibbon, NE. Landscape attributes, including a soil color aerial image (red, green, and blue bands), elevation, and ECa, were acquired for the field. A georeferenced soil-sampling scheme was used to determine soil chemical properties (soil pH, EC, P, and organic matter). Georeferenced yield monitor data were collected for five growing seasons. The five landscape attributes were aggregated into four MZ using principal component analysis (PCA) and unsupervised classification of PC scores, which produced four well-defined MZ for the field. All the soil chemical properties differed among the four MZ. Spatial patterns for yield and MZ, as determined by semivariogram analysis, were similar in the three of five seasons receiving average precipitation; however, the patterns were less similar in wet and dry seasons. These results illustrate the significant role temporal variability plays in altering crop yield spatial variability. Implications of these findings for developing precision agriculture technologies for corn production systems will be discussed.
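The zone-delineation step, PCA of the landscape attributes followed by unsupervised classification of the PC scores, can be sketched as follows; k-means is used here as a stand-in for the unsupervised classifier, and the simulated attributes are assumptions for illustration.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
n_pixels = 2000
# Five landscape attributes: red, green, blue soil-colour bands, elevation, ECa.
attrs = rng.normal(size=(n_pixels, 5))
attrs[:, 3] += 2.0 * attrs[:, 4]              # make elevation and ECa correlated

scores = PCA(n_components=3).fit_transform((attrs - attrs.mean(0)) / attrs.std(0))
zones = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)
print("pixels per management zone:", np.bincount(zones))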

Comparison of Design-Based and Model-Based Techniques for Selecting Spatially Balanced Samples of Environmental Resources  
Don L. Stevens, Jr.
 
Department of Statistics, Oregon State University, Corvallis, OR 

It is widely recognized that an efficient sample of a spatially distributed resource will have some degree of regularity. For example, locating sample points at the nodes of a regular grid is an optimal model-based design for some semivariograms and domain shapes. Locating points becomes more complicated if the domain has an irregular shape or if the design incorporates existing sample points. In this talk, I review some model-based techniques, such as simulated spatial annealing, for incorporating prior knowledge in locating new sample points. These techniques are contrasted with design-based techniques, such as generalized random tessellation stratification, that can also incorporate prior knowledge and existing sample points. The inferences that result from the different approaches are also discussed. 

Some New Spatial Statistical Models for Stream Networks  
Jay M. Ver Hoef1, Erin Poston2, and David M. Theobald3
1Alaska Fish and Game, Fairbanks, AK
2Department of Geosciences, Colorado State University, Fort Collins, CO
3Natural Resources Ecology Laboratory, Colorado State University, Fort Collins, CO

Models for spatial autocorrelation depend on the distance and direction separating two locations, and are constrained so that for all possible sets of locations, the covariance matrices implied from the models remain nonnegative definite. Although there are extensive sets of families of models for two-dimensional space, few models have been developed for stream networks. The only known model that is valid for stream networks is an exponential model, and it is based on stream distance. Even this model may not be appropriate when considering flow characteristics of streams. Recent research has shown that moving-average functions, also known as kernel convolutions, may be used to generate a large class of valid, flexible models in two dimensions. This paper develops moving average models for stream networks. The moving average models are easily applied to stream network situations; two general classes of models are those based on stream distance, and those that incorporate flow. An interesting property of flow models is that they have discontinuities at stream junctions that are not present for stream distance models. Flow models are more appropriate when considering variables such as stream chemistry, while distance models may be more appropriate for variables such as fish abundance. We give examples, including a flow model based on stream chemistry variables from northern Alaska.
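The basic moving-average (kernel convolution) construction referred to above can be sketched in one dimension: white noise on a fine grid is smoothed by a kernel, and the resulting covariance, being the autoconvolution of the kernel, is automatically valid. The stream-network and flow-based versions developed in the talk are more involved; the Gaussian kernel and grid below are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(7)
grid = np.linspace(0, 10, 2001)               # fine grid carrying the white noise
du = grid[1] - grid[0]
W = rng.normal(size=grid.size) * np.sqrt(du)  # approximate white-noise increments

def moving_average_process(sites, bandwidth=0.5):
    """Z(s) = sum_j k(s - u_j) W_j with a Gaussian moving-average kernel k."""
    k = np.exp(-0.5 * ((sites[:, None] - grid[None, :]) / bandwidth) ** 2)
    return k @ W

sites = np.linspace(1, 9, 50)
Z = moving_average_process(sites)
print("sample variance of the constructed process:", Z.var())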

Exploring Spatio-temporal Patterns in Lyme Disease Incidence and Reporting, 1992-2000
Lance A. Waller
Department of Biostatistics, Rollins School of Public Health, Emory University, Atlanta, GA

The observed spatial pattern of vector-borne disease incidence is a function of spatial patterns of host populations, vector populations, host-vector contact, risk of transmission, and disease reporting.  For an emerging infection, the pattern of reporting may not be homogeneous across space and time as physicians learn to diagnose and report the illness.  Using reported county-level Lyme disease incidence from 1992-2000 in the north-eastern United States, we use exploratory methods to examine observed patterns in reports, and patterns in "no reports" in order to investigate evidence of evolving reporting practices.  The observed patterns suggest future directions for the analysis and monitoring of reports of emerging infectious diseases, and, more generally for the development of infectious disease risk maps. 

 
Contributed Talks

Bayesian Inferences on Environmental Exceedances and Their Spatial Locations  
Peter F. Craigmile, Noel Cressie, Thomas J. Santner, and Youlan Rao
 
The Ohio State University, Columbus, OH 

A frequent problem in environmental science is the prediction of extrema and exceedances. It is well known that Bayesian and empirical-Bayesian predictors based on integrated squared error loss tend to "overshrink" predictions of extrema toward the mean. In this talk, we propose a new loss function called the integrated weighted quantile squared error loss (IWQSEL) as the basis for prediction of exceedances and their spatial location. The loss function is based on an ordering of the underlying spatial process using a spatially averaged cumulative distribution function. We illustrate this methodology with a Bayesian analysis of surface-nitrogen concentrations in the Chesapeake Bay. 

Test for Interaction between Marks and Points of a Marked Point Process
Yongtao Guan
 
University of Miami, Coral Gables, FL 

Irregularly spaced spatial data are often modeled by marked point processes. A common assumption when modeling such phenomena is that the observations (i.e., marks) and observation locations (i.e., points) are independent. This, however, is a questionable assumption in many applications. In this paper, I present several new approaches that are useful for investigating interaction between the marks and points of a marked point process. Applications of these methods are illustrated through both simulations and real data analysis. 

Modeling Transport Effects on Ground-Level Ozone Using a Non-Stationary Space-Time Model  
Nan-Jung Hsu
 
Institute of Statistics, National Tsing-Hua University (Taiwan) 

This article presents a novel autoregressive space-time model for ground-level ozone data, which models not only the spatio-temporal dynamics of hourly ozone concentrations but also the relationships between ozone concentrations and meteorological variables. The proposed model has a nonseparable spatio-temporal covariance function that depends on wind speed and wind direction, and hence is nonstationary in both time and space. The ozone concentration for a given location and time is assumed to be directly influenced by ozone concentrations at neighboring locations at the previous time, via a weight function of space-time dynamics driven by wind speed and wind direction. To our knowledge, the proposed method is the first to incorporate the transport effect of ozone into the spatio-temporal covariance structure. Moreover, it uses a computationally efficient space-time Kalman filter and can compute optimal spatio-temporal predictions at any location and time very quickly for given meteorological conditions. Ozone data from Taipei are used for illustration, in which the model parameters are estimated by maximum likelihood. 

Generating Spatially Correlated Count Data  
Lisa Madsen and Dan Dalthorp
 
Department of Statistics, Oregon State University 

Spatially correlated count data are frequently encountered in ecological applications, and new methods for analyzing such data are rapidly being developed. To help evaluate the performance of statistical methodologies for correlated count data, we have developed a new technique for simulating non-negative integer-valued random variables with specified mean, variance, and correlation structure. Our technique is derived from an observation by Holgate (1964) that correlated Poisson random variables Y1 and Y2 can be written as Y1 = X + X1 and Y2 = X + X2, where X, X1, and X2 are independent, but Y1 and Y2 are correlated because they share a common component X. Park and Shin (1998) generalized Holgate's approach to vectors of n correlated random variables that belong to infinitely divisible distributions. Our technique extends that of Park and Shin to random variables that are not restricted to infinitely divisible distributions. Among other scenarios, this allows simulation of spatially correlated negative binomials, underdispersed discrete random variables, Poissons, and combinations of these. There are restrictions on the degree of correlation that may be attained, but the method is less restrictive than the commonly used alternative of modeling correlated discrete counts in a Poisson-lognormal hierarchy with correlation derived from embedded correlated normals.
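The Holgate construction quoted above can be used directly to generate a pair of correlated Poisson counts; the means and target correlation below are illustrative, and the extensions to n spatially correlated sites and to non-Poisson marginals described in the talk require additional machinery.

import numpy as np

rng = np.random.default_rng(8)
lam1, lam2, rho = 4.0, 6.0, 0.4               # target means and correlation
lam0 = rho * np.sqrt(lam1 * lam2)             # mean of the shared component
assert lam0 <= min(lam1, lam2), "requested correlation not attainable"

n = 100_000
X = rng.poisson(lam0, n)                      # common component
Y1 = X + rng.poisson(lam1 - lam0, n)          # Y1 = X + X1
Y2 = X + rng.poisson(lam2 - lam0, n)          # Y2 = X + X2
print("achieved correlation:", np.corrcoef(Y1, Y2)[0, 1])
print("means:", Y1.mean(), Y2.mean())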

Posters

Hierarchical Bayesian Models for Seasonal Radio Telemetry Habitat Data  
Megan Dailey1 and Alix Gitelman2
1Department of Statistics, Colorado State University
2Department of Statistics, Oregon State University

Radio telemetry data used for habitat selection studies typically consists of a sequence of habitat types for each individual indicating habitat use over time. Existing models for estimating habitat selection probabilities have incorporated covariates in an independent multinomial selections (IMS) model (McCracken et al., 1998) and an extension of the IMS to include a persistence parameter (Ramsey and Usner, 2003). These models assume that all parameters are fixed through time. However, this may not be a realistic assumption in radio telemetry studies that run through multiple seasons. We extend the IMS and persistence models using a hierarchical Bayesian approach that allows for the selection probabilities, the persistence parameter, or both, to change with season. These extensions are particularly important when movement patterns are expected to be different between seasons, or when availability of a habitat changes throughout the study period due to weather or migration. The models are motivated by radio telemetry data for fish in which seasonal differences are expected and evident in the data. 

Distribution Function Estimation in Small Areas for Aquatic Resources  
Mark Delorey
 
Department of Statistics, Colorado State University 

Data from a surface water monitoring program, the Temporally Integrated Monitoring of Ecosystems (TIME), are used to study trends in acid deposition in surface water. The TIME data consist of a probability sample of lakes and streams. One of the tools used to evaluate characteristics of acidity is the cumulative distribution function of slope trends (acid concentration/year). An understanding of the distribution of these slopes helps evaluate the impact of the Clean Air Act Amendments of 1990. For example, the proportion of lakes whose acidic concentration has been decreasing can be estimated. A hierarchical model is constructed to describe these slopes as functions of available auxiliary information; constrained Bayes techniques are used to estimate the ensemble of slope values. Spatial relationships are represented by incorporating a conditional autoregressive model into the constrained Bayes methods.

Predicting Temperatures in High Tunnels at Different Geographical Locations Using a Statistical Model
Anil K Jayaprakash1,3, Laurie Hodges2,3, Kent Eskridge1, Daryl Travnicek1
1Department of Statistics, 2Department of Agronomy, 3Department of Horticulture, University of Nebraska, Lincoln, NE

High tunnels are low-cost greenhouses that are often used to extend the growing season for high-value crops. Profitability for small-scale intensive farms relies on selection and timing of diverse crops to optimize use of these structures. Temperature conditions within the tunnels depend on manual manipulation of sidewalls for ventilation. One of the limiting factors is not knowing what conditions to expect within a high tunnel given a set of outside conditions. Knowledge of the time lag for conditions to change inside the tunnel relative to outside conditions would therefore assist in managing high tunnels, especially when the operator is away from the farm.

The objective here is to analyze temperature conditions in high tunnels located in Lincoln, Nebraska with a statistical model and to use the results to predict conditions within high tunnels from external climatic parameters. This model will then be tested against data from similar high tunnels located in Kansas and Missouri. The results obtained can be used to predict soil temperature in a high tunnel environment (based on detrended hourly outside air temperature) over an extended period of time, possibly several months. The addition of relative humidity and wind speed may improve the precision of this model. The air temperature is modeled in an extended sinusoidal form (Fourier series), and the derived form drives the soil temperature (soil temperature follows air temperature according to Newtonian cooling). Output from PROC NLIN (SAS Institute) is explained. The temperature predictions from one location and their usefulness at other locations are also illustrated. 
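A toy illustration of the stated model structure, a one-term sinusoid for hourly air temperature driving soil temperature through Newtonian cooling, is sketched below in Python. The original analysis used PROC NLIN in SAS, and every parameter value here is an assumption chosen only to show the lag and damping of the soil response.

import numpy as np

hours = np.arange(0, 24 * 7, dtype=float)                    # one week, hourly
T_air = 20.0 + 8.0 * np.sin(2 * np.pi * (hours - 9) / 24.0)  # sinusoidal air temp

k = 0.15                          # assumed Newtonian cooling rate (per hour)
T_soil = np.empty_like(T_air)
T_soil[0] = 20.0
for i in range(1, len(hours)):
    # Euler step of dT_soil/dt = k * (T_air - T_soil)
    T_soil[i] = T_soil[i - 1] + k * (T_air[i - 1] - T_soil[i - 1])

print("air range :", T_air.min().round(1), "to", T_air.max().round(1))
print("soil range:", T_soil.min().round(1), "to", T_soil.max().round(1))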


Microclimate Aspects of High Tunnels
 
Anil K Jayaprakash1,3 and Laurie Hodges2,3
1Department of Statistics, 2Department of Agronomy, and 3Department of Horticulture, University of Nebraska, Lincoln, NE

The environment surrounding a crop, farm, or forest and influencing its ecology is termed the microclimate. For years producers in the Great Plains have wondered how successful their operations would be if only they could control or moderate the most variable factor affecting crop production: Mother Nature. Because the Central Great Plains has highly variable weather, with rapid changes in temperature through extremes of zones, high tunnel crop production is a boon for small-scale specialty producers. High tunnels, defined as unheated plastic greenhouses, are used throughout the world to extend the production season for high-value crops. They consist of a series of evenly spaced bows, or hoops, that provide the structural support for a plastic covering. They are normally covered with a single layer of 6-mil greenhouse-grade polyethylene film; they are vented when temperatures rise by manually rolling up the sides, and the sides are closed during winter to prevent loss of heat. Closing the sides in the winter/cold season results in accumulation of solar radiation. Opening and closing the sidewalls affect the air and soil temperatures, i.e., the microclimate, which in turn influences the physical, chemical, and biological processes occurring in the soil. Unlike greenhouses, which have a mechanized way of regulating temperature, high tunnels are low-cost structures and can be manually operated with ease. As microclimate affects plant growth through variation in temperature, moisture, solar gain, and wind speed, this poster looks at the various microclimate aspects and paves the way for an economical way of growing and protecting crops.

Space-time Modeling of Daily Sulfate Levels Combining Observations and CMAQ Output
Mikyoung Jun and Michael L. Stein
University of Chicago

The Models-3/Community Multiscale Air Quality (CMAQ) model is a numerical, deterministic model which produces concentration and deposition levels of multiple air pollutants over the US and parts of Canada. Our goal is to build a space-time model of observations and CMAQ output of daily sulfate levels, which leads us to develop space-time maps of sulfate concentrations. Certain space-time correlations are used to understand the space-time covariance structures of the observations and the CMAQ output. These correlations show that developing statistical models for the difference between CMAQ output and observations should be easier than developing models for the observations or CMAQ output directly. Statistical evidence against separable covariance functions, which are often used for space-time or multivariate spatial processes, is also given through these correlations. Space-time modeling results for the difference between CMAQ output and observations based on various space-time covariance functions will be compared. The development of space-time covariance functions on a sphere across time will be discussed.

Approximating the Likelihood through Expert Opinion
Ana Rappold
Institute of Statistics and Decision Sciences, Duke University 

A primary focus of environmental research in recent decades has been the detection of anthropogenic impacts on the global climate. In oceanographic research, detection of an anthropogenic signal has focused on monitoring the heat content and salinity of ocean waters. The ocean is believed to be a critical sink of atmospheric heat. As such, any observed long-term trends in the heat content of the ocean, although weak, have an important role in explaining discrepancies between observed and predicted atmospheric temperatures over the past century of increased greenhouse gases and global warming. In this paper we study the ocean's heat by analyzing the characteristics of the mixed layer M, defined as the top, nearly uniform-temperature layer of the surface waters formed through wind-driven turbulent mixing and convection. The depth of the mixed layer evolves with the annual cycle of surface temperatures and is spatially correlated, but because of the non-stationary nature of the problem it is never possible to have multiple observations under the same conditions. However, we wish to acquire a model P(data, M) on the basis of which we can analyze anomalies in the annual cycle of M over the past century using all available data. Although the sampling distribution is not known, an expert is able to evaluate his or her prior and posterior beliefs about M, before and after observing a coarse discrete realization of a thermal profile, respectively. While the observed thermal profile contains information about M, M by itself does not characterize the distribution of the profile. Therefore, the usual solution of re-parameterization and integration over nuisance parameters is not appropriate. We propose the following scheme: find a functional form for g(M | data) that yields posterior uncertainty approximating that of the expert when he or she is not given any other knowledge, such as yearday or latitude, which could lead to re-evaluation of the prior. The likelihood may then be taken to be proportional to this functional form. Such an algorithmic likelihood need not be a normalized probability measure or have a parameter-independent statistic, but it is robust and is believed to lead to valid posterior uncertainty. The performance of the algorithmic likelihood is compared with one obtained by proper probability modeling. Finally, having the joint distribution P(data, M), we can model the spatial-temporal dependence of M with a hierarchical approach.

 

Graybill Conference
June 16-18, 2004
University Park Holiday Inn
Fort Collins, CO 80526
www.stat.colostate.edu/graybillconference
email: nsu at stat.colostate.edu Fax: (970)491-7895 Phone: (970)491-5269