In principle, yes! Especially if you are using geographical covariates, mapping occupancy, or predicting occupancy across space. Imperfect detection and spatial autocorrelation are two important issues to deal with in ecological sampling.
What is spatial autocorrelation?
Things that are closer together in space tend to be more similar than things farther apart. Like temporal autocorrelation, spatial autocorrelation measures the tendency for similar values to cluster based on proximity. This complicates statistical analyses that rely on assumptions of independence.
Positive spatial autocorrelation: if a large value is observed at point A, large values will also tend to be observed at neighboring points.
Negative spatial autocorrelation: if a large value is observed at point A, small values will tend to be observed at neighboring points.
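A quick way to see positive spatial autocorrelation in numbers is Moran's I. The sketch below computes it from scratch in base R on invented data (a smooth spatial gradient on a 10 x 10 grid, with hypothetical inverse-distance weights); values well above the null expectation of roughly -1/(n - 1) indicate positive autocorrelation.

```r
# Toy illustration (simulated data): Moran's I computed from scratch in base R,
# using an inverse-distance weight matrix.
set.seed(1)
coords <- expand.grid(x = 1:10, y = 1:10)        # 100 sites on a regular grid
z <- coords$x + coords$y + rnorm(100, sd = 0.5)  # values change smoothly in space

d <- as.matrix(dist(coords))  # pairwise distances between sites
w <- 1 / d                    # inverse-distance weights
diag(w) <- 0                  # a site is not its own neighbor

n  <- length(z)
zc <- z - mean(z)
moran_i <- (n / sum(w)) * sum(w * outer(zc, zc)) / sum(zc^2)
moran_i  # positive: nearby sites have similar values
```

Shuffling `z` randomly across the grid would push `moran_i` toward zero, which is the "no spatial structure" baseline.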
Spatial Autocorrelation
What leads to spatial autocorrelation in species distributions?
In the past, the way to incorporate spatial autocorrelation in your occupancy model was to code it yourself in BUGS or JAGS. More recently, some models coded in Stan incorporated spatial autocorrelation through the package ubms, but in a limited way. Building an occupancy model with spatial autocorrelation in Stan involves adding a spatial random effect to the occupancy submodel, often with a Conditional Autoregressive (CAR) or Gaussian Process (GP) prior. Acknowledging that nearby sites are not independent improves the accuracy of occupancy estimates and of their relationship with environmental covariates. However, CAR models are best suited for areal data, such as sites organized in a grid (polygons) or counties in a state.
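To make the "spatial random effect with a GP prior" idea concrete, here is a minimal base-R sketch of the data-generating side: a spatial effect `w` drawn from a Gaussian process with an exponential covariance is added to the occupancy linear predictor on the logit scale. All parameter values (`sigma.sq`, `phi`, the betas) are invented for illustration.

```r
# Minimal sketch (simulated data, hypothetical parameters): a spatial random
# effect w with exponential covariance, added to the occupancy submodel.
set.seed(42)
n.sites <- 50
coords  <- cbind(runif(n.sites), runif(n.sites))  # site locations
d       <- as.matrix(dist(coords))                # pairwise distances

sigma.sq <- 1.5                      # spatial variance (assumed)
phi      <- 3                        # distance-decay parameter (assumed)
Sigma    <- sigma.sq * exp(-phi * d) # exponential covariance matrix

# draw w ~ MVN(0, Sigma) via the Cholesky factor
w <- drop(t(chol(Sigma)) %*% rnorm(n.sites))

beta0 <- 0; beta1 <- 1
x   <- rnorm(n.sites)                  # an occupancy covariate
psi <- plogis(beta0 + beta1 * x + w)   # occupancy probability, logit scale
z   <- rbinom(n.sites, 1, psi)         # latent occupancy states
```

Fitting reverses this: the model estimates `sigma.sq` and `phi` from the data, which is exactly the part that is expensive at many sites and that NNGP approximations speed up.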
Recently, the R package spOccupancy came out. It was designed for efficient fitting of single-species and multi-species spatial occupancy models using Pólya-Gamma data augmentation, and it leverages Nearest Neighbor Gaussian Processes (NNGPs) for scalability with large datasets.
This code was adapted from: https://github.com/doserjef/acoustic-spOccupancy-22/blob/main/code/single-species-example.R
datos_distinct <- cam_deploy_image |>
  distinct(longitude, latitude, deployment_id, samp_year) |>
  as.data.frame()

# Fix NA camera 16
datos_distinct[16, ] <- c(-77.2787, 7.73855, "CT-K1-31-124", 2021)

projlatlon <- "+proj=longlat +datum=WGS84 +no_defs +ellps=WGS84 +towgs84=0,0,0"

datos_sf <- st_as_sf(
  x = datos_distinct,
  coords = c("longitude", "latitude"),
  crs = projlatlon
)

mapview(st_jitter(datos_sf, 0.00075), zcol = "samp_year")
Notice we used the function st_jitter() because the points from one year sit on top of those from the previous year.
Extract site covariates
Using the coordinates of the sf object (datos_sf), we overlay the cameras on the covariate rasters and extract the covariate values with the function terra::extract().
In this case we used as covariates:
Cattle distribution, as the number of cows per 10 square kilometers (Gilbert et al. 2018).
# load rasters
per_tree_cov <- rast("C:/CodigoR/WCS-CameraTrap/raster/latlon/Veg_Cont_Fields_Yearly_250m_v61/Perc_TreeCov/MOD44B_Perc_TreeCov_2021_065.tif")
road_den <- rast("C:/CodigoR/WCS-CameraTrap/raster/latlon/RoadDensity/grip4_total_dens_m_km2.asc")
# elev <- rast("D:/CORREGIDAS/elevation_z7.tif")
landcov <- rast("C:/CodigoR/WCS-CameraTrap/raster/latlon/LandCover_Type_Yearly_500m_v61/LC1/MCD12Q1_LC1_2021_001.tif")
cattle <- rast("C:/CodigoR/WCS-CameraTrap/raster/latlon/Global cattle distribution/5_Ct_2010_Da.tif")
# river <- st_read("F:/WCS-CameraTrap/shp/DensidadRios/MCD12Q1_LC1_2001_001_RECLASS_MASK_GRID_3600m_DensDrenSouthAmer.shp")

# get elevation map
# elevation_detailed <- rast(get_elev_raster(sites, z = 10, clip = "bbox", neg_to_na = TRUE))
# elevation_detailed <- get_elev_point(datos_sf, src = "aws", overwrite = TRUE)

# extract covs using points and add to sites
# covs <- cbind(sites, terra::extract(SiteCovsRast, sites))
per_tre <- terra::extract(per_tree_cov, datos_sf)
roads <- terra::extract(road_den, datos_sf)
# eleva <- terra::extract(elevation_detailed, sites)
land_cov <- terra::extract(landcov, datos_sf)
cattle_den <- terra::extract(cattle, datos_sf)

# drop geometry; note st_coordinates() returns X (longitude) in column 1
# and Y (latitude) in column 2
sites <- datos_sf %>%
  mutate(
    lon = st_coordinates(.)[, 1],
    lat = st_coordinates(.)[, 2]
  ) %>%
  st_drop_geometry() |>
  as.data.frame()

# remove decimals, convert to factor
sites$land_cover <- factor(land_cov$MCD12Q1_LC1_2021_001)
# sites$elevation <- eleva$file3be898018c3
sites$per_tree_cov <- per_tre$MOD44B_Perc_TreeCov_2021_065

# fix 200 issue
ind <- which(sites$per_tree_cov == 200)
sites$per_tree_cov[ind] <- 0

# sites$elevation <- elevation_detailed$elevation
sites$roads <- roads$grip4_total_dens_m_km2
sites$cattle <- cattle_den[, 2]
# write.csv(sites, "C:/CodigoR/CameraTrapCesar/data/katios/stacked/site_covs.csv")
Selecting the first year 2021
Here we use the function detectionHistory() from the package camtrapR to generate species detection histories that can be used later in occupancy analyses with the packages unmarked and ubms. detectionHistory() generates detection histories in different formats, with adjustable occasion length, occasion start time, and effort covariates. Notice we first need to get the camera operation dates using the function cameraOperation().
Code
# filter first year and make unique
CToperation_2021 <- cam_deploy_image |> # multi-season data
  filter(samp_year == 2021) |>
  group_by(deployment_id) |>
  mutate(minStart = min(start_date), maxEnd = max(end_date)) |>
  distinct(longitude, latitude, minStart, maxEnd, samp_year) |>
  ungroup() |>
  as.data.frame()

# Fix NA camera 16
CToperation_2021[16, ] <- c("CT-K1-31-124", -77.2787, 7.73855, "2021-10-10", "2021-12-31", 2021)

# make sampling year numeric
CToperation_2021$samp_year <- as.numeric(CToperation_2021$samp_year)

# camera operation matrix for 2021
# multi-season data, season 1
camop_2021 <- cameraOperation(
  CTtable = CToperation_2021,    # operation table
  stationCol = "deployment_id",  # column defining the station
  setupCol = "minStart",         # camera setup date column
  retrievalCol = "maxEnd",       # camera retrieval date column
  sessionCol = "samp_year",      # multi-season column
  # hasProblems = TRUE,          # were there camera failures?
  dateFormat = "%Y-%m-%d"        # date format
)
# cameraCol = "CT")
# sessionCol = "samp_year")

# Generate the detection histories ---------------------------------------
# remove problem species
# ind <- which(datos_PCF$Species == "Marmosa sp.")
# datos_PCF <- datos_PCF[-ind, ]

# filter year 1
datay_2021 <- cam_deploy_image |>
  filter(samp_year == 2021) # |>
  # filter(samp_year == 2022)

DetHist_list_2021 <- lapply(unique(datay_2021$scientificName), FUN = function(x) {
  detectionHistory(
    recordTable = datay_2021,    # record table
    camOp = camop_2021,          # camera operation matrix
    stationCol = "deployment_id",
    speciesCol = "scientificName",
    recordDateTimeCol = "timestamp",
    recordDateTimeFormat = "%Y-%m-%d %H:%M:%S",
    species = x,                 # the function replaces x with each species
    occasionLength = 15,         # collapse the histories into 15-day occasions
    day1 = "station",            # start at each station's survey date
    datesAsOccasionNames = FALSE,
    includeEffort = TRUE,
    scaleEffort = FALSE,
    unmarkedMultFrameInput = TRUE,
    timeZone = "America/Bogota"
  )
})

# names
names(DetHist_list_2021) <- unique(datay_2021$scientificName)

# Finally, create a new list holding only the detection histories
ylist_2021 <- lapply(DetHist_list_2021, FUN = function(x) x$detection_history)
# and the effort
effortlist_2021 <- lapply(DetHist_list_2021, FUN = function(x) x$effort)

### Danta, Jaguar
which(names(ylist_2021) == "Tapirus bairdii")
#> integer(0)
which(names(ylist_2021) == "Panthera onca")
#> [1] 5
Fitting a spatial model for the Jaguar
This is a single species, single season spatial occupancy model.
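Before fitting, it helps to keep the model's two coupled processes in mind: a latent occupancy state z per site, and detections that can only happen where z = 1. This minimal base-R sketch simulates that data-generating process with hypothetical values of psi and p (the site and occasion counts mirror the 31 cameras and 6 occasions used below).

```r
# Single-season occupancy model as a simulation (hypothetical psi and p):
# occupancy is latent; detection only occurs where z = 1.
set.seed(7)
n.sites <- 31  # number of camera stations
n.occ   <- 6   # number of repeated occasions
psi <- 0.4     # occupancy probability (assumed)
p   <- 0.25    # per-occasion detection probability (assumed)

z <- rbinom(n.sites, 1, psi)  # latent occupancy state per site

# detection history: z * p recycles row-wise, so unoccupied sites give all 0s
y <- matrix(rbinom(n.sites * n.occ, 1, z * p),
            nrow = n.sites, ncol = n.occ)
```

The fitting problem is the inverse: given only `y` (which confounds absence with non-detection), estimate psi and p, which is why the repeated occasions are essential.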
Load the data
Code
jaguar <- read.csv("C:/CodigoR/CameraTrapCesar/data/katios/stacked/y_jaguar_stacked.csv") |>
  filter(year == 2021)

# remove one NA
jaguar <- jaguar[-15, ]

projlatlon <- "+proj=longlat +datum=WGS84 +no_defs +ellps=WGS84 +towgs84=0,0,0"

datos_jaguar_sf <- st_as_sf(
  x = jaguar,
  coords = c("lon", "lat"),
  crs = projlatlon
)
Notice we collapsed the events into 15-day occasions in the 2021 sampling season, and into 25-day occasions in the 2022 sampling season, ending with 6 repeated observations in the matrix. In the matrix, o1 to o6 are observations and e1 to e6 are sampling effort (detection covariates). land_cover, per_tree_cov and roads are site covariates (occupancy covariates).
Load and prepare data
First we transform to UTM to get the coordinates in meters. Notice that the camera coordinates are part of the data fed to the model. Next we assemble a list including the coordinates, the detection history data and the covariates.
Code
# 1. Data prep ------------------------------------------------------------
# transform to utm
datos_sf_2021_utm <- datos_jaguar_sf |>
  # filter(samp_year == 2021) |>
  st_transform(crs = 21891) # |> left_join(jaguar)

jaguar_covs <- jaguar[, c(8, 9, 16:19)]
# jaguar_covs$year <- as.factor(jaguar_covs$year)

jaguar.data <- list()
jaguar.data$coords <- st_coordinates(datos_sf_2021_utm)
jaguar.data$y <- jaguar[, 2:7]
jaguar.data$occ.covs <- jaguar_covs
jaguar.data$det.covs <- list(effort = jaguar[10:15])
We are going to fit a non-spatial model and a spatial one, both using effort as the detection covariate.
Non-spatial, single-species occupancy model
Code
# 2. Model fitting --------------------------------------------------------
# Fit a non-spatial, single-species occupancy model.
out <- PGOcc(
  occ.formula = ~ scale(per_tree_cov) + scale(roads) + scale(cattle),
  det.formula = ~ scale(effort),
  data = jaguar.data,
  n.samples = 50000,
  n.thin = 2,
  n.burn = 5000,
  n.chains = 3,
  n.report = 500
)
#> ----------------------------------------
#> 	Preparing to run the model
#> ----------------------------------------
#> ----------------------------------------
#> 	Model description
#> ----------------------------------------
#> Occupancy model with Polya-Gamma latent
#> variable fit with 31 sites.
#> 
#> Samples per Chain: 50000 
#> Burn-in: 5000 
#> Thinning Rate: 2 
#> Number of Chains: 3 
#> Total Posterior Samples: 67500 
#> 
#> Source compiled with OpenMP support and model fit using 1 thread(s).
#> 
#> ----------------------------------------
#> 	Chain 1
#> ----------------------------------------
#> Sampling ... 
#> Sampled: 500 of 50000, 1.00%
#> -------------------------------------------------
#> (sampling progress reported every 500 samples for chains 1 to 3; output truncated)

summary(out)
#> 
#> Call:
#> PGOcc(occ.formula = ~scale(per_tree_cov) + scale(roads) + scale(cattle), 
#>     det.formula = ~scale(effort), data = jaguar.data, n.samples = 50000, 
#>     n.report = 500, n.burn = 5000, n.thin = 2, n.chains = 3)
#> 
#> Samples per Chain: 50000
#> Burn-in: 5000
#> Thinning Rate: 2
#> Number of Chains: 3
#> Total Posterior Samples: 67500
#> Run Time (min): 0.2357
#> 
#> Occurrence (logit scale): 
#>                        Mean     SD    2.5%     50%  97.5%   Rhat   ESS
#> (Intercept)         -0.2903 1.2359 -2.2379 -0.4809 2.5691 1.0003  5052
#> scale(per_tree_cov) -0.8858 0.9595 -3.1113 -0.7618 0.7359 1.0002 12404
#> scale(roads)         0.1216 0.8368 -1.5802  0.1150 1.8346 1.0002 13728
#> scale(cattle)       -1.0761 1.2691 -3.5833 -1.0551 1.6764 1.0006 10193
#> Detection (logit scale): #> Mean SD 2.5% 50% 97.5% Rhat ESS#> (Intercept) -2.2010 0.6126 -3.3767 -2.2152 -0.9890 1.0005 7092#> scale(effort) 0.9317 0.6986 -0.1411 0.8276 2.5666 1.0019 15301
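In equations, the model fit by `PGOcc()` is the standard single-species occupancy model, with sites \(i\) and repeat surveys \(j\) (covariate names abbreviated; in the code they are the scaled versions):

```latex
z_i \sim \mathrm{Bernoulli}(\psi_i), \qquad
\mathrm{logit}(\psi_i) = \beta_0 + \beta_1\,\mathrm{tree}_i + \beta_2\,\mathrm{roads}_i + \beta_3\,\mathrm{cattle}_i
```

```latex
y_{ij} \mid z_i \sim \mathrm{Bernoulli}(z_i\, p_{ij}), \qquad
\mathrm{logit}(p_{ij}) = \alpha_0 + \alpha_1\,\mathrm{effort}_{ij}
```

The latent occupancy state \(z_i\) separates true absence from non-detection: a site can be occupied (\(z_i = 1\)) yet yield all-zero detection histories when \(p_{ij}\) is low, which is exactly the imperfect-detection problem the model corrects for.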
Spatial, single-species occupancy model
Code
# Fit a spatial, single-species occupancy model.
out.sp <- spPGOcc(occ.formula = ~ scale(per_tree_cov) + scale(roads) + scale(cattle),
                  det.formula = ~ scale(effort),
                  data = jaguar.data,
                  n.neighbors = 8,
                  n.batch = 1000,
                  batch.length = 15,  # n.samples = n.batch * batch.length = 15000
                  n.thin = 2,
                  n.burn = 5000,
                  n.chains = 3,
                  n.report = 500)
#> ----------------------------------------
#> Preparing to run the model
#> ----------------------------------------
#> ----------------------------------------
#> Building the neighbor list
#> ----------------------------------------
#> ----------------------------------------
#> Building the neighbors of neighbors list
#> ----------------------------------------
#> ----------------------------------------
#> Model description
#> ----------------------------------------
#> NNGP Spatial Occupancy model with Polya-Gamma latent
#> variable fit with 31 sites.
#> 
#> Samples per chain: 15000 (1000 batches of length 15)
#> Burn-in: 5000 
#> Thinning Rate: 2 
#> Number of Chains: 3 
#> Total Posterior Samples: 15000 
#> 
#> Using the exponential spatial correlation model.
#> 
#> Using 8 nearest neighbors.
#> 
#> Source compiled with OpenMP support and model fit using 1 thread(s).
#> 
#> Adaptive Metropolis with target acceptance rate: 43.0
#> ... (per-chain sampling progress omitted) ...

summary(out.sp)
#> 
#> Call:
#> spPGOcc(occ.formula = ~scale(per_tree_cov) + scale(roads) + scale(cattle), 
#>     det.formula = ~scale(effort), data = jaguar.data, n.neighbors = 8, 
#>     n.batch = 1000, batch.length = 15, n.report = 500, n.burn = 5000, 
#>     n.thin = 2, n.chains = 3)
#> 
#> Samples per Chain: 15000
#> Burn-in: 5000
#> Thinning Rate: 2
#> Number of Chains: 3
#> Total Posterior Samples: 15000
#> Run Time (min): 0.2518
#> 
#> Occurrence (logit scale): 
#>                        Mean     SD    2.5%     50%  97.5%   Rhat  ESS
#> (Intercept)         -0.4121 1.2646 -2.4663 -0.5876 2.4283 1.0101 1095
#> scale(per_tree_cov) -0.9107 0.9959 -3.1644 -0.8002 0.8238 1.0002 2393
#> scale(roads)         0.1835 0.8791 -1.5658  0.1602 2.0462 1.0028 2802
#> scale(cattle)       -1.0747 1.2490 -3.6105 -1.0372 1.5406 1.0030 2608
#> 
#> Detection (logit scale): 
#>                  Mean     SD    2.5%     50%   97.5%   Rhat  ESS
#> (Intercept)   -2.1603 0.6055 -3.3162 -2.1735 -0.9694 1.0102 1456
#> scale(effort)  0.9237 0.6830 -0.1340  0.8241  2.5300 1.0004 3999
#> 
#> Spatial Covariance: 
#>            Mean     SD   2.5%    50%  97.5%   Rhat  ESS
#> sigma.sq 1.3810 3.0588 0.1861 0.6701 7.0483 1.1559  582
#> phi      0.0202 0.0113 0.0012 0.0204 0.0388 1.0005 2476
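In equations, the spatial model adds a site-level random effect \(w(\mathbf{s}_i)\) to the occupancy submodel, where \(\mathbf{w}\) follows a Gaussian process (approximated here by an NNGP with 8 neighbors) with the exponential correlation function reported in the model description:

```latex
\mathrm{logit}(\psi_i) = \mathbf{x}_i^\top \boldsymbol{\beta} + w(\mathbf{s}_i),
\qquad
\mathbf{w} \sim \mathcal{N}(\mathbf{0}, \boldsymbol{\Sigma}),
\qquad
\Sigma_{ij} = \sigma^2 \exp(-\phi\, d_{ij})
```

Here `sigma.sq` (\(\sigma^2\)) is the spatial variance, `phi` (\(\phi\)) the decay parameter, and \(d_{ij}\) the distance between sites \(i\) and \(j\). For the exponential model the effective spatial range (the distance at which correlation decays to about 0.05) is roughly \(3/\phi\), in the units of the site coordinates.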
Model validation
We perform a posterior predictive check to assess model fit.
Code
# 3. Model validation ------------------------------------------------------
# Perform a posterior predictive check to assess model fit.
ppc.out <- ppcOcc(out, fit.stat = 'freeman-tukey', group = 1)
ppc.out.sp <- ppcOcc(out.sp, fit.stat = 'freeman-tukey', group = 1)
# Calculate a Bayesian p-value as a simple measure of goodness of fit.
# Bayesian p-values between 0.1 and 0.9 indicate adequate model fit.
summary(ppc.out)
#> 
#> Call:
#> ppcOcc(object = out, fit.stat = "freeman-tukey", group = 1)
#> 
#> Samples per Chain: 50000
#> Burn-in: 5000
#> Thinning Rate: 2
#> Number of Chains: 3
#> Total Posterior Samples: 67500
#> 
#> Bayesian p-value: 0.2724 
#> Fit statistic: freeman-tukey

summary(ppc.out.sp)
#> 
#> Call:
#> ppcOcc(object = out.sp, fit.stat = "freeman-tukey", group = 1)
#> 
#> Samples per Chain: 15000
#> Burn-in: 5000
#> Thinning Rate: 2
#> Number of Chains: 3
#> Total Posterior Samples: 15000
#> 
#> Bayesian p-value: 0.2818 
#> Fit statistic: freeman-tukey
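The Freeman–Tukey fit statistic used by `ppcOcc()` compares observed and expected detection frequencies on a square-root scale (with `group = 1`, the data are first binned by site), and the Bayesian p-value is the probability that data replicated from the fitted model fit worse than the observed data:

```latex
T(\mathbf{y}, \boldsymbol{\theta}) = \sum_{i} \left( \sqrt{y_i} - \sqrt{\mathbb{E}[y_i \mid \boldsymbol{\theta}]} \right)^2,
\qquad
p_B = \Pr\!\left( T(\mathbf{y}^{\mathrm{rep}}, \boldsymbol{\theta}) \ge T(\mathbf{y}, \boldsymbol{\theta}) \mid \mathbf{y} \right)
```

Values of \(p_B\) near 0 or 1 signal systematic lack of fit; both models here (0.27 and 0.28) fall comfortably inside the 0.1–0.9 range.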
Model comparison
Let's compare the two models: the non-spatial model and the spatial one.
Code
# 4. Model comparison ------------------------------------------------------
# Compute the Widely Applicable Information Criterion (WAIC).
# Lower values indicate better model fit.
waicOcc(out)
#>      elpd        pD      WAIC 
#> -32.51043   3.57828  72.17741

waicOcc(out.sp)
#>       elpd         pD       WAIC 
#> -31.121864   4.526053  71.295834
Look at the Widely Applicable Information Criterion (WAIC): lower values indicate better expected predictive performance. Here the spatial model (out.sp) has the lower WAIC (71.30 vs. 72.18), so the model that accounts for spatial autocorrelation is preferred.
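The three columns reported by `waicOcc()` combine as:

```latex
\mathrm{WAIC} = -2\,(\mathrm{elpd} - p_D)
```

where elpd is the expected log pointwise predictive density and \(p_D\) the effective number of parameters. For example, for the non-spatial model, \(-2 \times (-32.510 - 3.578) \approx 72.18\), matching the reported value.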
Look at the traceplots
For the spatial model.
Code
MCMCtrace(out.sp$beta.samples,
          params = c("scale(per_tree_cov)"),
          type = 'trace',
          pdf = FALSE,
          Rhat = TRUE,
          n.eff = TRUE)
Code
MCMCtrace(out.sp$beta.samples,
          type = 'both',
          pdf = FALSE,
          Rhat = FALSE,
          n.eff = TRUE)
Code
# Density plot for per_tree_cov
MCMCtrace(out.sp$beta.samples,
          params = c("scale(per_tree_cov)"),
          ISB = FALSE,
          exact = TRUE,
          pdf = FALSE,
          post_zm = TRUE,
          type = 'density',
          Rhat = TRUE,
          n.eff = TRUE,
          ind = TRUE)
All Rhat values should, in theory, be below 1.1. If any parameter has an Rhat of 1.1 or greater, the chains have likely not converged, and the sampler was probably not particularly efficient or effective for that parameter.
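For reference, the Gelman–Rubin statistic reported as Rhat compares between-chain and within-chain variability. For \(m\) chains of \(n\) draws each,

```latex
\hat{R} = \sqrt{\frac{\frac{n-1}{n}\,W + \frac{1}{n}\,B}{W}}
```

where \(W\) is the mean within-chain variance and \(B\) the between-chain variance. Values near 1 indicate the chains are sampling from the same distribution; large values mean the chains disagree and should be run longer.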
Finally, predict occupancy along a gradient of per_tree_cov, holding the other covariates at their means. Note that predict() is called on the non-spatial model (out) here; predicting from the spatial model (out.sp) would additionally require the coordinates of the prediction locations.
Code
# 6. Prediction ------------------------------------------------------------
# Predict occupancy along a gradient of forest cover.
# Create a set of values across the range of observed forest values.
forest.pred.vals <- seq(min(jaguar.data$occ.covs$per_tree_cov),
                        max(jaguar.data$occ.covs$per_tree_cov),
                        length.out = 100)
# Scale predicted values by the mean and standard deviation used to fit the model.
forest.pred.vals.scale <- (forest.pred.vals - mean(jaguar.data$occ.covs$per_tree_cov)) /
  sd(jaguar.data$occ.covs$per_tree_cov)
# Predict occupancy across forest values at mean values of all other variables.
pred.df <- as.matrix(data.frame(intercept = 1,
                                forest = forest.pred.vals.scale,
                                roads = 0,
                                cattle = 0))
out.pred <- predict(out, pred.df)
str(out.pred)
#> List of 3
#>  $ psi.0.samples: 'mcmc' num [1:67500, 1:100] 0.908 0.844 0.683 0.903 0.94 ...
#>   ..- attr(*, "mcpar")= num [1:3] 1 67500 1
#>  $ z.0.samples  : 'mcmc' int [1:67500, 1:100] 1 1 0 1 1 0 1 1 1 1 ...
#>   ..- attr(*, "mcpar")= num [1:3] 1 67500 1
#>  $ call         : language predict.PGOcc(object = out, X.0 = pred.df)
#>  - attr(*, "class")= chr "predict.PGOcc"
psi.0.quants <- apply(out.pred$psi.0.samples, 2, quantile, prob = c(0.025, 0.5, 0.975))
psi.plot.dat <- data.frame(psi.med = psi.0.quants[2, ],
                           psi.low = psi.0.quants[1, ],
                           psi.high = psi.0.quants[3, ],
                           forest = forest.pred.vals)
ggplot(psi.plot.dat, aes(x = forest, y = psi.med)) +
  geom_ribbon(aes(ymin = psi.low, ymax = psi.high), fill = "grey70") +
  geom_line() +
  theme_bw() +
  scale_y_continuous(limits = c(0, 1)) +
  labs(x = "Forest (% tree cover)", y = "Occupancy Probability")
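What predict() does under the hood: for every posterior draw \(s\) and each prediction point \(\mathbf{x}_0\), it computes

```latex
\psi_0^{(s)} = \mathrm{logit}^{-1}\!\left( \mathbf{x}_0^\top \boldsymbol{\beta}^{(s)} \right)
```

so psi.0.samples holds 67,500 draws for each of the 100 forest values, and the ribbon in the plot is simply the 2.5% and 97.5% quantiles of those draws at each value of forest cover.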
Note the very wide credible intervals: with only 31 sites, this analysis is purely illustrative.
Allaire, JJ, Yihui Xie, Christophe Dervieux, Jonathan McPherson, Javier Luraschi, Kevin Ushey, Aron Atkins, et al. 2024. rmarkdown: Dynamic Documents for R. https://github.com/rstudio/rmarkdown.
Appelhans, Tim, Florian Detsch, Christoph Reudenbach, and Stefan Woellauer. 2023. mapview: Interactive Viewing of Spatial Data in R. https://CRAN.R-project.org/package=mapview.
Doser, Jeffrey W., Andrew O. Finley, and Sudipto Banerjee. 2023. “Joint Species Distribution Models with Imperfect Detection for High-Dimensional Spatial Data.” Ecology, e4137. https://doi.org/10.1002/ecy.4137.
Doser, Jeffrey W., Andrew O. Finley, Marc Kéry, and Elise F. Zipkin. 2022. “spOccupancy: An R Package for Single-Species, Multi-Species, and Integrated Spatial Occupancy Models.” Methods in Ecology and Evolution 13: 1670–78. https://doi.org/10.1111/2041-210X.13897.
Doser, Jeffrey W., Andrew O. Finley, Sarah P. Saunders, Marc Kéry, Aaron S. Weed, and Elise F. Zipkin. 2024. “Modeling Complex Species-Environment Relationships Through Spatially-Varying Coefficient Occupancy Models.” Journal of Agricultural, Biological, and Environmental Statistics. https://doi.org/10.1007/s13253-023-00595-6.
Gabry, Jonah, Daniel Simpson, Aki Vehtari, Michael Betancourt, and Andrew Gelman. 2019. “Visualization in Bayesian Workflow.” J. R. Stat. Soc. A 182: 389–402. https://doi.org/10.1111/rssa.12378.
Gilbert, Marius, Gaëlle Nicolas, Giusepina Cinardi, Thomas P Van Boeckel, Sophie O Vanwambeke, G R William Wint, and Timothy P Robinson. 2018. “Global distribution data for cattle, buffaloes, horses, sheep, goats, pigs, chickens and ducks in 2010.” Scientific Data 5 (1): 180227. https://doi.org/10.1038/sdata.2018.227.
Hollister, Jeffrey, Tarak Shah, Jakub Nowosad, Alec L. Robitaille, Marcus W. Beck, and Mike Johnson. 2023. elevatr: Access Elevation Data from Various APIs. https://doi.org/10.5281/zenodo.8335450.
Meijer, Johan R, Mark A J Huijbregts, Kees C G J Schotten, and Aafke M Schipper. 2018. “Global patterns of current and future road infrastructure.” Environmental Research Letters 13 (6): 64006. https://doi.org/10.1088/1748-9326/aabd42.
Niedballa, Jürgen, Rahel Sollmann, Alexandre Courtiol, and Andreas Wilting. 2016. “camtrapR: An R Package for Efficient Camera Trap Data Management.” Methods in Ecology and Evolution 7 (12): 1457–62. https://doi.org/10.1111/2041-210X.12600.
Pebesma, Edzer. 2018. “Simple Features for R: Standardized Support for Spatial Vector Data.” The R Journal 10 (1): 439–46. https://doi.org/10.32614/RJ-2018-009.
R Core Team. 2024. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing. https://www.R-project.org/.
Wickham, Hadley, Mara Averick, Jennifer Bryan, Winston Chang, Lucy D’Agostino McGowan, Romain François, Garrett Grolemund, et al. 2019. “Welcome to the tidyverse.” Journal of Open Source Software 4 (43): 1686. https://doi.org/10.21105/joss.01686.
Xie, Yihui, J. J. Allaire, and Garrett Grolemund. 2018. R Markdown: The Definitive Guide. Boca Raton, Florida: Chapman; Hall/CRC. https://bookdown.org/yihui/rmarkdown.
Youngflesh, Casey. 2018. “MCMCvis: Tools to Visualize, Manipulate, and Summarize MCMC Output.” Journal of Open Source Software 3 (24): 640. https://doi.org/10.21105/joss.00640.