Spatial, single-species occupancy model

Dealing with spatial autocorrelation with the spOccupancy package

The spOccupancy package accommodates spatial autocorrelation efficiently, in a workflow carried out entirely in R (no separate Bayesian programming language necessary)
R
occupancy
spOccupancy
spatial
Authors
Affiliations

Diego J. Lizcano

José F. González-Maya

Published

June 1, 2025

Should I use a spatial model?

In principle, yes! Especially if you are using geographical covariates. Imperfect detection and spatial autocorrelation are two important issues to deal with in ecological sampling.

What is spatial autocorrelation?

Things closer together in space tend to be more similar than things farther apart.
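As a quick illustration (with invented coordinates and values, not the Katios data), Moran's I quantifies this tendency: it is clearly positive when nearby sites hold similar values.

```r
# Toy illustration of spatial autocorrelation (made-up sites, not the Katios data):
# Moran's I is positive when nearby sites have similar values.
xy <- cbind(x = c(0, 1, 0, 1, 5, 6, 5, 6),
            y = c(0, 0, 1, 1, 5, 5, 6, 6))
z  <- c(10, 11, 10, 12, 30, 29, 31, 30)   # two spatially clustered groups

w <- 1 / as.matrix(dist(xy))   # inverse-distance weights
diag(w) <- 0                   # no self-weight

n  <- length(z)
zc <- z - mean(z)
moran_I <- (n / sum(w)) * sum(w * outer(zc, zc)) / sum(zc^2)
moran_I   # clearly positive: nearby sites resemble each other
```

Here the two clusters of sites sit far apart and hold similar values within each cluster, so the weighted cross-products are dominated by same-sign pairs and the index comes out well above zero.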

What leads to spatial autocorrelation in species distributions?

  • Environmental drivers
  • Biotic factors (e.g., dispersal, conspecific attraction).

In the past, the way to incorporate spatial autocorrelation into an occupancy model was to code it by hand in BUGS or JAGS.

This code was adapted from: https://github.com/doserjef/acoustic-spOccupancy-22/blob/main/code/single-species-example.R

Load packages

First we load some packages

Code

library(grateful) # Facilitate Citation of R Packages
library(patchwork) # The Composer of Plots
library(readxl) # Read Excel Files
library(sf) # Simple Features for R
library(mapview) # Interactive Viewing of Spatial Data in R
library(terra) # Spatial Data Analysis
library(elevatr) # Access Elevation Data from Various APIs
library(readr) # read csv files 

library(camtrapR) # Camera Trap Data Management and Preparation of Occupancy and Spatial Capture-Recapture Analyses 
library(spOccupancy)
library(MCMCvis) # Markov chains viewer
library(bayesplot)
library(DT) # nice tables

library(kableExtra) # Construct Complex Table with 'kable' and Pipe Syntax
library(tidyverse) # Load the 'Tidyverse'

Load data

The data set was downloaded from the Initiative Monitoreo Katios project in Wildlife Insights, where we sampled with an array of 30 cameras over two consecutive years in Katios National Park, Colombia.

Initiative Monitoreo Katios
Code

path <- "C:/CodigoR/CameraTrapCesar/data/katios/"
cameras <- read_csv(paste(path, "cameras.csv", sep=""))
deployment <- read_csv(paste(path, "deployments.csv", sep=""))
images <- read_csv(paste(path, "images.csv", sep=""))
project <- read_csv(paste(path, "projects.csv", sep=""))

# join_by(project_id, camera_id, camera_name)
cam_deploy <- cameras |> left_join(deployment) |> 
  dplyr::mutate(year=lubridate::year(start_date)) #|> filter(year== 2023)
cam_deploy_image <- images  |> 
  left_join(cam_deploy) |> 
  mutate(scientificName= paste(genus, species, sep = " ")) |> 
   mutate(deployment_id_cam=paste(deployment_id, camera_id, sep = "-")) #|> 
  # filter(year==2022)

Convert to sf and view the map

Code

datos_distinct <- cam_deploy_image |> distinct(longitude, latitude, deployment_id, samp_year) |> as.data.frame()

# Fix NA camera 16
datos_distinct[16,] <- c( -77.2787, 7.73855, 
                      "CT-K1-31-124", 2021)

projlatlon <- "+proj=longlat +datum=WGS84 +no_defs +ellps=WGS84 +towgs84=0,0,0"

datos_sf <-  st_as_sf(x = datos_distinct,
                         coords = c("longitude", 
                                    "latitude"),
                         crs = projlatlon) 

mapview(st_jitter(datos_sf, 0.00075) , zcol="samp_year")

Notice we used the function st_jitter() because each year's points sit on top of the previous year's.

Extract site covariates

Using the coordinates of the sf object (datos_sf), we overlay the cameras on the covariate rasters and extract the covariate values with the function terra::extract().

In this case we used percent tree cover, road density, land cover type and cattle density as covariates:

Code
#load rasters
per_tree_cov <- rast("C:/CodigoR/WCS-CameraTrap/raster/latlon/Veg_Cont_Fields_Yearly_250m_v61/Perc_TreeCov/MOD44B_Perc_TreeCov_2021_065.tif")
road_den <- rast("C:/CodigoR/WCS-CameraTrap/raster/latlon/RoadDensity/grip4_total_dens_m_km2.asc")
# elev <- rast("D:/CORREGIDAS/elevation_z7.tif")
landcov <- rast("C:/CodigoR/WCS-CameraTrap/raster/latlon/LandCover_Type_Yearly_500m_v61/LC1/MCD12Q1_LC1_2021_001.tif") 
cattle <- rast("C:/CodigoR/WCS-CameraTrap/raster/latlon/Global cattle distribution/5_Ct_2010_Da.tif")
#river <- st_read("F:/WCS-CameraTrap/shp/DensidadRios/MCD12Q1_LC1_2001_001_RECLASS_MASK_GRID_3600m_DensDrenSouthAmer.shp")

# get elevation map
# elevation_detailed <- rast(get_elev_raster(sites, z = 10, clip="bbox", neg_to_na=TRUE))
# elevation_detailed <- get_elev_point (datos_sf, src="aws", overwrite=TRUE)


# extract covs using points and add to sites
# covs <- cbind(sites, terra::extract(SiteCovsRast, sites))
per_tre <- terra::extract(per_tree_cov, datos_sf)
roads <- terra::extract(road_den, datos_sf)
# eleva <- terra::extract(elevation_detailed, sites)
land_cov <- terra::extract(landcov, datos_sf)
cattle_den <-  terra::extract(cattle, datos_sf)

#### drop geometry 
sites <- datos_sf %>%
  mutate(lon = st_coordinates(.)[,1],  # column 1 of st_coordinates() is X (longitude)
         lat = st_coordinates(.)[,2]) %>%  # column 2 is Y (latitude)
  st_drop_geometry() |> as.data.frame()

# convert land cover codes to factor
sites$land_cover <-  factor(land_cov$MCD12Q1_LC1_2021_001)
# sites$elevation <-  eleva$file3be898018c3
sites$per_tree_cov <- per_tre$MOD44B_Perc_TreeCov_2021_065 
# fix 200 issue (200 is a fill value, not a percentage)
ind <- which(sites$per_tree_cov== 200)
sites$per_tree_cov[ind] <- 0

# sites$elevation <- elevation_detailed$elevation
sites$roads <- roads$grip4_total_dens_m_km2
sites$cattle <- cattle_den[,2]


# write.csv(sites, "C:/CodigoR/CameraTrapCesar/data/katios/stacked/site_covs.csv")

First year 2021

Here we use the function detectionHistory() from the package camtrapR to generate species detection histories that can later be used in occupancy analyses (e.g., with the packages unmarked and ubms). detectionHistory() generates detection histories in different formats, with adjustable occasion length, occasion start time and effort covariates. Notice we first need to get the camera operation dates using the function cameraOperation().

Code

# filter first year and keep unique rows

CToperation_2021  <- cam_deploy_image |> #multi-season data
  filter(samp_year==2021) |> 
  group_by(deployment_id) |> 
  mutate(minStart=min(start_date), maxEnd=max(end_date)) |> 
  distinct(longitude, latitude, minStart, maxEnd, samp_year) |> 
  ungroup() |> as.data.frame()


# Fix NA camera 16
CToperation_2021[16,] <- c("CT-K1-31-124", -77.2787,    7.73855, 
                      "2021-10-10", "2021-12-31", 2021)

# make numeric sampling year
CToperation_2021$samp_year <- as.numeric(CToperation_2021$samp_year)

# camera operation matrix for _2021
# multi-season data. Season1
camop_2021 <- cameraOperation(CTtable= CToperation_2021, # operation table
                         stationCol= "deployment_id", # column defining the station
                         setupCol= "minStart", # camera setup date
                         retrievalCol= "maxEnd", # camera retrieval date
                         sessionCol = "samp_year", # multi-season column
                         #hasProblems= T, # camera malfunctions
                         dateFormat= "%Y-%m-%d") # date format

# Generate detection histories ---------------------------------------
## remove problem species
# ind <- which(datos_PCF$Species=="Marmosa sp.")
# datos_PCF <- datos_PCF[-ind,]

# filter year 1
datay_2021 <- cam_deploy_image |> filter(samp_year == 2021)

DetHist_list_2021 <- lapply(unique(datay_2021$scientificName), FUN = function(x) {
  detectionHistory(
    recordTable          = datay_2021,  # record table
    camOp                = camop_2021,  # camera operation matrix
    stationCol           = "deployment_id",
    speciesCol           = "scientificName",
    recordDateTimeCol    = "timestamp",
    recordDateTimeFormat = "%Y-%m-%d %H:%M:%S",
    species              = x,    # the function substitutes x with each species
    occasionLength       = 15,   # collapse the histories into 15-day occasions
    day1                 = "station", # start on each station's setup date
    datesAsOccasionNames = FALSE,
    includeEffort        = TRUE,
    scaleEffort          = FALSE,
    unmarkedMultFrameInput = TRUE,
    timeZone             = "America/Bogota"
    )
  }
)

# names
names(DetHist_list_2021) <- unique(datay_2021$scientificName)

# Finally we create a new list holding only the detection histories
ylist_2021 <- lapply(DetHist_list_2021, FUN = function(x) x$detection_history)
# and the sampling effort
effortlist_2021 <- lapply(DetHist_list_2021, FUN = function(x) x$effort)

### Tapir, Jaguar
which(names(ylist_2021) =="Tapirus bairdii")
#> integer(0)
which(names(ylist_2021) =="Panthera onca") 
#> [1] 5

Fitting a spatial model for the Jaguar

Load the data

Code
jaguar <- read.csv("C:/CodigoR/CameraTrapCesar/data/katios/stacked/y_jaguar_stacked.csv") |> filter(year==2021) 

# remove row 15, which has NAs
jaguar <- jaguar[-15,]


projlatlon <- "+proj=longlat +datum=WGS84 +no_defs +ellps=WGS84 +towgs84=0,0,0"

datos_jaguar_sf <-  st_as_sf(x = jaguar,
                         coords = c("lon", 
                                    "lat"),
                         crs = projlatlon) 

Look at the data

Code

datatable(head(jaguar))

Notice we collapsed the events to 15 days in the 2021 sampling season, and to 25 days in the 2022 sampling season, to end with 6 repeated observations in the matrix. In the matrix, o1 to o6 are observations and e1 to e6 are sampling effort (detection covariates); land_cover, per_tree_cov, roads and cattle are site covariates (occupancy covariates).
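What this collapsing does can be sketched with a tiny helper (a hypothetical stand-in for illustration, not camtrapR internals): daily 0/1 records are folded into fixed-length occasions, and an occasion scores 1 if the species was detected on any of its days.

```r
# Minimal sketch of occasion collapsing (hypothetical helper, not camtrapR code):
# fold a daily 0/1 detection vector into occasions of `len` days.
collapse_occasions <- function(daily, len = 15) {
  occ <- ceiling(seq_along(daily) / len)                   # occasion index per day
  as.integer(tapply(daily, occ, function(d) any(d == 1)))  # 1 if any detection
}

daily <- integer(90)          # a 90-day season
daily[c(3, 40)] <- 1          # detections on day 3 and day 40
collapse_occasions(daily, 15) # 6 occasions: 1 0 1 0 0 0
```

With a 90-day season and 15-day occasions this yields the 6 repeated observations per site seen in the matrix above.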

Load and prepare data

Code

# 1. Data prep ------------------------------------------------------------

# transform to a projected CRS (spatial models need planar coordinates)
datos_sf_2021_utm <- datos_jaguar_sf  |> # filter(samp_year==2021) |>
  st_transform(crs =21891) #|> left_join(jaguar)

jaguar_covs <- jaguar[,c(8,9,16:19)]
#jaguar_covs$year <- as.factor(jaguar_covs$year)


jaguar.data <- list()
jaguar.data$coords <- st_coordinates(datos_sf_2021_utm)
jaguar.data$y <- jaguar[,2:7]
jaguar.data$occ.covs <- jaguar_covs
jaguar.data$det.covs <- list(effort=jaguar[10:15])

Notice the structure of the data

It is a list!

Code
glimpse(jaguar.data)
#> List of 4
#>  $ coords  : num [1:31, 1:2] 2440383 2442762 2442674 2443361 2439344 ...
#>   ..- attr(*, "dimnames")=List of 2
#>   .. ..$ : NULL
#>   .. ..$ : chr [1:2] "X" "Y"
#>  $ y       :'data.frame':    31 obs. of  6 variables:
#>   ..$ o1: int [1:31] 1 1 0 0 0 0 0 1 0 0 ...
#>   ..$ o2: int [1:31] 1 0 0 0 0 0 0 0 0 0 ...
#>   ..$ o3: int [1:31] 0 1 0 0 0 0 0 0 0 0 ...
#>   ..$ o4: int [1:31] 0 0 0 0 0 0 0 0 0 0 ...
#>   ..$ o5: int [1:31] 0 0 0 0 0 0 0 0 0 0 ...
#>   ..$ o6: int [1:31] 0 0 0 0 0 0 0 0 0 0 ...
#>  $ occ.covs:'data.frame':    31 obs. of  6 variables:
#>   ..$ site        : int [1:31] 1 2 3 4 5 6 7 8 9 10 ...
#>   ..$ year        : int [1:31] 2021 2021 2021 2021 2021 2021 2021 2021 2021 2021 ...
#>   ..$ land_cover  : int [1:31] 2 2 2 2 2 2 2 2 2 2 ...
#>   ..$ per_tree_cov: int [1:31] 65 70 76 75 73 75 65 71 71 74 ...
#>   ..$ roads       : int [1:31] 152 152 152 152 152 152 152 152 152 152 ...
#>   ..$ cattle      : num [1:31] 0 0 0 0 0 0 0 0 0 0 ...
#>  $ det.covs:List of 1
#>   ..$ effort:'data.frame':   31 obs. of  6 variables:
#>   .. ..$ e1: num [1:31] 14.5 14.5 14.5 14.5 14.5 14.5 14.5 14.5 14.5 14.5 ...
#>   .. ..$ e2: int [1:31] 15 15 15 15 15 15 15 15 15 15 ...
#>   .. ..$ e3: int [1:31] 15 15 15 15 15 15 15 15 15 15 ...
#>   .. ..$ e4: int [1:31] 15 15 15 15 15 15 15 15 15 15 ...
#>   .. ..$ e5: int [1:31] 15 15 15 15 15 15 15 15 15 15 ...
#>   .. ..$ e6: int [1:31] 5 8 3 7 3 7 3 2 7 7 ...

Note the element names coords, y, occ.covs and det.covs, which are the names spOccupancy expects.
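Before fitting, it can help to confirm the list really has that structure; a minimal sanity check (a hypothetical helper, assuming the detection covariates are stored as data frames, as here) could be:

```r
# Hypothetical sanity check for a single-season spOccupancy data list:
# the expected elements exist and their site dimensions agree.
check_spocc_data <- function(d) {
  stopifnot(all(c("coords", "y", "occ.covs", "det.covs") %in% names(d)))
  n_sites <- nrow(d$y)
  stopifnot(nrow(d$coords)   == n_sites,
            nrow(d$occ.covs) == n_sites,
            all(vapply(d$det.covs, nrow, integer(1)) == n_sites))
  invisible(TRUE)
}

# e.g. check_spocc_data(jaguar.data) before calling PGOcc() or spPGOcc()
```

Dimension mismatches between y, coords and the covariates are a common source of cryptic errors at fitting time, so failing early with a clear message is cheap insurance.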

Fit models

Code
# 2. Model fitting --------------------------------------------------------
# Fit a non-spatial, single-species occupancy model. 
out <- PGOcc(occ.formula = ~ scale(per_tree_cov) + scale(roads) + 
                               scale(cattle), 
             det.formula = ~ scale(effort), 
               data = jaguar.data, 
               n.samples = 50000, 
               n.thin = 2, 
               n.burn = 5000, 
               n.chains = 3,
               n.report = 500)
#> ----------------------------------------
#>  Preparing to run the model
#> ----------------------------------------
#> ----------------------------------------
#>  Model description
#> ----------------------------------------
#> Occupancy model with Polya-Gamma latent
#> variable fit with 31 sites.
#> 
#> Samples per Chain: 50000 
#> Burn-in: 5000 
#> Thinning Rate: 2 
#> Number of Chains: 3 
#> Total Posterior Samples: 67500 
#> 
#> Source compiled with OpenMP support and model fit using 1 thread(s).
#> 
#> ----------------------------------------
#>  Chain 1
#> ----------------------------------------
#> Sampling ... 
#> Sampled: 500 of 50000, 1.00%
#> -------------------------------------------------
#> ...
#> Sampled: 50000 of 50000, 100.00%
#> ----------------------------------------
#>  Chain 2
#> ----------------------------------------
#> Sampling ... 
#> Sampled: 500 of 50000, 1.00%
#> -------------------------------------------------
#> ...
#> Sampled: 50000 of 50000, 100.00%
#> ----------------------------------------
#>  Chain 3
#> ----------------------------------------
#> Sampling ... 
#> Sampled: 500 of 50000, 1.00%
#> -------------------------------------------------
#> ...
#> -------------------------------------------------
#> Sampled: 20500 of 50000, 41.00%
#> -------------------------------------------------
#> Sampled: 21000 of 50000, 42.00%
#> -------------------------------------------------
#> Sampled: 21500 of 50000, 43.00%
#> -------------------------------------------------
#> Sampled: 22000 of 50000, 44.00%
#> -------------------------------------------------
#> Sampled: 22500 of 50000, 45.00%
#> -------------------------------------------------
#> Sampled: 23000 of 50000, 46.00%
#> -------------------------------------------------
#> Sampled: 23500 of 50000, 47.00%
#> -------------------------------------------------
#> Sampled: 24000 of 50000, 48.00%
#> -------------------------------------------------
#> Sampled: 24500 of 50000, 49.00%
#> -------------------------------------------------
#> Sampled: 25000 of 50000, 50.00%
#> -------------------------------------------------
#> Sampled: 25500 of 50000, 51.00%
#> -------------------------------------------------
#> Sampled: 26000 of 50000, 52.00%
#> -------------------------------------------------
#> Sampled: 26500 of 50000, 53.00%
#> -------------------------------------------------
#> Sampled: 27000 of 50000, 54.00%
#> -------------------------------------------------
#> Sampled: 27500 of 50000, 55.00%
#> -------------------------------------------------
#> Sampled: 28000 of 50000, 56.00%
#> -------------------------------------------------
#> Sampled: 28500 of 50000, 57.00%
#> -------------------------------------------------
#> Sampled: 29000 of 50000, 58.00%
#> -------------------------------------------------
#> Sampled: 29500 of 50000, 59.00%
#> -------------------------------------------------
#> Sampled: 30000 of 50000, 60.00%
#> -------------------------------------------------
#> Sampled: 30500 of 50000, 61.00%
#> -------------------------------------------------
#> Sampled: 31000 of 50000, 62.00%
#> -------------------------------------------------
#> Sampled: 31500 of 50000, 63.00%
#> -------------------------------------------------
#> Sampled: 32000 of 50000, 64.00%
#> -------------------------------------------------
#> Sampled: 32500 of 50000, 65.00%
#> -------------------------------------------------
#> Sampled: 33000 of 50000, 66.00%
#> -------------------------------------------------
#> Sampled: 33500 of 50000, 67.00%
#> -------------------------------------------------
#> Sampled: 34000 of 50000, 68.00%
#> -------------------------------------------------
#> Sampled: 34500 of 50000, 69.00%
#> -------------------------------------------------
#> Sampled: 35000 of 50000, 70.00%
#> -------------------------------------------------
#> Sampled: 35500 of 50000, 71.00%
#> -------------------------------------------------
#> Sampled: 36000 of 50000, 72.00%
#> -------------------------------------------------
#> Sampled: 36500 of 50000, 73.00%
#> -------------------------------------------------
#> Sampled: 37000 of 50000, 74.00%
#> -------------------------------------------------
#> Sampled: 37500 of 50000, 75.00%
#> -------------------------------------------------
#> Sampled: 38000 of 50000, 76.00%
#> -------------------------------------------------
#> Sampled: 38500 of 50000, 77.00%
#> -------------------------------------------------
#> Sampled: 39000 of 50000, 78.00%
#> -------------------------------------------------
#> Sampled: 39500 of 50000, 79.00%
#> -------------------------------------------------
#> Sampled: 40000 of 50000, 80.00%
#> -------------------------------------------------
#> Sampled: 40500 of 50000, 81.00%
#> -------------------------------------------------
#> Sampled: 41000 of 50000, 82.00%
#> -------------------------------------------------
#> Sampled: 41500 of 50000, 83.00%
#> -------------------------------------------------
#> Sampled: 42000 of 50000, 84.00%
#> -------------------------------------------------
#> Sampled: 42500 of 50000, 85.00%
#> -------------------------------------------------
#> Sampled: 43000 of 50000, 86.00%
#> -------------------------------------------------
#> Sampled: 43500 of 50000, 87.00%
#> -------------------------------------------------
#> Sampled: 44000 of 50000, 88.00%
#> -------------------------------------------------
#> Sampled: 44500 of 50000, 89.00%
#> -------------------------------------------------
#> Sampled: 45000 of 50000, 90.00%
#> -------------------------------------------------
#> Sampled: 45500 of 50000, 91.00%
#> -------------------------------------------------
#> Sampled: 46000 of 50000, 92.00%
#> -------------------------------------------------
#> Sampled: 46500 of 50000, 93.00%
#> -------------------------------------------------
#> Sampled: 47000 of 50000, 94.00%
#> -------------------------------------------------
#> Sampled: 47500 of 50000, 95.00%
#> -------------------------------------------------
#> Sampled: 48000 of 50000, 96.00%
#> -------------------------------------------------
#> Sampled: 48500 of 50000, 97.00%
#> -------------------------------------------------
#> Sampled: 49000 of 50000, 98.00%
#> -------------------------------------------------
#> Sampled: 49500 of 50000, 99.00%
#> -------------------------------------------------
#> Sampled: 50000 of 50000, 100.00%

summary(out)
#> 
#> Call:
#> PGOcc(occ.formula = ~scale(per_tree_cov) + scale(roads) + scale(cattle), 
#>     det.formula = ~scale(effort), data = jaguar.data, n.samples = 50000, 
#>     n.report = 500, n.burn = 5000, n.thin = 2, n.chains = 3)
#> 
#> Samples per Chain: 50000
#> Burn-in: 5000
#> Thinning Rate: 2
#> Number of Chains: 3
#> Total Posterior Samples: 67500
#> Run Time (min): 0.2287
#> 
#> Occurrence (logit scale): 
#>                        Mean     SD    2.5%     50%  97.5%   Rhat   ESS
#> (Intercept)         -0.2486 1.2593 -2.2266 -0.4442 2.6174 1.0061  4874
#> scale(per_tree_cov) -0.8893 0.9672 -3.1146 -0.7718 0.7698 1.0024 11068
#> scale(roads)         0.1207 0.8498 -1.5825  0.1174 1.8627 1.0013 14604
#> scale(cattle)       -1.0751 1.2822 -3.6240 -1.0530 1.7342 1.0008 10254
#> 
#> Detection (logit scale): 
#>                  Mean     SD    2.5%     50%   97.5%   Rhat   ESS
#> (Intercept)   -2.2140 0.6158 -3.3989 -2.2259 -1.0095 1.0067  7083
#> scale(effort)  0.9322 0.6893 -0.1305  0.8327  2.5472 1.0007 15472
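The detection intercept is reported on the logit scale; back-transforming it gives the per-occasion detection probability at mean effort. A quick interpretive check, using the posterior mean of the intercept from `summary(out)` above:

```r
# plogis() is the inverse-logit: it maps a logit-scale value back to a
# probability. -2.2140 is the posterior mean of the detection intercept.
p_mean_effort <- plogis(-2.2140)
round(p_mean_effort, 2)  # ~0.10: detected on roughly 10% of occasions at mean effort
```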
# Fit a spatial, single-species occupancy model using an NNGP.
# Note: spPGOcc draws n.batch * batch.length samples per chain
# (400 * 15 = 6000 here, as reported below), not n.samples.
out.sp <- spPGOcc(occ.formula = ~ scale(per_tree_cov) + scale(roads) +
                    scale(cattle),
                  det.formula = ~ scale(effort),
                  data = jaguar.data,
                  n.neighbors = 5,
                  n.batch = 400,
                  batch.length = 15,
                  n.samples = 55000,
                  n.thin = 2,
                  n.burn = 5000,
                  n.chains = 3,
                  n.report = 500)
#> ----------------------------------------
#>  Preparing to run the model
#> ----------------------------------------
#> ----------------------------------------
#>  Building the neighbor list
#> ----------------------------------------
#> ----------------------------------------
#> Building the neighbors of neighbors list
#> ----------------------------------------
#> ----------------------------------------
#>  Model description
#> ----------------------------------------
#> NNGP Spatial Occupancy model with Polya-Gamma latent
#> variable fit with 31 sites.
#> 
#> Samples per chain: 6000 (400 batches of length 15)
#> Burn-in: 5000 
#> Thinning Rate: 2 
#> Number of Chains: 3 
#> Total Posterior Samples: 1500 
#> 
#> Using the exponential spatial correlation model.
#> 
#> Using 5 nearest neighbors.
#> 
#> Source compiled with OpenMP support and model fit using 1 thread(s).
#> 
#> Adaptive Metropolis with target acceptance rate: 43.0
#> ----------------------------------------
#>  Chain 1
#> ----------------------------------------
#> Sampling ... 
#> Batch: 400 of 400, 100.00%
#> ----------------------------------------
#>  Chain 2
#> ----------------------------------------
#> Sampling ... 
#> Batch: 400 of 400, 100.00%
#> ----------------------------------------
#>  Chain 3
#> ----------------------------------------
#> Sampling ... 
#> Batch: 400 of 400, 100.00%

summary(out.sp)
#> 
#> Call:
#> spPGOcc(occ.formula = ~scale(per_tree_cov) + scale(roads) + scale(cattle), 
#>     det.formula = ~scale(effort), data = jaguar.data, n.neighbors = 5, 
#>     n.batch = 400, batch.length = 15, n.report = 500, n.burn = 5000, 
#>     n.thin = 2, n.chains = 3, n.samples = 55000)
#> 
#> Samples per Chain: 6000
#> Burn-in: 5000
#> Thinning Rate: 2
#> Number of Chains: 3
#> Total Posterior Samples: 1500
#> Run Time (min): 0.0605
#> 
#> Occurrence (logit scale): 
#>                        Mean     SD    2.5%     50%  97.5%   Rhat ESS
#> (Intercept)         -0.2767 1.2562 -2.4643 -0.3783 2.5783 1.0448 122
#> scale(per_tree_cov) -0.8451 0.9919 -2.9218 -0.8125 1.0725 1.0288 291
#> scale(roads)         0.1170 0.9297 -1.6305  0.0768 2.1198 1.0009 317
#> scale(cattle)       -1.1009 1.3345 -3.8940 -1.0727 1.7624 1.0059 239
#> 
#> Detection (logit scale): 
#>                  Mean     SD    2.5%     50%   97.5%   Rhat ESS
#> (Intercept)   -2.2583 0.5751 -3.3409 -2.2700 -1.0431 1.0413 169
#> scale(effort)  0.9603 0.7159 -0.1703  0.8562  2.7045 1.0276 469
#> 
#> Spatial Covariance: 
#>            Mean     SD   2.5%    50%  97.5%   Rhat ESS
#> sigma.sq 1.0421 1.6358 0.1743 0.5774 5.7133 1.3397  93
#> phi      0.0205 0.0111 0.0007 0.0213 0.0386 1.0001 287
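One way to interpret the spatial covariance parameters: under the exponential correlation model, correlation between two sites decays as exp(-phi × distance), so it falls to about 0.05 at a distance of 3/phi (the "effective range"). A small sketch using the posterior median of phi reported above; the result is in the units of the site coordinates:

```r
# Effective spatial range under the exponential model: the distance d
# where exp(-phi * d) = 0.05, i.e. d = -log(0.05) / phi, approximately 3 / phi.
phi_median <- 0.0213   # posterior median of phi from summary(out.sp)
eff_range  <- 3 / phi_median
eff_range              # ~141, in the units of the site coordinates
```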

Model validation

Code
# 3. Model validation -----------------------------------------------------
# Perform a posterior predictive check to assess model fit. 
ppc.out <- ppcOcc(out, fit.stat = 'freeman-tukey', group = 1)
ppc.out.sp <- ppcOcc(out.sp, fit.stat = 'freeman-tukey', group = 1)
# Calculate a Bayesian p-value as a simple measure of Goodness of Fit.
# Bayesian p-values between 0.1 and 0.9 indicate adequate model fit. 
summary(ppc.out)
#> 
#> Call:
#> ppcOcc(object = out, fit.stat = "freeman-tukey", group = 1)
#> 
#> Samples per Chain: 50000
#> Burn-in: 5000
#> Thinning Rate: 2
#> Number of Chains: 3
#> Total Posterior Samples: 67500
#> 
#> Bayesian p-value:  0.2663 
#> Fit statistic:  freeman-tukey
summary(ppc.out.sp)
#> 
#> Call:
#> ppcOcc(object = out.sp, fit.stat = "freeman-tukey", group = 1)
#> 
#> Samples per Chain: 6000
#> Burn-in: 5000
#> Thinning Rate: 2
#> Number of Chains: 3
#> Total Posterior Samples: 1500
#> 
#> Bayesian p-value:  0.252 
#> Fit statistic:  freeman-tukey

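The Bayesian p-value reported by summary() is the proportion of posterior draws in which the fit statistic computed on replicated data exceeds the one computed on the observed data. Assuming the ppcOcc object stores these statistics in fit.y and fit.y.rep (as documented in spOccupancy), it can be recomputed by hand:

```r
# Recompute the Bayesian p-value: proportion of draws where the
# Freeman-Tukey statistic of replicated data exceeds that of observed data.
mean(ppc.out.sp$fit.y.rep > ppc.out.sp$fit.y)  # ~0.25, matching summary(ppc.out.sp)
```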

Model comparison

Code
# 4. Model comparison -----------------------------------------------------
# Compute Widely Applicable Information Criterion (WAIC)
# Lower values indicate better model fit. 
waicOcc(out)
#>       elpd         pD       WAIC 
#> -32.522293   3.594317  72.233219
waicOcc(out.sp)
#>       elpd         pD       WAIC 
#> -31.456669   4.399425  71.712188

The best model is out.sp, the spatial model: it has the lower WAIC (71.71 vs 72.23) and accounts for spatial autocorrelation. The WAIC difference is small, but the spatial model is preferred here on both grounds.

Posterior summaries

Code
# 5. Posterior summaries --------------------------------------------------
# Concise summary of main parameter estimates
summary(out.sp)
#> 
#> Call:
#> spPGOcc(occ.formula = ~scale(per_tree_cov) + scale(roads) + scale(cattle), 
#>     det.formula = ~scale(effort), data = jaguar.data, n.neighbors = 5, 
#>     n.batch = 400, batch.length = 15, n.report = 500, n.burn = 5000, 
#>     n.thin = 2, n.chains = 3, n.samples = 55000)
#> 
#> Samples per Chain: 6000
#> Burn-in: 5000
#> Thinning Rate: 2
#> Number of Chains: 3
#> Total Posterior Samples: 1500
#> Run Time (min): 0.0605
#> 
#> Occurrence (logit scale): 
#>                        Mean     SD    2.5%     50%  97.5%   Rhat ESS
#> (Intercept)         -0.2767 1.2562 -2.4643 -0.3783 2.5783 1.0448 122
#> scale(per_tree_cov) -0.8451 0.9919 -2.9218 -0.8125 1.0725 1.0288 291
#> scale(roads)         0.1170 0.9297 -1.6305  0.0768 2.1198 1.0009 317
#> scale(cattle)       -1.1009 1.3345 -3.8940 -1.0727 1.7624 1.0059 239
#> 
#> Detection (logit scale): 
#>                  Mean     SD    2.5%     50%   97.5%   Rhat ESS
#> (Intercept)   -2.2583 0.5751 -3.3409 -2.2700 -1.0431 1.0413 169
#> scale(effort)  0.9603 0.7159 -0.1703  0.8562  2.7045 1.0276 469
#> 
#> Spatial Covariance: 
#>            Mean     SD   2.5%    50%  97.5%   Rhat ESS
#> sigma.sq 1.0421 1.6358 0.1743 0.5774 5.7133 1.3397  93
#> phi      0.0205 0.0111 0.0007 0.0213 0.0386 1.0001 287
# Take a look at objects in resulting object
names(out.sp)
#>  [1] "rhat"           "beta.samples"   "alpha.samples"  "theta.samples" 
#>  [5] "coords"         "z.samples"      "X"              "X.re"          
#>  [9] "w.samples"      "psi.samples"    "like.samples"   "X.p"           
#> [13] "X.p.re"         "y"              "ESS"            "call"          
#> [17] "n.samples"      "n.neighbors"    "cov.model.indx" "type"          
#> [21] "n.post"         "n.thin"         "n.burn"         "n.chains"      
#> [25] "pRE"            "psiRE"          "run.time"
str(out.sp$beta.samples)
#>  'mcmc' num [1:1500, 1:4] -0.413 0.869 1.656 1.865 -0.133 ...
#>  - attr(*, "mcpar")= num [1:3] 1 1500 1
#>  - attr(*, "dimnames")=List of 2
#>   ..$ : NULL
#>   ..$ : chr [1:4] "(Intercept)" "scale(per_tree_cov)" "scale(roads)" "scale(cattle)"
# Probability the effect of tree cover on occupancy is positive.
# Note: index the column by name; column 1 of beta.samples is the intercept.
mean(out.sp$beta.samples[, "scale(per_tree_cov)"] > 0)
# Create simple plot summaries using MCMCvis package.
# Occupancy covariate effects ---------
MCMCplot(out.sp$beta.samples, ref_ovl = TRUE, ci = c(50, 95))

Code
# Detection covariate effects --------- 
MCMCplot(out.sp$alpha.samples, ref_ovl = TRUE, ci = c(50, 95))

Look at the traceplots:

Code
MCMCtrace(out.sp$beta.samples,
          params = c("scale(per_tree_cov)"),
          type = 'trace', pdf = FALSE,
          Rhat = TRUE, n.eff = TRUE)

Code

### density plot
MCMCtrace(out.sp$beta.samples,
          params = c("scale(per_tree_cov)"),
          ISB = FALSE, exact = TRUE, post_zm = TRUE,
          type = 'density', pdf = FALSE,
          Rhat = TRUE, n.eff = TRUE, ind = TRUE)

Code


MCMCtrace(out.sp$theta.samples, type = 'both', pdf = FALSE,
          Rhat = FALSE, n.eff = TRUE)

Look at the Rhat values. Note that sigma.sq has an Rhat of about 1.34, well above the usual 1.1 threshold, so the spatial variance has not fully converged; running more batches (a longer chain) would be advisable.

Code
print(out.sp$rhat)
#> $beta
#> [1] 1.044793 1.028791 1.000905 1.005892
#> 
#> $alpha
#> [1] 1.041332 1.027600
#> 
#> $theta
#> [1] 1.339711 1.000113
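A quick way to scan these is to flatten the list and flag anything above the common 1.1 rule-of-thumb threshold; with the values printed above, only the spatial variance (sigma.sq, the first element of theta) is flagged:

```r
# Flatten the named list of Rhat values and keep those above 1.1,
# the usual rule-of-thumb threshold for declaring convergence.
rhats <- unlist(out.sp$rhat)
rhats[rhats > 1.1]  # only theta1 (sigma.sq) exceeds the threshold
```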

Prediction

across per_tree_cov

Code
# 6. Prediction -----------------------------------------------------------
# Predict occupancy along a gradient of forest cover.  
# Create a set of values across the range of observed forest values
forest.pred.vals <- seq(min(jaguar.data$occ.covs$per_tree_cov), 
                              max(jaguar.data$occ.covs$per_tree_cov), 
                              length.out = 100)

# Scale predicted values by mean and standard deviation used to fit the model
forest.pred.vals.scale <- (forest.pred.vals - mean(jaguar.data$occ.covs$per_tree_cov)) / sd(jaguar.data$occ.covs$per_tree_cov)

# Predict occupancy across forest values at mean values of all other variables.
# Note: we predict from the non-spatial model (out) here; predicting from the
# spatial model (out.sp) additionally requires coordinates of the new locations.
pred.df <- as.matrix(data.frame(intercept = 1, forest = forest.pred.vals.scale, roads = 0, cattle = 0))

out.pred <- predict(out, pred.df)
str(out.pred)
#> List of 3
#>  $ psi.0.samples: 'mcmc' num [1:67500, 1:100] 0.96 0.426 0.11 0.734 0.76 ...
#>   ..- attr(*, "mcpar")= num [1:3] 1 67500 1
#>  $ z.0.samples  : 'mcmc' int [1:67500, 1:100] 1 0 0 1 0 0 0 1 1 0 ...
#>   ..- attr(*, "mcpar")= num [1:3] 1 67500 1
#>  $ call         : language predict.PGOcc(object = out, X.0 = pred.df)
#>  - attr(*, "class")= chr "predict.PGOcc"
psi.0.quants <- apply(out.pred$psi.0.samples, 2, quantile, 
                          prob = c(0.025, 0.5, 0.975))
psi.plot.dat <- data.frame(psi.med = psi.0.quants[2, ], 
                                 psi.low = psi.0.quants[1, ], 
                                 psi.high = psi.0.quants[3, ], 
                           forest = forest.pred.vals)
ggplot(psi.plot.dat, aes(x = forest, y = psi.med)) + 
  geom_ribbon(aes(ymin = psi.low, ymax = psi.high), fill = 'grey70') +
  geom_line() + 
  theme_bw() + 
  scale_y_continuous(limits = c(0, 1)) + 
  labs(x = 'Forest (% tree cover)', y = 'Occupancy Probability') 

Package Citation

Code
pkgs <- cite_packages(output = "paragraph", out.dir = ".") #knitr::kable(pkgs)
pkgs

We used R version 4.4.2 (R Core Team 2024) and the following R packages: bayesplot v. 1.11.1 (Gabry et al. 2019; Gabry and Mahr 2024), camtrapR v. 2.3.0 (Niedballa et al. 2016), devtools v. 2.4.5 (Wickham et al. 2022), DT v. 0.33 (Xie, Cheng, and Tan 2024), elevatr v. 0.99.0 (Hollister et al. 2023), kableExtra v. 1.4.0 (Zhu 2024), mapview v. 2.11.2 (Appelhans et al. 2023), MCMCvis v. 0.16.3 (Youngflesh 2018), patchwork v. 1.3.0 (Pedersen 2024), quarto v. 1.4.4 (Allaire and Dervieux 2024), rmarkdown v. 2.29 (Xie, Allaire, and Grolemund 2018; Xie, Dervieux, and Riederer 2020; Allaire et al. 2024), sf v. 1.0.21 (Pebesma 2018; Pebesma and Bivand 2023), spOccupancy v. 0.8.0 (Doser et al. 2022, 2024; Doser, Finley, and Banerjee 2023), styler v. 1.10.3 (Müller and Walthert 2024), terra v. 1.8.21 (Hijmans 2025), tidyverse v. 2.0.0 (Wickham et al. 2019).

Session info

Session info
#> ─ Session info ───────────────────────────────────────────────────────────────────────────────────────────────────────
#>  setting  value
#>  version  R version 4.4.2 (2024-10-31 ucrt)
#>  os       Windows 10 x64 (build 19045)
#>  system   x86_64, mingw32
#>  ui       RTerm
#>  language (EN)
#>  collate  Spanish_Colombia.utf8
#>  ctype    Spanish_Colombia.utf8
#>  tz       America/Bogota
#>  date     2025-08-06
#>  pandoc   3.2 @ C:/Program Files/RStudio/resources/app/bin/quarto/bin/tools/ (via rmarkdown)
#> 
#> ─ Packages ───────────────────────────────────────────────────────────────────────────────────────────────────────────
#>  ! package           * version  date (UTC) lib source
#>    abind               1.4-8    2024-09-12 [1] CRAN (R 4.4.1)
#>  D archive             1.1.12   2025-03-20 [1] CRAN (R 4.4.3)
#>    base64enc           0.1-3    2015-07-28 [1] CRAN (R 4.4.0)
#>    bayesplot         * 1.11.1   2024-02-15 [1] CRAN (R 4.4.3)
#>    bit                 4.5.0.1  2024-12-03 [1] CRAN (R 4.4.2)
#>    bit64               4.5.2    2024-09-22 [1] CRAN (R 4.4.2)
#>    boot                1.3-31   2024-08-28 [2] CRAN (R 4.4.2)
#>    brew                1.0-10   2023-12-16 [1] CRAN (R 4.4.2)
#>    bslib               0.8.0    2024-07-29 [1] CRAN (R 4.4.2)
#>    cachem              1.1.0    2024-05-16 [1] CRAN (R 4.4.2)
#>    camtrapR          * 2.3.0    2024-02-26 [1] CRAN (R 4.4.2)
#>    cellranger          1.1.0    2016-07-27 [1] CRAN (R 4.4.2)
#>    class               7.3-22   2023-05-03 [2] CRAN (R 4.4.2)
#>    classInt            0.4-10   2023-09-05 [1] CRAN (R 4.4.2)
#>    cli                 3.6.3    2024-06-21 [1] CRAN (R 4.4.2)
#>    coda                0.19-4.1 2024-01-31 [1] CRAN (R 4.4.2)
#>    codetools           0.2-20   2024-03-31 [2] CRAN (R 4.4.2)
#>    colorspace          2.1-1    2024-07-26 [1] CRAN (R 4.4.2)
#>    crayon              1.5.3    2024-06-20 [1] CRAN (R 4.4.2)
#>    crosstalk           1.2.1    2023-11-23 [1] CRAN (R 4.4.2)
#>    data.table          1.16.4   2024-12-06 [1] CRAN (R 4.4.2)
#>    DBI                 1.2.3    2024-06-02 [1] CRAN (R 4.4.2)
#>    devtools            2.4.5    2022-10-11 [1] CRAN (R 4.4.2)
#>    digest              0.6.37   2024-08-19 [1] CRAN (R 4.4.2)
#>    doParallel          1.0.17   2022-02-07 [1] CRAN (R 4.4.2)
#>    dplyr             * 1.1.4    2023-11-17 [1] CRAN (R 4.4.2)
#>    DT                * 0.33     2024-04-04 [1] CRAN (R 4.4.2)
#>    e1071               1.7-16   2024-09-16 [1] CRAN (R 4.4.2)
#>    elevatr           * 0.99.0   2023-09-12 [1] CRAN (R 4.4.2)
#>    ellipsis            0.3.2    2021-04-29 [1] CRAN (R 4.4.2)
#>    evaluate            1.0.1    2024-10-10 [1] CRAN (R 4.4.2)
#>    farver              2.1.2    2024-05-13 [1] CRAN (R 4.4.2)
#>    fastmap             1.2.0    2024-05-15 [1] CRAN (R 4.4.2)
#>    forcats           * 1.0.0    2023-01-29 [1] CRAN (R 4.4.2)
#>    foreach             1.5.2    2022-02-02 [1] CRAN (R 4.4.2)
#>    fs                  1.6.5    2024-10-30 [1] CRAN (R 4.4.2)
#>    generics            0.1.3    2022-07-05 [1] CRAN (R 4.4.2)
#>    ggplot2           * 3.5.2    2025-04-09 [1] CRAN (R 4.4.3)
#>    glue                1.8.0    2024-09-30 [1] CRAN (R 4.4.2)
#>    grateful          * 0.2.10   2024-09-04 [1] CRAN (R 4.4.2)
#>    gtable              0.3.6    2024-10-25 [1] CRAN (R 4.4.2)
#>    hms                 1.1.3    2023-03-21 [1] CRAN (R 4.4.2)
#>    htmltools           0.5.8.1  2024-04-04 [1] CRAN (R 4.4.2)
#>    htmlwidgets         1.6.4    2023-12-06 [1] CRAN (R 4.4.2)
#>    httpuv              1.6.15   2024-03-26 [1] CRAN (R 4.4.2)
#>    iterators           1.0.14   2022-02-05 [1] CRAN (R 4.4.2)
#>    jquerylib           0.1.4    2021-04-26 [1] CRAN (R 4.4.2)
#>    jsonlite            1.8.9    2024-09-20 [1] CRAN (R 4.4.2)
#>    kableExtra        * 1.4.0    2024-01-24 [1] CRAN (R 4.4.2)
#>    KernSmooth          2.23-24  2024-05-17 [2] CRAN (R 4.4.2)
#>    knitr               1.49     2024-11-08 [1] CRAN (R 4.4.2)
#>    labeling            0.4.3    2023-08-29 [1] CRAN (R 4.4.0)
#>    later               1.3.2    2023-12-06 [1] CRAN (R 4.4.2)
#>    lattice             0.22-6   2024-03-20 [2] CRAN (R 4.4.2)
#>    leafem              0.2.4    2025-05-01 [1] CRAN (R 4.4.3)
#>    leaflet             2.2.2    2024-03-26 [1] CRAN (R 4.4.2)
#>    leaflet.providers   2.0.0    2023-10-17 [1] CRAN (R 4.4.2)
#>    leafpop             0.1.0    2021-05-22 [1] CRAN (R 4.4.2)
#>    lifecycle           1.0.4    2023-11-07 [1] CRAN (R 4.4.2)
#>    lme4                1.1-35.5 2024-07-03 [1] CRAN (R 4.4.2)
#>    lubridate         * 1.9.4    2024-12-08 [1] CRAN (R 4.4.2)
#>    magrittr            2.0.3    2022-03-30 [1] CRAN (R 4.4.2)
#>    mapview           * 2.11.2   2023-10-13 [1] CRAN (R 4.4.2)
#>    MASS                7.3-61   2024-06-13 [2] CRAN (R 4.4.2)
#>    Matrix              1.7-1    2024-10-18 [2] CRAN (R 4.4.2)
#>    MCMCvis           * 0.16.3   2023-10-17 [1] CRAN (R 4.4.3)
#>    memoise             2.0.1    2021-11-26 [1] CRAN (R 4.4.2)
#>    mgcv                1.9-1    2023-12-21 [2] CRAN (R 4.4.2)
#>    mime                0.12     2021-09-28 [1] CRAN (R 4.4.0)
#>    miniUI              0.1.1.1  2018-05-18 [1] CRAN (R 4.4.2)
#>    minqa               1.2.8    2024-08-17 [1] CRAN (R 4.4.2)
#>    munsell             0.5.1    2024-04-01 [1] CRAN (R 4.4.2)
#>    mvtnorm             1.3-2    2024-11-04 [1] CRAN (R 4.4.2)
#>    nlme                3.1-166  2024-08-14 [2] CRAN (R 4.4.2)
#>    nloptr              2.1.1    2024-06-25 [1] CRAN (R 4.4.2)
#>    patchwork         * 1.3.0    2024-09-16 [1] CRAN (R 4.4.2)
#>    pillar              1.10.1   2025-01-07 [1] CRAN (R 4.4.2)
#>    pkgbuild            1.4.8    2025-05-26 [1] CRAN (R 4.4.3)
#>    pkgconfig           2.0.3    2019-09-22 [1] CRAN (R 4.4.2)
#>    pkgload             1.4.0    2024-06-28 [1] CRAN (R 4.4.2)
#>    png                 0.1-8    2022-11-29 [1] CRAN (R 4.4.0)
#>    processx            3.8.4    2024-03-16 [1] CRAN (R 4.4.2)
#>    profvis             0.4.0    2024-09-20 [1] CRAN (R 4.4.2)
#>    progressr           0.15.0   2024-10-29 [1] CRAN (R 4.4.2)
#>    promises            1.3.0    2024-04-05 [1] CRAN (R 4.4.2)
#>    proxy               0.4-27   2022-06-09 [1] CRAN (R 4.4.2)
#>    ps                  1.8.1    2024-10-28 [1] CRAN (R 4.4.2)
#>    purrr             * 1.0.2    2023-08-10 [1] CRAN (R 4.4.2)
#>    quarto            * 1.4.4    2024-07-20 [1] CRAN (R 4.4.2)
#>    R.cache             0.16.0   2022-07-21 [1] CRAN (R 4.4.2)
#>    R.methodsS3         1.8.2    2022-06-13 [1] CRAN (R 4.4.0)
#>    R.oo                1.27.0   2024-11-01 [1] CRAN (R 4.4.1)
#>    R.utils             2.12.3   2023-11-18 [1] CRAN (R 4.4.2)
#>    R6                  2.6.1    2025-02-15 [1] CRAN (R 4.4.2)
#>    RANN                2.6.2    2024-08-25 [1] CRAN (R 4.4.3)
#>    raster              3.6-30   2024-10-02 [1] CRAN (R 4.4.2)
#>    Rcpp                1.0.13-1 2024-11-02 [1] CRAN (R 4.4.2)
#>    RcppNumerical       0.6-0    2023-09-06 [1] CRAN (R 4.4.2)
#>  D RcppParallel        5.1.9    2024-08-19 [1] CRAN (R 4.4.2)
#>    readr             * 2.1.5    2024-01-10 [1] CRAN (R 4.4.2)
#>    readxl            * 1.4.3    2023-07-06 [1] CRAN (R 4.4.2)
#>    remotes             2.5.0    2024-03-17 [1] CRAN (R 4.4.2)
#>    renv                1.0.11   2024-10-12 [1] CRAN (R 4.4.2)
#>    rlang               1.1.4    2024-06-04 [1] CRAN (R 4.4.2)
#>    rmarkdown           2.29     2024-11-04 [1] CRAN (R 4.4.2)
#>    rstudioapi          0.17.1   2024-10-22 [1] CRAN (R 4.4.2)
#>    sass                0.4.9    2024-03-15 [1] CRAN (R 4.4.2)
#>    satellite           1.0.5    2024-02-10 [1] CRAN (R 4.4.2)
#>    scales              1.3.0    2023-11-28 [1] CRAN (R 4.4.2)
#>    secr                5.1.0    2024-11-04 [1] CRAN (R 4.4.2)
#>    sessioninfo         1.2.2    2021-12-06 [1] CRAN (R 4.4.2)
#>    sf                * 1.0-21   2025-05-15 [1] CRAN (R 4.4.3)
#>    shiny               1.9.1    2024-08-01 [1] CRAN (R 4.4.2)
#>    sp                  2.1-4    2024-04-30 [1] CRAN (R 4.4.2)
#>    spAbundance         0.2.1    2024-10-05 [1] CRAN (R 4.4.3)
#>    spOccupancy       * 0.8.0    2024-12-14 [1] CRAN (R 4.4.3)
#>    stringi             1.8.4    2024-05-06 [1] CRAN (R 4.4.0)
#>    stringr           * 1.5.1    2023-11-14 [1] CRAN (R 4.4.2)
#>    styler            * 1.10.3   2024-04-07 [1] CRAN (R 4.4.2)
#>    svglite             2.1.3    2023-12-08 [1] CRAN (R 4.4.2)
#>    systemfonts         1.1.0    2024-05-15 [1] CRAN (R 4.4.2)
#>    terra             * 1.8-21   2025-02-10 [1] CRAN (R 4.4.2)
#>    tibble            * 3.2.1    2023-03-20 [1] CRAN (R 4.4.2)
#>    tidyr             * 1.3.1    2024-01-24 [1] CRAN (R 4.4.2)
#>    tidyselect          1.2.1    2024-03-11 [1] CRAN (R 4.4.2)
#>    tidyverse         * 2.0.0    2023-02-22 [1] CRAN (R 4.4.2)
#>    timechange          0.3.0    2024-01-18 [1] CRAN (R 4.4.2)
#>    tzdb                0.4.0    2023-05-12 [1] CRAN (R 4.4.2)
#>    units               0.8-5    2023-11-28 [1] CRAN (R 4.4.2)
#>    urlchecker          1.0.1    2021-11-30 [1] CRAN (R 4.4.2)
#>    usethis             3.0.0    2024-07-29 [1] CRAN (R 4.4.1)
#>    uuid                1.2-1    2024-07-29 [1] CRAN (R 4.4.1)
#>    vctrs               0.6.5    2023-12-01 [1] CRAN (R 4.4.2)
#>    viridisLite         0.4.2    2023-05-02 [1] CRAN (R 4.4.2)
#>    vroom               1.6.5    2023-12-05 [1] CRAN (R 4.4.2)
#>    withr               3.0.2    2024-10-28 [1] CRAN (R 4.4.2)
#>    xfun                0.49     2024-10-31 [1] CRAN (R 4.4.2)
#>    xml2                1.3.6    2023-12-04 [1] CRAN (R 4.4.2)
#>    xtable              1.8-4    2019-04-21 [1] CRAN (R 4.4.2)
#>    yaml                2.3.10   2024-07-26 [1] CRAN (R 4.4.1)
#> 
#>  [1] C:/Users/usuario/AppData/Local/R/win-library/4.4
#>  [2] C:/Program Files/R/R-4.4.2/library
#> 
#>  D ── DLL MD5 mismatch, broken installation.
#> 
#> ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

References

Allaire, JJ, and Christophe Dervieux. 2024. quarto: R Interface to Quarto Markdown Publishing System. https://CRAN.R-project.org/package=quarto.
Allaire, JJ, Yihui Xie, Christophe Dervieux, Jonathan McPherson, Javier Luraschi, Kevin Ushey, Aron Atkins, et al. 2024. rmarkdown: Dynamic Documents for R. https://github.com/rstudio/rmarkdown.
Appelhans, Tim, Florian Detsch, Christoph Reudenbach, and Stefan Woellauer. 2023. mapview: Interactive Viewing of Spatial Data in R. https://CRAN.R-project.org/package=mapview.
Doser, Jeffrey W., Andrew O. Finley, and Sudipto Banerjee. 2023. “Joint Species Distribution Models with Imperfect Detection for High-Dimensional Spatial Data.” Ecology, e4137. https://doi.org/10.1002/ecy.4137.
Doser, Jeffrey W., Andrew O. Finley, Marc Kéry, and Elise F. Zipkin. 2022. “spOccupancy: An R Package for Single-Species, Multi-Species, and Integrated Spatial Occupancy Models.” Methods in Ecology and Evolution 13: 1670–78. https://doi.org/10.1111/2041-210X.13897.
Doser, Jeffrey W., Andrew O. Finley, Sarah P. Saunders, Marc Kéry, Aaron S. Weed, and Elise F. Zipkin. 2024. “Modeling Complex Species-Environment Relationships Through Spatially-Varying Coefficient Occupancy Models.” Journal of Agricultural, Biological, and Environmental Statistics. https://doi.org/10.1007/s13253-023-00595-6.
Gabry, Jonah, and Tristan Mahr. 2024. “bayesplot: Plotting for Bayesian Models.” https://mc-stan.org/bayesplot/.
Gabry, Jonah, Daniel Simpson, Aki Vehtari, Michael Betancourt, and Andrew Gelman. 2019. “Visualization in Bayesian Workflow.” J. R. Stat. Soc. A 182: 389–402. https://doi.org/10.1111/rssa.12378.
Gilbert, Marius, Gaëlle Nicolas, Giusepina Cinardi, Thomas P Van Boeckel, Sophie O Vanwambeke, G R William Wint, and Timothy P Robinson. 2018. “Global distribution data for cattle, buffaloes, horses, sheep, goats, pigs, chickens and ducks in 2010.” Scientific Data 5 (1): 180227. https://doi.org/10.1038/sdata.2018.227.
Hijmans, Robert J. 2025. terra: Spatial Data Analysis. https://CRAN.R-project.org/package=terra.
Hollister, Jeffrey, Tarak Shah, Jakub Nowosad, Alec L. Robitaille, Marcus W. Beck, and Mike Johnson. 2023. elevatr: Access Elevation Data from Various APIs. https://doi.org/10.5281/zenodo.8335450.
Meijer, Johan R, Mark A J Huijbregts, Kees C G J Schotten, and Aafke M Schipper. 2018. “Global patterns of current and future road infrastructure.” Environmental Research Letters 13 (6): 64006. https://doi.org/10.1088/1748-9326/aabd42.
Müller, Kirill, and Lorenz Walthert. 2024. styler: Non-Invasive Pretty Printing of R Code. https://CRAN.R-project.org/package=styler.
Niedballa, Jürgen, Rahel Sollmann, Alexandre Courtiol, and Andreas Wilting. 2016. “camtrapR: An R Package for Efficient Camera Trap Data Management.” Methods in Ecology and Evolution 7 (12): 1457–62. https://doi.org/10.1111/2041-210X.12600.
Pebesma, Edzer. 2018. “Simple Features for R: Standardized Support for Spatial Vector Data.” The R Journal 10 (1): 439–46. https://doi.org/10.32614/RJ-2018-009.
Pebesma, Edzer, and Roger Bivand. 2023. Spatial Data Science: With applications in R. Chapman and Hall/CRC. https://doi.org/10.1201/9780429459016.
Pedersen, Thomas Lin. 2024. patchwork: The Composer of Plots. https://CRAN.R-project.org/package=patchwork.
R Core Team. 2024. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing. https://www.R-project.org/.
Wickham, Hadley, Mara Averick, Jennifer Bryan, Winston Chang, Lucy D’Agostino McGowan, Romain François, Garrett Grolemund, et al. 2019. “Welcome to the tidyverse.” Journal of Open Source Software 4 (43): 1686. https://doi.org/10.21105/joss.01686.
Wickham, Hadley, Jim Hester, Winston Chang, and Jennifer Bryan. 2022. devtools: Tools to Make Developing R Packages Easier. https://CRAN.R-project.org/package=devtools.
Xie, Yihui, J. J. Allaire, and Garrett Grolemund. 2018. R Markdown: The Definitive Guide. Boca Raton, Florida: Chapman and Hall/CRC. https://bookdown.org/yihui/rmarkdown.
Xie, Yihui, Joe Cheng, and Xianying Tan. 2024. DT: A Wrapper of the JavaScript Library DataTables. https://CRAN.R-project.org/package=DT.
Xie, Yihui, Christophe Dervieux, and Emily Riederer. 2020. R Markdown Cookbook. Boca Raton, Florida: Chapman and Hall/CRC. https://bookdown.org/yihui/rmarkdown-cookbook.
Youngflesh, Casey. 2018. “MCMCvis: Tools to Visualize, Manipulate, and Summarize MCMC Output.” Journal of Open Source Software 3 (24): 640. https://doi.org/10.21105/joss.00640.
Zhu, Hao. 2024. kableExtra: Construct Complex Table with kable and Pipe Syntax. https://CRAN.R-project.org/package=kableExtra.

Citation

BibTeX citation:
@online{j._lizcano2025,
  author = {J. Lizcano, Diego and F. González-Maya, José},
  title = {Spatial, Single-Species Occupancy Model},
  date = {2025-06-01},
  url = {https://dlizcano.github.io/cametrap/posts/2025-06-01-spatial-single-species-occupancy/},
  langid = {en}
}
For attribution, please cite this work as:
J. Lizcano, Diego, and José F. González-Maya. 2025. “Spatial, Single-Species Occupancy Model.” June 1, 2025. https://dlizcano.github.io/cametrap/posts/2025-06-01-spatial-single-species-occupancy/.