Post your second assignment by replying to this topic
This week we ran the first and simplest occupancy model, adding no covariates and assuming that detectability and occupancy are constant (i.e., do not vary). We then extracted estimates, standard errors, and confidence intervals for both occupancy and detectability from the model.
This was just a small subset of my data (one season and one location with ~20 cameras), looking only at spotted hyena. Not surprisingly for this particular conservancy and species, occupancy is very high at around 0.94, much higher than detectability (p) at around 0.24.
This week we started our modelling by running a single-season, single-species model, which gave us an estimate of occupancy probability and detectability for the clouded leopard data, but which does not yet include site or survey covariates. We calculated probabilities, standard errors and confidence intervals for this simplest model.
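Under the hood, this constant (psi(.) p(.)) model is fitted by maximum likelihood. As a rough sketch of what `unmarked::occu()` is doing, here is a hand-rolled version in base R on simulated data (the site and survey numbers and the true parameter values are invented for illustration, not the course's clouded leopard dataset):

```r
# Simulate a single-season occupancy dataset (all values invented)
set.seed(42)
n_sites <- 200; n_surveys <- 5
psi_true <- 0.6; p_true <- 0.3
z <- rbinom(n_sites, 1, psi_true)                    # true occupancy state per site
y <- matrix(rbinom(n_sites * n_surveys, 1, p_true),  # survey-level detections...
            nrow = n_sites) * z                      # ...zeroed at unoccupied sites

# Negative log-likelihood of the constant psi(.) p(.) model
nll <- function(par) {
  psi <- plogis(par[1]); p <- plogis(par[2])         # logit scale, as unmarked uses
  d <- rowSums(y); K <- n_surveys
  lik <- ifelse(d > 0,
                psi * p^d * (1 - p)^(K - d),         # detected at least once
                psi * (1 - p)^K + (1 - psi))         # never detected
  -sum(log(lik))
}
fit <- optim(c(0, 0), nll)
round(plogis(fit$par), 2)                            # estimates of psi and p
```

The key line is the "never detected" term: a site with an all-zero history contributes psi(1 - p)^K + (1 - psi), which is how the model separates "present but missed" from "truly absent".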
From our detection history data we could also calculate the naive occupancy: the proportion of sites where the species was detected at least once. Because it ignores imperfect detection, the naive estimate will underestimate true occupancy whenever detectability is below 1.
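For example, with a made-up detection-history matrix (rows = sites, columns = surveys; these values are just for illustration), the naive occupancy is one line of base R:

```r
# Toy detection-history matrix: 5 sites x 3 surveys (invented data)
dh <- matrix(c(1, 0, 0,
               0, 0, 0,
               0, 1, 1,
               0, 0, 0,
               1, 1, 0), nrow = 5, byrow = TRUE)

# Proportion of sites with at least one detection
naive_occ <- mean(rowSums(dh) > 0)
naive_occ  # 3 of 5 sites -> 0.6
```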
We also learned key concepts for the modelling approach we are using, such as probability versus likelihood, models versus reality, and hypothesis testing.
Probability and likelihood seem similar, but there is a fine distinction. In the past I have struggled to understand the difference, so for my assignment, I’ll briefly explain the difference between the two concepts. In a sense, they are two different ways of viewing a test of a hypothesis. Probability assumes that the hypothesis is true and estimates how likely your data would be to occur under that hypothesis. Simple frequentist tests like t-tests are a good example: these assume a null hypothesis of no effect and estimate whether the observed data could reasonably occur under those conditions. Generally, researchers use a cutoff of 0.05: the null hypothesis is rejected if the data have less than a 5% chance of occurring assuming the null is true. Likelihood turns that on its head: it holds the observed data fixed and asks how well different hypotheses (parameter values) explain them. For instance, we might hypothesize that a coin is fair, but if it is tossed 100 times and lands on heads only 20% of the time, the likelihood of the fair-coin hypothesis given these data is very low compared with other values of the heads probability.
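The coin example can be checked in a few lines of base R: first the probability view (data varies, hypothesis fixed), then the likelihood view (data fixed, hypothesis varies):

```r
# Probability: assume the coin is fair (p = 0.5) and ask how probable
# exactly 20 heads in 100 tosses would be under that hypothesis.
dbinom(20, size = 100, prob = 0.5)   # ~4.2e-10: essentially never happens

# Likelihood: hold the data fixed (20 heads in 100 tosses) and see which
# value of p explains them best.
p_grid <- seq(0.01, 0.99, by = 0.01)
lik <- dbinom(20, size = 100, prob = p_grid)
p_grid[which.max(lik)]               # likelihood is maximised at p = 0.2
```

The same curve `lik`, read across different values of `p` for fixed data, is exactly the likelihood function that occupancy models maximise when estimating psi and p.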
Hope this makes sense! I’m still wrapping my head around this concept, so I welcome any feedback or corrections!
Hi, I will try to explain the concept of a “model” in my own words, with cats.
Let me know your feedback!
The most useful things I have learned this week include: setting up a separate R project; the importance of creating an unmarkedFrame for working with the unmarked package; the difference between occupancy models and other models in R (for example, estimating two parameters simultaneously: occupancy and detectability); the arguments of the occupancy function; calculating confidence intervals; and brushing up on statistical concepts like probability and likelihood, null hypotheses and models.
Hello, so far I understand much better the different concepts used: occupancy, detection, hypotheses, covariates and models. I can already see clearly the role of occupancy in the management of biodiversity. On the other hand, having repeated this module on analysis, I think I better understand the approach for determining occupancy, and I am already quite familiar with R, of course with the advice of Lucy.
So far, I have learnt a simple occupancy analysis of the clouded leopard data. It was nice to see the confidence interval for clouded leopard detection, and I would like to learn more about it.
This module taught me how to run an occupancy model and calculate confidence intervals. I also learnt the difference between probability and likelihood, as well as how they relate to my statistical model and results. This was something I had difficulty with before, so it was good to go through it again in theory and practice.
The clouded leopard data allowed us to estimate the probability of occupancy and detectability over a single season.
The week started with data preparation: importing and viewing data in R, and preparing R code for a single-season occupancy model. We converted our detection-histories object into the correct format for the unmarked package to work with.
We also calculated probabilities, standard errors and confidence intervals, and looked at probability versus likelihood, models versus reality, and hypothesis testing.
I have learned how to run a default single-season, single-species occupancy model in R on a demo dataset based on clouded leopards, using the unmarked package. After getting an estimate of the proportion of grid cells occupied by clouded leopards, we calculated a 95% confidence interval around it, then did the same for detectability. We learned about the Information-Theoretic Approach, which is used to evaluate the strength of evidence in support of each of our hypotheses. Finally, we learned what a model is and how to identify the one that best fits our data.
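As a small illustration of that Information-Theoretic idea, AIC and Akaike weights can be computed by hand in base R. The log-likelihoods and parameter counts below are invented for illustration, not output from any real model fit:

```r
# AIC = 2k - 2*logLik; smaller is better (hypothetical values)
ll  <- c(m_null = -120.4, m_cov = -115.1)  # invented log-likelihoods
k   <- c(m_null = 2,      m_cov = 3)       # number of parameters per model
aic <- 2 * k - 2 * ll

delta   <- aic - min(aic)                       # difference from best model
weights <- exp(-delta / 2) / sum(exp(-delta / 2))  # Akaike weights
round(cbind(AIC = aic, dAIC = delta, w = weights), 2)
```

The weights sum to 1 and can be read as the relative strength of evidence for each model in the candidate set.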
I have successfully run the basic single-species, single-season occupancy model using my data on Egyptian Vulture in the western Himalaya. I was able to import and view the data in R, run the default occupancy model and view the results. The occupancy of Egyptian Vulture in my study area is 0.67 (CI = 0.54–0.78) and the detectability is 0.55 (SE = 0.04), with a confidence interval of 0.47–0.63.
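One detail worth keeping in mind when reading such output: unmarked estimates parameters on the logit scale and provides `backTransform()` to convert them to probabilities. A minimal base-R sketch of that conversion, using made-up numbers (not the vulture estimates above):

```r
# Hypothetical logit-scale estimate and standard error (invented values)
est_logit <- 0.7
se_logit  <- 0.3

# Wald 95% CI on the logit scale, then back-transform with the inverse logit
ci_logit <- est_logit + c(-1.96, 1.96) * se_logit
plogis(est_logit)   # point estimate on the probability scale
plogis(ci_logit)    # 95% CI on the probability scale
```

Building the interval on the logit scale and then back-transforming keeps both CI limits inside (0, 1), which is why the reported intervals are not simply estimate ± 1.96 × SE on the probability scale.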
Attached below is a screenshot of my analysis so far.