Fishery Bulletin 115(1) 
Table 1
Elicited prior specifications for log-normal and log-skew-t models used to examine a Bayesian analysis of the von Bertalanffy growth function. TN_(0,∞)(0,100) represents the N(0,100) density truncated at the (0,∞) interval, and TE_(2,∞)(0.5) denotes the exponential density truncated at the (2,∞) interval. The parameters are the asymptotic length (L∞), growth rate coefficient (K), theoretical age in years when the length is zero (-t0), heteroscedasticity (ρ), inverted dispersion (σ⁻²), skewness (λ), and degrees of freedom (ν).

Parameter   Log-normal (type I)   Log-normal (type II)   Log-skew-t
L∞ (cm)     π(L∞) ∝ 1             TN_(0,∞)(0,100)        TN_(0,∞)(0,100)
K (y⁻¹)     Gamma(15,100)         Gamma(15,100)          Gamma(15,100)
-t0 (y)     Gamma(10,4)           Gamma(10,4)            Gamma(10,4)
ρ           -                     -                      π(ρ) ∝ 1
σ⁻²         Gamma(0.1,0.1)        Gamma(0.1,0.1)         Gamma(0.1,0.1)
λ           -                     -                      N(15,100)
ν           -                     -                      TE_(2,∞)(0.5)
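To make the truncated-density notation in Table 1 concrete, the sketch below draws from three of these priors with NumPy and SciPy. Two conventions are assumptions on our part: the second argument of N(0,100) is read as a variance, and Gamma(a,b) is read as a (shape, rate) pair.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
B = 10_000

# TN_(0,inf)(0, 100): N(0, 100) truncated to (0, inf).
# Reading 100 as a variance (an assumption), the standard deviation is 10;
# truncnorm takes standardized bounds a = (0 - loc)/scale, b = inf.
L_inf = stats.truncnorm.rvs(a=0.0, b=np.inf, loc=0.0, scale=10.0,
                            size=B, random_state=rng)

# Gamma(15, 100) under a (shape, rate) parameterization -> scale = 1/rate,
# giving a prior mean of 15/100 = 0.15 y^-1 for K.
K = rng.gamma(shape=15, scale=1 / 100, size=B)

# TE_(2,inf)(0.5): exponential density with rate 0.5 truncated to (2, inf);
# by the memoryless property this equals 2 + Exponential(rate 0.5).
nu = 2 + rng.exponential(scale=1 / 0.5, size=B)
```

Draws from the two truncated densities respect their supports by construction (L∞ > 0 and ν > 2), which is a quick way to check the notation has been read correctly.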
ferred to as type-I model) was similar to that developed by Siegfried and Sansó (2006), and the second one included a modification of the prior distribution of L∞ so that it was the same as that proposed in the log-skew-t model. All these prior specifications are summarized in Table 1. The following models were considered:
• Log-normal (type I) with constant variance function;
• Log-normal (type II) with constant variance function;
• Log-skew-t with constant variance function;
• Log-skew-t with exponential variance function;
• Log-skew-t with power variance function.

Selecting the "best" model is an important aspect of statistical analysis. In the rest of this section, we describe how we implemented the deviance information criterion (DIC) and the widely applicable information criterion (WAIC) for model selection.

Deviance information criterion  The DIC proposed by Spiegelhalter et al. (2002) is based on the posterior mean of the deviance, and it can be approximated by the MCMC algorithm as follows:

\mathrm{DIC} = 2\left[ \log f(y' \mid x, \bar{\theta}) - \frac{2}{B} \sum_{s=1}^{B} \log f(y' \mid x, \theta_s) \right],  (12)

where \bar{\theta} = \frac{1}{B} \sum_{s=1}^{B} \theta_s is the mean of a sample \theta_1, \ldots, \theta_B obtained from the posterior distribution \pi(\theta \mid S).

The DIC is related to the effective number of parameters:

p_{\mathrm{DIC}} = 2\left[ \log f(y' \mid x, \bar{\theta}) - \frac{1}{B} \sum_{s=1}^{B} \log f(y' \mid x, \theta_s) \right].  (13)

The widely applicable information criterion  The WAIC (e.g., Gelman et al., 2014) is based on the computed log-pointwise-posterior-predictive density, complemented by a correction for the effective number of parameters to adjust for overfitting:

\mathrm{WAIC} = \sum_{i=1}^{n} \log\left[ \frac{1}{B} \sum_{s=1}^{B} f(y_i' \mid x_i, \theta_s) \right] - p_{\mathrm{WAIC}}.  (14)

Also, the WAIC is related to the effective number of parameters:

p_{\mathrm{WAIC}} = 2 \sum_{i=1}^{n} \left[ \log\left( \frac{1}{B} \sum_{s=1}^{B} f(y_i' \mid x_i, \theta_s) \right) - \frac{1}{B} \sum_{s=1}^{B} \log f(y_i' \mid x_i, \theta_s) \right].  (15)

Compared with DIC, WAIC has the property of averaging over the posterior density by using each iterated \theta_s, instead of replacing them with the mean \bar{\theta}. In addition, p_WAIC is more numerically stable than p_DIC because it averages separately over each observation y_i' (Gelman et al., 2014).

Influential analysis
The statistical stability of the proposed models exposed to perturbations of the data was analyzed by using influential analysis. We considered the Kullback-Leibler (KL) divergence measure (Kullback and Leibler, 1951) to quantify the effect on the inferences produced by excluding one observation, or a group of observations, from the full data set. The KL-divergence has been considered previously in Bayesian influential analysis for elliptical and skew-elliptical models (Arellano-Valle et al., 2000; Vidal et al., 2006).
We let P = \pi(\theta \mid S) and P_{-i} = \pi(\theta \mid S_{-i}) be the posterior distributions of \theta obtained from the full data S = (x, y') and from the data without the ith observation S_{-i} = (x_{-i}, y'_{-i}), respectively. The KL-divergence between P and P_{-i} was given by

K(P, P_{-i}) = \int \pi(\theta \mid S) \log\left( \frac{\pi(\theta \mid S)}{\pi(\theta \mid S_{-i})} \right) d\theta.  (16)
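The divergence in Equation 16 can be estimated from the full-data MCMC output alone: because \pi(\theta \mid S_{-i}) / \pi(\theta \mid S) = CPO_i / f(y_i' \mid x_i, \theta), where CPO_i is the conditional predictive ordinate of observation i, Equation 16 reduces to K_i = \log E_P[1/f(y_i' \mid x_i, \theta)] + E_P[\log f(y_i' \mid x_i, \theta)]. A minimal sketch of this estimator follows; the function name and array layout are our own.

```python
import numpy as np

def kl_case_deletion(log_lik):
    """Estimate K(P, P_{-i}) of Eq. 16 for every observation.

    log_lik : (B, n) array of log f(y_i' | x_i, theta_s) evaluated at each
              posterior draw theta_s (s = 1..B) and each observation i.

    Uses the case-deletion identity
        K_i = log E_P[1 / f(y_i' | x_i, theta)] + E_P[log f(y_i' | x_i, theta)],
    with both expectations taken over the full-data posterior P.
    """
    B = log_lik.shape[0]
    # log E_P[1/f] = log-mean-exp of (-log_lik), computed stably per column
    m = (-log_lik).max(axis=0)
    log_inv_cpo = m + np.log(np.exp(-log_lik - m).sum(axis=0)) - np.log(B)
    # Add E_P[log f], the plain Monte Carlo average of the log-likelihoods
    return log_inv_cpo + log_lik.mean(axis=0)
```

By Jensen's inequality each K_i is nonnegative, and it is zero only when the likelihood of observation i is constant across posterior draws, which provides a simple sanity check on the estimator.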
To identify influential observations, Peng and Dey (1995) showed that an observation i can be flagged as influential if p_i ≫ 1/2, where

(17)
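For reference, the model-selection criteria in Equations 12-15 can be computed in a few lines from a B × n matrix of pointwise log-likelihood evaluations saved during MCMC. The helper below is a sketch under that layout (the function name is ours); it uses Equation 12 in its equivalent form DIC = -2 log f(y' | x, \bar{\theta}) + 2 p_DIC.

```python
import numpy as np

def dic_waic(log_lik, log_lik_at_mean):
    """Compute DIC (Eqs. 12-13) and WAIC (Eqs. 14-15) from MCMC output.

    log_lik         : (B, n) array, log f(y_i' | x_i, theta_s) for each
                      posterior draw s = 1..B and observation i = 1..n.
    log_lik_at_mean : (n,) array, log f(y_i' | x_i, theta_bar) evaluated
                      at the posterior mean theta_bar.
    """
    B, n = log_lik.shape
    # Eq. 13: p_DIC = 2 [log f(y'|x, theta_bar) - (1/B) sum_s log f(y'|x, theta_s)]
    p_dic = 2 * (log_lik_at_mean.sum() - log_lik.sum(axis=1).mean())
    # Eq. 12, equivalent form: DIC = -2 log f(y'|x, theta_bar) + 2 p_DIC
    dic = -2 * log_lik_at_mean.sum() + 2 * p_dic
    # lppd_i = log[(1/B) sum_s f(y_i'|x_i, theta_s)], via stable log-mean-exp
    m = log_lik.max(axis=0)
    lppd = m + np.log(np.exp(log_lik - m).mean(axis=0))
    # Eq. 15: p_WAIC = 2 sum_i [lppd_i - (1/B) sum_s log f(y_i'|x_i, theta_s)]
    p_waic = 2 * (lppd - log_lik.mean(axis=0)).sum()
    # Eq. 14: WAIC = sum_i lppd_i - p_WAIC
    waic = lppd.sum() - p_waic
    return dic, waic
```

Note the per-observation averaging in the WAIC branch, which is exactly the property that makes p_WAIC more stable than p_DIC in the comparison above.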
