{"id":627,"date":"2015-04-17T08:04:18","date_gmt":"2015-04-17T08:04:18","guid":{"rendered":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/?p=627"},"modified":"2025-02-26T13:21:38","modified_gmt":"2025-02-26T13:21:38","slug":"mcmcglmm-in-stata-part-i","status":"publish","type":"post","link":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/2015\/04\/17\/mcmcglmm-in-stata-part-i\/","title":{"rendered":"MCMCglmm in Stata (Part I)"},"content":{"rendered":"<p>I always advise\u00a0students not to tie themselves to one piece of statistical software, even when it is as good as Stata. It is inevitable that whichever statistical package\u00a0they choose, there will be tasks that are easier in another program or analyses that\u00a0their favourite does not offer.<\/p>\n<p>The problem with this advice is that it takes time to develop advanced skills in using statistical software and many people do not have the time or inclination to learn a second program.<\/p>\n<p>The solution is to develop methods that enable us\u00a0to work primarily in Stata, but to export jobs to other packages when necessary and then to read the results into Stata for further investigation. This is the method that we use for running Bayesian analyses\u00a0by combining Stata and\u00a0WinBUGS and I have discussed\u00a0in past postings,\u00a0how we can call R from Stata to take advantage of the many analyses available in R but not in Stata.<\/p>\n<p>In an ideal world, Stata would make it\u00a0automatic for you to export a job and to import the results but unfortunately it does not, perhaps for obvious\u00a0economic reasons. None the less it is not difficult to implement such a scheme for ourselves.<\/p>\n<p>R is organized in packages, which are collections of related programs and data. 
There is a particularly good package called <strong>MCMCglmm<\/strong> that fits Bayesian generalized linear mixed models and since there is nothing similar in Stata, a sensible option would be to send\u00a0the Poisson regression problem, which\u00a0I have been analysing in recent postings, to R.<\/p>\n<p>There are two ways of implementing a Stata-R link; if you know the R language then you can code the R syntax directly into a commented-out section within your do file and export it to R in just the same way that we export a WinBUGS model file; on the other hand, if you want the R package to be available to users who only know Stata, then you have to write a Stata ado file that writes the R code for them.<\/p>\n<p>It is instructive to start with a program that assumes some knowledge of R;\u00a0next week\u00a0we will see how we can hide the complexity. I will assume that you have R installed on your computer and that you have installed the package MCMCglmm by downloading it over the internet. This installation\u00a0will also be hidden from the user in next week&#8217;s version.<\/p>\n<p>I&#8217;ve discussed using a Stata-R link before in this blog and I am going to follow the same approach. You might find it helpful to read my original postings on this topic as they cover some of the basics in more detail. 
You can find them at:<br \/>\n<a href=\"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/2014\/09\/19\/using-r-with-stata-part-i\/\">https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/2014\/09\/19\/using-r-with-stata-part-i\/<\/a><br \/>\n<a href=\"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/2014\/09\/26\/using-r-with-stata-part-ii\/\">https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/2014\/09\/26\/using-r-with-stata-part-ii\/<\/a><br \/>\n<a href=\"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/2014\/10\/03\/using-r-with-stata-part-iii\/\">https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/2014\/10\/03\/using-r-with-stata-part-iii\/<\/a><br \/>\n<a href=\"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/2014\/10\/10\/using-r-with-stata-part-iv\/\">https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/2014\/10\/10\/using-r-with-stata-part-iv\/<\/a><\/p>\n<p>The data for our Poisson regression model describe the numbers of seizures experienced by 59 epilepsy patients in each of four consecutive two-week periods. There are five explanatory variables and two random effects: one with a different level for each subject and one with a different level for each subject in each week. The problem was first introduced to this blog back in February: <a href=\"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/2015\/02\/06\/poisson-regression-with-two-random-effects\/\">https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/2015\/02\/06\/poisson-regression-with-two-random-effects\/<\/a><\/p>\n<p><strong>MCMCglmm<\/strong> wants the data stacked into long columns with the four visits stored under one another. We could re-organize the data in R (R has a very neat package called <strong>dplyr<\/strong> that is designed specifically for data organization) but, as we are primarily using Stata, we will prepare the data in Stata so that it is already in the format that MCMCglmm needs.<\/p>\n<p>Here is the Stata code that centres the data, stacks it and saves it. 
In fact, MCMCglmm does not require us to centre the variables but, if I did not, then I would have to change the priors that I used in previous postings, so I have decided to centre for consistency.<\/p>\n<p><span style=\"color: #0000ff\">use epilepsy.dta , clear<\/span><br \/>\n<span style=\"color: #0000ff\"> gen LB = log(Base\/4)<\/span><br \/>\n<span style=\"color: #0000ff\"> qui su LB<\/span><br \/>\n<span style=\"color: #0000ff\"> gen cLB = LB - r(mean)<\/span><br \/>\n<span style=\"color: #0000ff\"> gen lnAge = log(Age)<\/span><br \/>\n<span style=\"color: #0000ff\"> qui su lnAge<\/span><br \/>\n<span style=\"color: #0000ff\"> gen clnAge = lnAge - r(mean)<\/span><br \/>\n<span style=\"color: #0000ff\"> gen BXT = LB*Trt<\/span><br \/>\n<span style=\"color: #0000ff\"> qui su BXT<\/span><br \/>\n<span style=\"color: #0000ff\"> qui gen cBXT = BXT - r(mean)<\/span><br \/>\n<span style=\"color: #0000ff\"> qui su Trt<\/span><br \/>\n<span style=\"color: #0000ff\"> gen cTrt = Trt - r(mean)<\/span><\/p>\n<p><span style=\"color: #0000ff\">gen id = _n<\/span><br \/>\n<span style=\"color: #0000ff\"> gen zero = 0<\/span><br \/>\n<span style=\"color: #0000ff\"> gen one = 1<\/span><br \/>\n<span style=\"color: #0000ff\"> stack id y_1 cLB clnAge cTrt cBXT zero id y_2 cLB clnAge cTrt cBXT zero \/\/\/<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0\u00a0 id y_3 cLB clnAge cTrt cBXT zero id y_4 cLB clnAge cTrt cBXT one , \/\/\/<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0\u00a0 into(subject y base age trt bxt v4)<\/span><br \/>\n<span style=\"color: #0000ff\"> rename _stack visit<\/span><br \/>\n<span style=\"color: #0000ff\"> sort visit subject<\/span><br \/>\n<span style=\"color: #0000ff\"> save epilepsyforR.dta , replace version(12)<\/span><\/p>\n<p>Notice that the data are saved in Stata 12 format because R&#8217;s <strong>foreign<\/strong> package cannot read Stata 13 or 14 files. Now all we need to do is to read these data into R and run MCMCglmm. 
Here is the R code that we need. To help avoid confusion, I will show R commands and output in green and Stata code and its output in blue.<\/p>\n<p><span style=\"color: #008000\">library(foreign)<\/span><br \/>\n<span style=\"color: #008000\"> library(MCMCglmm)<\/span><br \/>\n<span style=\"color: #008000\"> D &lt;- read.dta(file=\"K\/epilepsy\/epilepsyforR.dta\")<\/span><br \/>\n<span style=\"color: #008000\"> prior &lt;- list( B=list(mu=c(2.6,1,0,0,0,0),V=diag(1,6,6)),<br \/>\nG=list( G1=list(V=2\/7,n=16) ),R=list(V=2\/7,n=16) )<\/span><br \/>\n<span style=\"color: #008000\"> fit &lt;- MCMCglmm(y ~ base+trt+bxt+age+v4,random= ~subject,<\/span><br \/>\n<span style=\"color: #008000\"> \u00a0\u00a0\u00a0\u00a0 prior=prior,family=\"poisson\",nitt=55000,burnin=5000,thin=5,data=D)<\/span><br \/>\n<span style=\"color: #008000\"> summary(fit)<\/span><\/p>\n<p>The <strong>library()<\/strong> command (require() is an alternative that does much the same thing) loads two packages: <strong>foreign<\/strong>, which reads data files created by other programs, and <strong>MCMCglmm<\/strong>, which provides our analysis.<\/p>\n<p>This R code reads the Stata data file and stores the data under the name <strong>D<\/strong>. Then it defines the prior and runs the analysis, storing the results under the name <strong>fit<\/strong>. The summary command prints a table of the estimates. 
In this case we get<\/p>\n<pre><strong><span style=\"color: #008000\">DIC: 1157.116<\/span><\/strong><\/pre>\n<pre><strong><span style=\"color: #008000\">G-structure: ~subject<\/span><\/strong><\/pre>\n<pre><strong><span style=\"color: #008000\">post.mean l-95% CI u-95% CI eff.samp\r\n subject 0.2607 0.1566 0.3778 4693<\/span><\/strong><\/pre>\n<pre><strong><span style=\"color: #008000\">R-structure: ~units<\/span><\/strong><\/pre>\n<pre><strong><span style=\"color: #008000\">post.mean l-95% CI u-95% CI eff.samp\r\n units 0.1751 0.1189 0.2356 3639<\/span><\/strong><\/pre>\n<pre><strong><span style=\"color: #008000\">Location effects: y ~ base + trt + bxt + age + v4<\/span><\/strong><\/pre>\n<pre><strong><span style=\"color: #008000\">           post.mean l-95% CI u-95% CI eff.samp pMCMC\r\n (Intercept) 1.59213  1.42740  1.75384 5924 &lt;1e-04 ***\r\n base        0.91739  0.64866  1.17659 8145 &lt;1e-04 ***\r\n trt        -0.76965 -1.52924 -0.01956 5279 0.046 *\r\n bxt         0.25788 -0.12209  0.64963 6500 0.196\r\n age         0.39816 -0.27944  1.07435 4970 0.256\r\n v4         -0.09743 -0.27435  0.08035 5071 0.293<\/span><\/strong><\/pre>\n<p>The estimates are similar to those given by WinBUGS.<\/p>\n<p>Our next\u00a0task is to understand the call to MCMCglmm, which is straightforward, and to understand the way that the priors are defined, which is not so straightforward.<\/p>\n<p>The call to MCMCglmm almost explains itself. The fixed part of the model comes first and this is followed by the random effects; in this case I have a single random effect for each\u00a0subject and the structure, R, defined in the prior provides the second random effect with a different value for each observation. The Poisson family\u00a0is given\u00a0a log-link by default. 
The analysis consists of 55,000 iterations but the first 5,000 of these are discarded and the remainder are thinned by only saving every 5th simulation.<\/p>\n<p>The first thing to know when trying to understand the priors is that in MCMCglmm all families are given a residual variance structure, R, which in this case acts in addition to the randomness of the Poisson distribution. This residual variance is analogous to defining a random effect that takes a different value for each observation.<\/p>\n<p>The other random effects are collectively called G and if there are many of them they are distinguished as G1, G2 \u2026 In this case we only define a random effect for each subject.<\/p>\n<p>The priors on the variances of the random effects are inverse-Wishart distributions; one of the limitations of MCMCglmm is that it insists on this distribution because it is conjugate and simplifies the MCMC algorithm. When the random effects are independent, as they are here, they have a single variance and the multi-dimensional inverse-Wishart distribution reduces to an inverse-gamma. We need to understand this parameterization in order to set the priors.<\/p>\n<p>MCMCglmm requires the user to parameterize the inverse-gamma prior by providing a guess at the true variance, <strong>V<\/strong>, and a parameter, <strong>nu<\/strong>, that expresses our certainty in the guess; a large value of nu expresses strong belief in the accuracy of our guess at V.<\/p>\n<p>The standard deviation of log(variance) is approximately sqrt(2\/(nu-4)) so if we guess that V=3 and set nu=5 then we have 95% belief that the log(variance) will lie in the range log(3)\u00b12sqrt(2), or equivalently that the variance will lie in the range exp(log(3)\u00b12sqrt(2)), or between 0.2 and 50. 
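This prior-width arithmetic is easy to check for yourself. Here is a minimal sketch, in Python for convenience since the calculation is language-neutral (the function name variance_range is mine, not part of MCMCglmm):

```python
import math

def variance_range(V, nu):
    # Approximate 95% range for the variance implied by MCMCglmm's
    # (V, nu) parameterisation, using sd(log variance) ~ sqrt(2/(nu - 4)).
    half_width = 2 * math.sqrt(2 / (nu - 4))
    return math.exp(math.log(V) - half_width), math.exp(math.log(V) + half_width)

print(variance_range(3, 5))      # roughly (0.18, 51): the "0.2 to 50" above
print(variance_range(3, 50))     # roughly (2.0, 4.6)
print(variance_range(2 / 7, 16)) # roughly (0.13, 0.65)
```

Note that the approximation only makes sense for nu greater than 4, which is one more reason to prefer reasonably informative values of nu.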
Alternatively we could set nu=50 to reflect belief that the variance is most likely to be between 2 and 4.5.<\/p>\n<p>Previously we worked with the Gamma distribution for the precision and set \u03b1=8 and \u03b2=1\/2 so the expected precision was \u03b1\u03b2=4, corresponding to a variance of \u00bc and a standard deviation of \u00bd. The equivalent inverse-gamma just inverts the \u03b2 parameter, giving \u03b1=8 and \u03b2=2. The IG&#8217;s expected variance is \u03b2\/(\u03b1-1)=2\/7, corresponding to a standard deviation of about 0.53 (remember that the average(variance) is only approximately 1\/average(precision)).<\/p>\n<p>In the example above I set V=2\/7 and nu=16 to reflect belief that the variance lies between 0.13 and 0.65; this is quite informative but similar to my previous analyses of these data. This prior says that I believe that the standard deviation for the random effect will be in the range (0.36,0.80).<\/p>\n<p>Next, we ought to make a tour of the presentation of these results, which I will not try to defend.<\/p>\n<p>The two random variances (0.2607,0.1751) correspond to standard deviations (0.51, 0.42), very similar to those that we obtained from WinBUGS.<\/p>\n<p>The idea of quoting 5 decimal places in the table of coefficients is fanciful; probably 2 decimal places are just about reliable given the number of iterations.<\/p>\n<p>The effective sample size is another way of presenting the standard error of the mean. Here the mixing is such that a sample of 10,000 values after thinning acts like a random sample of 5071 when we estimate the coefficient of v4. 
The standard deviation of the simulated v4 coefficients was 0.09285 so, had mixing been perfect and the simulations independent, the standard error would have been 0.09285\/sqrt(10000)=0.0009285; allowing for the correlation it is 0.09285\/sqrt(n), about 0.0013, where n is the effective sample size of 5071.<\/p>\n<p>The disadvantage of this presentation is that the sem is not easily derived so we cannot tell whether to trust 1, 2, 3, 4 or 5 decimal places in the posterior means.<\/p>\n<p>pMCMC is a complete cheat. It is not a p-value at all, despite the stars that R uses to indicate significance, but rather it is twice the smaller of the probabilities that the true value is above or below zero. So, in the 10,000 simulations v4 must have been negative most of the time but positive on 1465 iterations, giving a 0.1465 chance that it is +ve and pMCMC=2*0.1465=0.293. pMCMC does express our confidence in the sign of the coefficient but the presentation invites confusion.<\/p>\n<p>The CIs are not confidence intervals or even credible intervals but highest posterior density (HPD) intervals.<\/p>\n<p>I&#8217;ve explained the output so that we can see the parallels between this analysis and the analyses that we obtained from WinBUGS and similar programs in previous postings. However, a Stata user need not rely on this presentation; they will read the raw results into Stata and they can summarise them in any way they choose.<\/p>\n<p>The raw results are stored in the R list structure called <strong>fit<\/strong>. We could write this structure to a text file and then read all or part of that file into Stata. In my previous postings I presented two Stata ado files called <strong>rdescribe.ado<\/strong> and <strong>rextract.ado<\/strong> that enable us to read R lists from files. 
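Incidentally, the effective-sample-size and pMCMC arithmetic described above is easy to reproduce; a minimal sketch in Python, using the v4 figures quoted in this posting:

```python
import math

# Figures quoted above for the v4 coefficient.
sd_v4 = 0.09285    # posterior standard deviation of the simulated coefficients
n_kept = 10000     # simulations kept after burn-in and thinning
eff_samp = 5071    # effective sample size from the summary table

sem_perfect = sd_v4 / math.sqrt(n_kept)    # if the simulations were independent
sem_actual = sd_v4 / math.sqrt(eff_samp)   # allowing for the autocorrelation
print(sem_perfect, sem_actual)             # about 0.00093 and 0.0013

# pMCMC is twice the smaller tail probability; v4 was positive on
# 1465 of the 10,000 saved iterations.
p_positive = 1465 / n_kept
pMCMC = 2 * min(p_positive, 1 - p_positive)
print(pMCMC)  # 0.293
```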
The full process comes in two stages: first we must add a couple of lines to our R code in order to write <strong>fit<\/strong> to a file; then we use rdescribe and rextract to read it in Stata.<\/p>\n<p>The extra R code is:<\/p>\n<p><span style=\"color: #008000\">sink(\"myout.txt\")<\/span><br \/>\n<span style=\"color: #008000\"> dput(fit)<\/span><br \/>\n<span style=\"color: #008000\"> sink()<\/span><\/p>\n<p>The first call to the function sink() redirects output away from the screen and into a file and the second call returns output to the screen. dput() writes the contents of the structure fit. In fact, we will run R in the background so the default output goes to a log file and not to the screen.<\/p>\n<p>To write the R script to a file and run it we can add the following lines to our Stata do file.<\/p>\n<p><span style=\"color: #0000ff\">wbsmodel MCMCglmm.do MCMCglmm.R<\/span><br \/>\n<span style=\"color: #0000ff\"> \/*<\/span><br \/>\n<span style=\"color: #0000ff\"> library(foreign)<\/span><br \/>\n<span style=\"color: #0000ff\"> library(MCMCglmm)<\/span><br \/>\n<span style=\"color: #0000ff\"> D &lt;- read.dta(file=\"E:\/blog\/topics\/epilepsy\/epilepsyforR.dta\")<\/span><br \/>\n<span style=\"color: #0000ff\"> prior &lt;- list( B=list(mu=c(2.6,1,0,0,0,0),V=diag(1,6,6)),<\/span><br \/>\n<span style=\"color: #0000ff\"> G=list( G1=list(V=2\/7,n=16) ),R=list(V=2\/7,n=16) )<\/span><br \/>\n<span style=\"color: #0000ff\"> fit &lt;- MCMCglmm(y ~ base+trt+bxt+age+v4,random= ~subject,<\/span><br \/>\n<span style=\"color: #0000ff\"> prior=prior,family=\"poisson\",nitt=55000,burnin=5000,thin=5,data=D)<\/span><br \/>\n<span style=\"color: #0000ff\"> summary(fit)<\/span><br \/>\n<span style=\"color: #0000ff\"> sink(\"myout.txt\")<\/span><br \/>\n<span style=\"color: #0000ff\"> dput(fit)<\/span><br \/>\n<span style=\"color: #0000ff\"> sink()<\/span><br \/>\n<span style=\"color: #0000ff\"> *\/<\/span><br \/>\n<span style=\"color: 
 execute R CMD BATCH">
#0000ff\"> execute R CMD BATCH MCMCglmm.R log.txt<\/span><\/p>\n<p>This code uses the <strong>wbsmodel<\/strong> command that I wrote for creating WinBUGS model files to copy the R commands to a text file called MCMCglmm.R; the commands are then sent to R using my <strong>execute<\/strong> command. Instructions for downloading all of my commands are given at the end of this posting.<\/p>\n<p><strong>execute<\/strong> assumes that you have set up a file called <strong>executables.txt<\/strong> in your personal directory containing a line such as:<br \/>\n<span style=\"color: #0000ff\">R,\"Z:\\Software\\R-3.1.2\\bin\\x64\\R.exe\"<\/span><br \/>\nThat is, it points to the location of the R.exe file on your system. The <strong>execute<\/strong> command sends the R output to log.txt. So in the log file you will find any error messages or, if everything works, you will find the results as produced by R&#8217;s summary(fit) command. The most convenient way to inspect the file is with the Stata command:<br \/>\n<span style=\"color: #0000ff\">view log.txt<\/span><\/p>\n<p>The raw numerical results are in <strong>myout.txt<\/strong>. R places a lot of information in the file and it is formatted as a complex list structure. 
To see what we have, give the Stata command,<br \/>\n<span style=\"color: #0000ff\">rdescribe using myout.txt<\/span><br \/>\nand you will be told that we have,<\/p>\n<p><span style=\"color: #0000ff\">Structures and sub-structures within file: myout.txt<\/span><br \/>\n<span style=\"color: #0000ff\"> Sol (matrix or set of variables)<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0\u00a0 Dim: 10000, 6<\/span><br \/>\n<span style=\"color: #0000ff\">Lambda (single value)<\/span><br \/>\n<span style=\"color: #0000ff\"> VCV (matrix or set of variables)<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0\u00a0 Dim: 10000, 2<\/span><br \/>\n<span style=\"color: #0000ff\"> CP (single value)<\/span><br \/>\n<span style=\"color: #0000ff\"> Liab (single value)<\/span><br \/>\n<span style=\"color: #0000ff\"> Fixed (list)<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0\u00a0 formula (single value)<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0\u00a0 nfl (single value)<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0\u00a0 nll (single value)<\/span><br \/>\n<span style=\"color: #0000ff\"> Random (list)<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0\u00a0 formula (single value)<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0\u00a0 nfl (single value)<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0\u00a0 nrl (single value)<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0\u00a0 nrt (single value)<\/span><br \/>\n<span style=\"color: #0000ff\"> Residual (list)<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0\u00a0 formula (single value)<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0\u00a0 nfl (single value)<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0\u00a0 nrl (single value)<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0\u00a0 nrt (single 
value)<\/span><br \/>\n<span style=\"color: #0000ff\"> Deviance (matrix or set of variables)<\/span><br \/>\n<span style=\"color: #0000ff\"> DIC (single value)<\/span><br \/>\n<span style=\"color: #0000ff\">X (single value)<\/span><br \/>\n<span style=\"color: #0000ff\"> Z (single value)<\/span><br \/>\n<span style=\"color: #0000ff\"> XL (single value)<\/span><br \/>\n<span style=\"color: #0000ff\"> error.term (single variable)<\/span><br \/>\n<span style=\"color: #0000ff\"> family (single variable)<\/span><br \/>\n<span style=\"color: #0000ff\"> Tune (single value)<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0\u00a0 Dim: 1, 1<\/span><\/p>\n<p>The key information is held in <strong>Sol<\/strong> (presumably solution). This contains the 10,000 simulations of the 6 regression coefficients. The variances of the random effects are in <strong>VCV<\/strong>.<\/p>\n<p>We can read Sol into Stata, rename the variables and plot them using the Stata commands:<\/p>\n<p><span style=\"color: #0000ff\">rextract using myout.txt, item(Sol) clear<\/span><\/p>\n<p><span style=\"color: #0000ff\">rename Sol_1 cons<\/span><br \/>\n<span style=\"color: #0000ff\"> rename Sol_2 base<\/span><br \/>\n<span style=\"color: #0000ff\"> rename Sol_3 trt<\/span><br \/>\n<span style=\"color: #0000ff\"> rename Sol_4 bxt<\/span><br \/>\n<span style=\"color: #0000ff\"> rename Sol_5 age<\/span><br \/>\n<span style=\"color: #0000ff\"> rename Sol_6 v4<\/span><\/p>\n<p><span style=\"color: #0000ff\">mcmctrace cons base trt bxt age v4<\/span><\/p>\n<p>The mixing is excellent and log.txt shows that the run took only about 18 seconds; better than WinBUGS and similar programs, 
although of course this program only fits glmms.<\/p>\n<p><a href=\"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/files\/2015\/04\/Rtrace-e1429007987705.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-638\" src=\"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/files\/2015\/04\/Rtrace-e1429007987705.png\" alt=\"Rtrace\" width=\"500\" height=\"363\" \/><\/a><\/p>\n<p>Next time I will present a Stata ado file that acts as an interface to MCMCglmm so that the Stata user does not need to know the R syntax.<\/p>\n<p>If you want to download any of the programs referred to in this posting they are available from my webpage at <a href=\"https:\/\/www2.le.ac.uk\/Members\/trj\">https:\/\/www2.le.ac.uk\/Members\/trj<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>I always advise\u00a0students not to tie themselves to one piece of statistical software, even when it is as good as Stata. It is inevitable that whichever statistical package\u00a0they choose, there will be tasks that are easier in another program or analyses that\u00a0their favourite does not offer. 
The problem with this advice is that it takes [&hellip;]<\/p>\n","protected":false},"author":134,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[58,33,4],"class_list":["post-627","post","type-post","status-publish","format-standard","hentry","category-uncategorized","tag-mcmcglmm","tag-r","tag-stata"],"_links":{"self":[{"href":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/wp-json\/wp\/v2\/posts\/627","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/wp-json\/wp\/v2\/users\/134"}],"replies":[{"embeddable":true,"href":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/wp-json\/wp\/v2\/comments?post=627"}],"version-history":[{"count":15,"href":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/wp-json\/wp\/v2\/posts\/627\/revisions"}],"predecessor-version":[{"id":658,"href":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/wp-json\/wp\/v2\/posts\/627\/revisions\/658"}],"wp:attachment":[{"href":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/wp-json\/wp\/v2\/media?parent=627"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/wp-json\/wp\/v2\/categories?post=627"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/wp-json\/wp\/v2\/tags?post=627"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}