{"id":634,"date":"2015-04-24T08:14:28","date_gmt":"2015-04-24T08:14:28","guid":{"rendered":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/?p=634"},"modified":"2025-02-26T13:21:37","modified_gmt":"2025-02-26T13:21:37","slug":"mcmcglmm-in-stata-part-2","status":"publish","type":"post","link":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/2015\/04\/24\/mcmcglmm-in-stata-part-2\/","title":{"rendered":"MCMCglmm in Stata (Part 2)"},"content":{"rendered":"<p>Last time, I showed how we can write R code for fitting a Bayesian Generalized Linear Mixed Model and insert it into a Stata do file, so that the computation is performed by the R program, <strong>MCMCglmm<\/strong>, while the results are available in Stata. The drawback of that approach is that it requires the user to understand the R syntax. This week, I am going to present a Stata command that writes the R code automatically so that a Stata user can fit models without knowing anything\u00a0about R.<\/p>\n<p>Rather than work through the code in the ado file that defines the command, I want to concentrate on some examples of its use, so I will give only\u00a0the briefest description of how it works.<\/p>\n<p>The R code for using MCMCglmm follows the same pattern for every model except for specific options such as, the distribution, the explanatory variables, the length of the MCMC chain and\u00a0the structure of the random effects. So I have produced a template called <strong>rtmcmcglmm.txt<\/strong> that I store in a subdirectory of my PERSONAL folder called <strong>RTEMPLATES<\/strong>. This template includes the R code for calling MCMCglmm, except that where there is something that might change, I replace it with an @. 
Then I have an ado file called <strong>rtemplates.ado<\/strong> that re-writes the file inserting specified items where it finds an @.<\/p>\n<p>My\u00a0command <strong>mcmcglmm.ado<\/strong> can be called from Stata and works out the required items for inserting into the template based on Stata options set by the user. So, to the user, it looks like any other Stata command.<\/p>\n<p>I have taken my examples from Sophia Rabe-Hesketh and Anders Skrondal\u2019s book entitled <em>Multilevel and Longitudinal Modelling using Stata<\/em>. I only have the one-volume original edition, but I believe that MCMCglmm will fit every model covered in that book and a great many more besides. It also has the added advantage of providing Bayesian fitting, so we have the choice of either setting informative priors, or of getting very close to the ML solution by setting vague priors. However, nothing is perfect, and in these examples we will see some of the problems with MCMCglmm.<\/p>\n<p>Let\u2019s start with a very simple linear model with normal errors and a random intercept. The book illustrates this situation with some data on qualifications obtained by 4059 children from 65 schools in inner London. The data are available to download from <a href=\"http:\/\/www.stata-press.com\/data\/mlmus\/gcse\">http:\/\/www.stata-press.com\/data\/mlmus\/gcse<\/a><\/p>\n<p>In the data there is a standardised qualification score, gcse, that was measured at age 16 and an explanatory variable, lrt, that is a standardised reading score from when the children started at the school aged 11.<\/p>\n<p>In the random intercept model we imagine that the average gcse score will vary between schools for all sorts of reasons, such as the quality of the teaching and the average socio-economic background of the children. 
Since these factors are unmeasured, we allow for them through a random intercept with a different value for each school.<\/p>\n<p>To analyse these data with MCMCglmm&#8217;s default of\u00a0very vague priors we\u00a0could use the\u00a0Stata program,<\/p>\n<p><span style=\"color: #0000ff\">use http:\/\/stata-press.com\/data\/mlmus\/gcse, clear<\/span><br \/>\n<span style=\"color: #0000ff\"> saveold gcse.dta , replace version(12)<\/span><\/p>\n<p><span style=\"color: #0000ff\">mcmcglmm gcse lrt, dta(gcse.dta) \/\/\/<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0\u00a0 family(gaussian) burnin(1000) updates(20000) thin(2) \/\/\/<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0\u00a0 random(school) log(log.txt) out(myout.txt)<\/span><\/p>\n<p><span style=\"color: #0000ff\">view log.txt<\/span><\/p>\n<p><span style=\"color: #0000ff\">rextract using myout.txt, item(Sol) clear<\/span><br \/>\n<span style=\"color: #0000ff\"> rextract using myout.txt, item(VCV) add<\/span><br \/>\n<span style=\"color: #0000ff\"> rename Sol_1 cons<\/span><br \/>\n<span style=\"color: #0000ff\"> rename Sol_2 lrt<\/span><br \/>\n<span style=\"color: #0000ff\"> gen sd_s = sqrt(VCV_1)<\/span><br \/>\n<span style=\"color: #0000ff\"> gen sd_r = sqrt(VCV_2)<\/span><br \/>\n<span style=\"color: #0000ff\"> gen rho = VCV_1\/(VCV_1+VCV_2)<\/span><br \/>\n<span style=\"color: #0000ff\"> mcmctrace cons lrt sd_s sd_r<\/span><br \/>\n<span style=\"color: #0000ff\">mcmcstats cons lrt sd_s sd_r rho<\/span><\/p>\n<p>I\u00a0was not sure what version of Stata the data were stored under, so I read the data and saved them in an old format that R can read. Next I use my <strong>mcmcglmm<\/strong> command, which gives the specification of the\u00a0options a Stata-like feel. The basic model is <strong>gcse=cons+lrt<\/strong>, hence these variables follow the command as they would with regress. 
The <strong>dta<\/strong> option allows me to specify the source of the data; we are using normal-errors regression, so the <strong>family<\/strong> is gaussian, and I specify the length of the MCMC run. The <strong>random<\/strong> term has a different intercept for each school, the summarised results will be sent to <strong>log.txt<\/strong> and the raw results will go to <strong>myout.txt<\/strong>.<\/p>\n<p>I check the log file with the <strong>view<\/strong> command to make sure that there were no problems, then I read the simulated regression coefficients (<strong>Sol<\/strong>) and the simulated variances (<strong>VCV<\/strong>) from myout.txt as described last week. I do a bit of renaming, check the traces (mixing is very good, so I will not show them) and summarise the results, including the proportion of the variance explained by the random intercept.<\/p>\n<pre>-----------------------------------------------------------\r\n Parameter   n  mean    sd    sem median     95% CrI\r\n ----------------------------------------------------------\r\n cons    10000 0.018 0.403 0.0040 0.012 ( -0.767, 0.816 )\r\n lrt     10000 0.563 0.012 0.0001 0.564 ( 0.539, 0.587 )\r\n sd_s    10000 3.096 0.320 0.0033 3.075 ( 2.524, 3.776 )\r\n sd_r    10000 7.525 0.085 0.0009 7.524 ( 7.360, 7.694 )\r\n rho     10000 0.145 0.026 0.0003 0.143 ( 0.100, 0.202 )\r\n ----------------------------------------------------------<\/pre>\n<p>The results are within sampling error of those given by <strong>xtmixed<\/strong>, with the advantage that the credible intervals do not rely on normal approximations. If you want more decimal places of accuracy then just increase the length of the MCMC run.<\/p>\n<p>I have fitted the model as in <em>Multilevel and Longitudinal Modelling using Stata<\/em> but I think that we would have a strong case for omitting the constant from the model given that y and x are standardised. 
You could do this with the noconstant option.<\/p>\n<p><span style=\"color: #0000ff\">mcmcglmm gcse lrt, dta(gcse.dta) nocons \/\/\/<\/span><br \/>\n<span style=\"color: #0000ff\"> family(gaussian) burnin(1000) updates(20000) thin(2) \/\/\/<\/span><br \/>\n<span style=\"color: #0000ff\"> random(school) log(log.txt) out(myout.txt)<\/span><\/p>\n<p>The next question to ask is whether the slope really is 0.56 for every school; perhaps the performance of children in some schools is more dependent on their ability when they arrive than it is in other schools. Now we want a random intercept and a random slope. I\u2019ll just give the bits of the code that change,<\/p>\n<p><span style=\"color: #0000ff\">mcmcglmm gcse lrt , dta(gcse.dta) \/\/\/<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0\u00a0 family(gaussian) burnin(1000) updates(20000) thin(2) \/\/\/<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0\u00a0random(idh(1+lrt):school) log(log.txt) out(myout.txt)<\/span><br \/>\n<span style=\"color: #0000ff\"> gen sd_s = sqrt(VCV_1)<\/span><br \/>\n<span style=\"color: #0000ff\"> gen sd_sl = sqrt(VCV_2)<\/span><br \/>\n<span style=\"color: #0000ff\"> gen sd_r = sqrt(VCV_3)<\/span><br \/>\n<span style=\"color: #0000ff\"> mcmcstats cons lrt sd_s sd_sl sd_r<\/span><\/p>\n<p>To obtain both a random intercept and a random slope we specify<br \/>\n<strong>idh(1+lrt):school<\/strong><br \/>\nwhich I think is much more intuitive than xtmixed\u2019s<br \/>\n<strong>|| school: lrt, cov(ind)<\/strong><br \/>\nIn\u00a0MCMCglmm&#8217;s notation the school random effect acts on the intercept (1) and the coefficient (lrt), hence 1+lrt, and idh requests independence between the two random effects.<\/p>\n<p>The analysis takes about 30 seconds and gives results equivalent to those provided by xtmixed.<\/p>\n<pre>-----------------------------------------------------------\r\n Parameter\u00a0\u00a0 n\u00a0 mean\u00a0\u00a0\u00a0 sd\u00a0\u00a0\u00a0 sem 
median\u00a0\u00a0\u00a0\u00a0 95% CrI\r\n ----------------------------------------------------------\r\n cons    10000 -0.077 0.401 0.0040 -0.082 ( -0.862, 0.713 )\r\n lrt     10000  0.557 0.020 0.0002  0.557 ( 0.517, 0.596 )\r\n sd_s    10000  3.063 0.317 0.0034  3.040 ( 2.500, 3.743 )\r\n sd_sl   10000  0.120 0.020 0.0003  0.119 ( 0.085, 0.162 )\r\n sd_r    10000  7.443 0.085 0.0008  7.443 ( 7.281, 7.611 )\r\n -----------------------------------------------------------<\/pre>\n<p>It is possible, even likely, that the two random effects will not be independent. Schools that do well overall might be more or less influenced by the ability of the children when they start. So it would be interesting to allow the random effects to be correlated. We do this by a simple change to the random option,<br \/>\n<strong>us(1+lrt):school<\/strong><br \/>\nwhere us stands for unstructured (think USA).<\/p>\n<p>Interestingly the MCMC analysis has trouble differentiating between all of the sources of variance. This is an example where it is better to have realistically vague priors. So I set,<\/p>\n<p><span style=\"color: #0000ff\">matrix v = (10,0,0.01)<\/span><br \/>\n<span style=\"color: #0000ff\"> mcmcglmm gcse lrt, dta(gcse.dta) \/\/\/<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0\u00a0 family(gaussian) burnin(1000) updates(20000) thin(2) \/\/\/<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0\u00a0 rprior(V=50 nu=5) gprior(V=v nu=5) \/\/\/<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0\u00a0 random(us(1+lrt):school) log(log.txt) out(myout.txt)<\/span><\/p>\n<p>I set the value of nu (my confidence in my guess at the variance) to be quite low so I am still having very little impact on the results, for example my prior says that the residual variance is likely to be between 3 and 845. 
Even this small guidance is enough to get the algorithm to converge and it gives,<\/p>\n<pre>-------------------------------------------------------------\r\n Parameter   n   mean    sd    sem median   95% CrI\r\n ------------------------------------------------------------\r\n cons    10000 -0.108 0.408 0.0041 -0.111 ( -0.921, 0.682 )\r\n lrt     10000  0.556 0.020 0.0002  0.556 ( 0.516, 0.595 )\r\n sd_s    10000  3.113 0.308 0.0033  3.093 ( 2.577, 3.772 )\r\n sd_sl   10000  0.122 0.018 0.0003  0.121 ( 0.090, 0.160 )\r\n r       10000  0.420 0.142 0.0019  0.432 ( 0.116, 0.667 )\r\n sd_r    10000  7.442 0.085 0.0008  7.440 ( 7.281, 7.613 )\r\n ------------------------------------------------------------<\/pre>\n<p>Once again we are very close to xtmixed\u2019s maximum likelihood solution. You can see the source of the convergence problem\u00a0by looking at the choice that I made for the prior variance matrix v that put things right. The variance on the intercept is about 1000 times greater than the variance on the slope. 
We could have had much the same effect on convergence by scaling lrt differently.<\/p>\n<p>So I tried,<br \/>\n<span style=\"color: #0000ff\">use http:\/\/stata-press.com\/data\/mlmus\/gcse, clear<\/span><br \/>\n<span style=\"color: #0000ff\"> replace lrt = lrt\/10<\/span><br \/>\n<span style=\"color: #0000ff\"> saveold gcse.dta , replace version(12)<\/span><br \/>\n<span style=\"color: #0000ff\"> mcmcglmm gcse lrt using temp.R, dta(gcse.dta) \/\/\/<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0\u00a0 family(gaussian) burnin(1000) updates(20000) thin(2) \/\/\/<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0\u00a0 random(us(1+lrt):school) \/\/\/<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0\u00a0 log(log.txt) out(myout.txt)<\/span><\/p>\n<p>Now there are no convergence issues and (apart from the factor of 10) we get the same answer as xtmixed to within sampling error.<\/p>\n<pre>----------------------------------------------------------------------------\r\n Parameter   n   mean    sd    sem median    95% CrI\r\n----------------------------------------------------------------------------\r\n cons    10000 -0.113 0.408 0.0041 -0.113 ( -0.914, 0.681 )\r\n lrt     10000  5.568 0.206 0.0021  5.569 ( 5.161, 5.971 )\r\n sd_s    10000  3.092 0.319 0.0034  3.067 ( 2.538, 3.773 )\r\n sd_sl   10000  1.221 0.203 0.0036  1.213 ( 0.846, 1.643 )\r\n r       10000  0.506 0.154 0.0024  0.517 ( 0.172, 0.772 )\r\n sd_r    10000  7.445 0.084 0.0008  7.444 ( 7.282, 7.612 )\r\n ----------------------------------------------------------------------------<\/pre>\n<p>Interestingly the model also fits without problems if we set the noconstant option, as perhaps we should.<\/p>\n<p>Now let\u2019s try a model with a binary outcome. 
Rabe-Hesketh and Skrondal analyse data from a trial of the treatment of toenail fungus infection.<br \/>\n<a href=\"http:\/\/www.stata-press.com\/data\/mlmus\/toenail\">http:\/\/www.stata-press.com\/data\/mlmus\/toenail<\/a><br \/>\nRandom effects in binary models are notoriously difficult to fit and in the book they use numerical integration in <strong>gllamm<\/strong> to obtain ML estimates.<\/p>\n<p>The file contains data on 294 patients randomized to either itraconazole or terbinafine. They are monitored regularly over\u00a0an eighteen-month period and assessed on whether or not the toenail has separated from the nail bed. In total there are 1908 measurements.<\/p>\n<p>Now this is a poor example on two counts. First, these drugs are known to take about six months to work, so there is unlikely to be\u00a0any change at first and then a fairly sudden effect; modelling this with a linear term in time may not be very effective. Second, if your toenail is separated from the bed at one visit, there is a very high chance that it will still be separated on the next visit; so we expect high autocorrelation. Let\u2019s treat the example as being purely for illustration.<\/p>\n<p>The model fitted by Rabe-Hesketh and Skrondal has a random effect for patient. We have a problem with MCMCglmm in that it always fits a residual variance, which in this case we do not want. 
We could set a prior on the residual variance that forces it to be very small; in fact MCMCglmm allows you to fix a variance completely, so that it is not estimated, as in,<\/p>\n<p><span style=\"color: #0000ff\">mcmcglmm outcome trt mon txm using temp.R, dta(toenail.dta) \/\/\/<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0\u00a0 family(categorical) burnin(10000) updates(100000) thin(10) \/\/\/<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0\u00a0 gprior(V=1 nu = 5) rprior(V=0.001 fix) \/\/\/<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0\u00a0 random(patient) log(log.txt) out(myout.txt)<\/span><\/p>\n<p>Unfortunately the resulting MCMC algorithm mixes extremely poorly (we are near the boundary of the parameter space for the residual variance).<\/p>\n<p><a href=\"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/files\/2015\/04\/mcmcglmmtrace1-e1429198101374.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-652\" src=\"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/files\/2015\/04\/mcmcglmmtrace1-e1429198101374.png\" alt=\"mcmcglmmtrace1\" width=\"500\" height=\"366\" \/><\/a><\/p>\n<p>When, as here, we fit both a residual variance and a patient random effect, there is almost no information in the data to distinguish them. What is well defined is the intraclass correlation, ICC, i.e. the proportion of the variance that is ascribed to the patient term.<\/p>\n<p><span style=\"color: #0000ff\">ICC = Var(patient) \/ ( Var(patient)+Var(residual)+\u03c0<sup>2<\/sup>\/3)<\/span><\/p>\n<p>The term \u03c0<sup>2<\/sup>\/3 is the variance of the standard logistic distribution implied by the logit link, and represents the variation that comes from the binary responses. 
So if Var(patient_r) represents the variance of the patient random effect when the residual variance is equal to r, we have,<\/p>\n<p><span style=\"color: #0000ff\">Var(patient_1) \/ ( Var(patient_1)+1+\u03c0<sup>2<\/sup>\/3) = Var(patient_0) \/ ( Var(patient_0)+\u03c0<sup>2<\/sup>\/3)<\/span><\/p>\n<p>A little algebra tells us that<br \/>\n<span style=\"color: #0000ff\">Var(patient_0) = \u03c0<sup>2<\/sup>\/(3+\u03c0<sup>2<\/sup>) Var(patient_1) = 0.7669 Var(patient_1)<\/span><br \/>\nor<br \/>\n<span style=\"color: #0000ff\">sd(patient_0) = 0.8757 sd(patient_1)<\/span><\/p>\n<p>So we can fit the model with the variance fixed at 1, so that we are not too near the edge of the parameter space, and deduce the solution that we would have got had the variance been fixed at 0. We need to scale the regression coefficients by the same factor of 0.8757.<\/p>\n<p>There are two objections to this method: (a) the ICC calculation is based on an approximation; (b) it is hard to set priors on something that is going to change after the analysis. 
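The rescaling factor is easy to check numerically; here is a quick sketch of the algebra in Python (a verification only, not part of the Stata program):

```python
import math

# Check the rescaling: holding the latent-scale ICC fixed, moving the
# residual variance from 0 to 1 inflates the patient variance by
# (3 + pi^2)/pi^2, so we deflate by its reciprocal when reporting.
a = math.pi ** 2 / 3                 # variance of the standard logistic

var_factor = a / (1 + a)             # = pi^2 / (3 + pi^2)
sd_factor = math.sqrt(var_factor)

print(round(var_factor, 4))          # 0.7669
print(round(sd_factor, 4))           # 0.8757

# Confirm the ICC identity for an arbitrary Var(patient_1), e.g. 16:
v1 = 16.0
v0 = var_factor * v1
icc1 = v1 / (v1 + 1 + a)             # residual variance fixed at 1
icc0 = v0 / (v0 + a)                 # residual variance fixed at 0
assert abs(icc1 - icc0) < 1e-12
```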
But if you do not like it, then don\u2019t use MCMCglmm\u00a0for models that do not have a\u00a0residual variance term.<\/p>\n<p>Here is my full Stata program,<\/p>\n<p><span style=\"color: #0000ff\">use http:\/\/stata-press.com\/data\/mlmus\/toenail, clear<\/span><br \/>\n<span style=\"color: #0000ff\"> rename treatment trt<\/span><br \/>\n<span style=\"color: #0000ff\"> rename month mon<\/span><br \/>\n<span style=\"color: #0000ff\">gen txm = trt*mon<\/span><br \/>\n<span style=\"color: #0000ff\"> saveold toenail.dta , replace version(12)<\/span><\/p>\n<p><span style=\"color: #0000ff\">mcmcglmm outcome trt mon txm using temp.R, dta(toenail.dta) \/\/\/<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0\u00a0 family(categorical) burnin(10000) updates(100000) thin(10) \/\/\/<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0\u00a0 gprior(V=1 nu = 5) rprior(V=1 fix) \/\/\/<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0\u00a0 random(patient) log(log.txt) out(myout.txt)<\/span><\/p>\n<p><span style=\"color: #0000ff\">view log.txt<\/span><\/p>\n<p><span style=\"color: #0000ff\">rextract using myout.txt, item(Sol) clear<\/span><br \/>\n<span style=\"color: #0000ff\"> rextract using myout.txt, item(VCV) add<\/span><br \/>\n<span style=\"color: #0000ff\"> rename Sol_1 cons<\/span><br \/>\n<span style=\"color: #0000ff\"> rename Sol_2 trt<\/span><br \/>\n<span style=\"color: #0000ff\"> rename Sol_3 mon<\/span><br \/>\n<span style=\"color: #0000ff\"> rename Sol_4 txm<\/span><br \/>\n<span style=\"color: #0000ff\"> gen sd_r = sqrt(VCV_1)<\/span><br \/>\n<span style=\"color: #0000ff\"> gen sd_p = sqrt(VCV_2)<\/span><br \/>\n<span style=\"color: #0000ff\"> foreach v in cons trt mon txm sd_p {<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0\u00a0 qui replace `v&#8217; = `v&#8217;*0.8757<\/span><br \/>\n<span style=\"color: #0000ff\"> }<\/span><br \/>\n<span style=\"color: #0000ff\"> gen icc = 
VCV_1\/(VCV_1+VCV_2+c(pi)^2\/3)<\/span><br \/>\n<span style=\"color: #0000ff\"> mcmctrace cons trt mon txm<\/span><br \/>\n<span style=\"color: #0000ff\">mcmcstats_v2 cons trt mon txm sd_r sd_p icc ,newey(100) ess<\/span><\/p>\n<p>Convergence is still not perfect.<\/p>\n<p><a href=\"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/files\/2015\/04\/mcmcglmmtrace2-e1429198240441.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-653\" src=\"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/files\/2015\/04\/mcmcglmmtrace2-e1429198240441.png\" alt=\"mcmcglmmtrace2\" width=\"500\" height=\"366\" \/><\/a><\/p>\n<p>But it is much better than it was. The results are<\/p>\n<pre>-----------------------------------------------------------\r\n Parameter   n   mean    sd  ess  median     95% CrI\r\n ----------------------------------------------------------\r\n cons    10000 -1.614 0.424  679 -1.609 ( -2.450, -0.823 )\r\n trt     10000 -0.189 0.566 1334 -0.178 ( -1.322, 0.930 )\r\n mon     10000 -0.383 0.042  369 -0.382 ( -0.468, -0.303 )\r\n txm     10000 -0.129 0.067  328 -0.128 ( -0.259, 0.000 )\r\n sd_p    10000  3.946 0.357  165  3.936 ( 3.273, 4.679 )\r\n icc     10000  0.823 0.027  164  0.825 ( 0.765, 0.869 )\r\n ----------------------------------------------------------<\/pre>\n<p>Very similar to the results found by gllamm in Rabe-Hesketh and Skrondal. We might get even closer agreement by weakening the prior on the patient variance and running a longer chain.<\/p>\n<p>You might have noticed that this table includes ess, the <strong>effective sample size<\/strong>. I have been influenced by my recent attempts to use MCMCglmm and Stan to add a couple of new options to mcmcstats, including the effective sample size. 
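To give a rough idea of what the ess column measures, here is my own illustrative sketch in Python of the standard estimate (not the mcmcstats implementation): ess = n divided by 1 + 2 times the sum of the chain's autocorrelations, with the sum truncated once the autocorrelation becomes negligible.

```python
import random

# Rough sketch of the usual effective-sample-size estimate for an MCMC chain:
#   ess = n / (1 + 2 * sum_k rho_k)
# where rho_k are the lag-k autocorrelations, summed until they are small.
def effective_sample_size(x):
    n = len(x)
    mean = sum(x) / n
    c0 = sum((v - mean) ** 2 for v in x) / n
    denom = 1.0
    for k in range(1, n):
        ck = sum((x[i] - mean) * (x[i + k] - mean) for i in range(n - k)) / n
        rho = ck / c0
        if rho < 0.05:           # simple truncation rule
            break
        denom += 2.0 * rho
    return n / denom

# Independent draws give an ess close to n; a highly autocorrelated
# chain (here AR(1) with coefficient 0.9) gives a much smaller ess.
random.seed(1)
iid = [random.gauss(0, 1) for _ in range(2000)]
ar = [0.0]
for _ in range(1999):
    ar.append(0.9 * ar[-1] + random.gauss(0, 1))
print(effective_sample_size(iid) > effective_sample_size(ar))  # True
```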
I\u2019ll discuss that another time.<\/p>\n<p>As usual, all of the programs referred to in this posting can be downloaded from my webpage at <a href=\"https:\/\/www2.le.ac.uk\/Members\/trj\">https:\/\/www2.le.ac.uk\/Members\/trj<\/a>\u00a0and these include the version of mcmcstats that provides the effective sample size. Keep in mind that programs that I release as part of this blog are often under development and may change in the future.<\/p>\n<p>There is quite a lot more to say about MCMCglmm: saving the random effect estimates, making predictions, analysing other types of model, more complicated variance structures, and so on. I&#8217;ll return to this topic in the future, but next week I want to\u00a0start the process of explaining how <strong>Stan<\/strong> can be used with Stata. Stan is\u00a0another stand-alone Bayesian analysis program, but\u00a0it takes a very different approach from WinBUGS and so opens up many new possibilities.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Last time, I showed how we can write R code for fitting a Bayesian Generalized Linear Mixed Model and insert it into a Stata do file, so that the computation is performed by the R program, MCMCglmm, while the results are available in Stata. 
The drawback of that approach is that it requires the user [&hellip;]<\/p>\n","protected":false},"author":134,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[58,33,47,4],"class_list":["post-634","post","type-post","status-publish","format-standard","hentry","category-uncategorized","tag-mcmcglmm","tag-r","tag-random-effects","tag-stata"],"_links":{"self":[{"href":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/wp-json\/wp\/v2\/posts\/634","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/wp-json\/wp\/v2\/users\/134"}],"replies":[{"embeddable":true,"href":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/wp-json\/wp\/v2\/comments?post=634"}],"version-history":[{"count":13,"href":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/wp-json\/wp\/v2\/posts\/634\/revisions"}],"predecessor-version":[{"id":685,"href":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/wp-json\/wp\/v2\/posts\/634\/revisions\/685"}],"wp:attachment":[{"href":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/wp-json\/wp\/v2\/media?parent=634"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/wp-json\/wp\/v2\/categories?post=634"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/wp-json\/wp\/v2\/tags?post=634"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}