{"id":120,"date":"2014-05-09T14:13:08","date_gmt":"2014-05-09T14:13:08","guid":{"rendered":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/?p=120"},"modified":"2025-02-26T13:21:39","modified_gmt":"2025-02-26T13:21:39","slug":"mixtures-of-normal-distributions","status":"publish","type":"post","link":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/2014\/05\/09\/mixtures-of-normal-distributions\/","title":{"rendered":"Mixtures of Normal Distributions"},"content":{"rendered":"<p>In my last posting I started a library of Mata functions for use in Bayesian analysis, and this week I will add a function that fits mixtures of normal distributions using a Bayesian Gibbs sampling algorithm.<\/p>\n<p>The normal distribution is the\u00a0underlying assumption\u00a0for\u00a0many statistical models, and data are often transformed to make their distribution look normal\u00a0so that these standard methods can be used. When we cannot find an appropriate transformation to normality, we are forced to\u00a0consider other distributions with a more appropriate shape, but the theory behind analyses\u00a0that use\u00a0non-normal distributions is often appreciably more complex. One way to maintain the nice mathematical properties of the normal distribution but to gain far greater flexibility of shape is to mix together two or more normal distributions. 
The example below shows how a non-symmetrical distribution with two modes and a heavy tail can be formed from a mixture of three normal distributions.<\/p>\n<p><a href=\"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/files\/2014\/05\/mixture.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-medium wp-image-121\" src=\"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/files\/2014\/05\/mixture-300x218.png\" alt=\"mixture\" width=\"300\" height=\"218\" srcset=\"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/files\/2014\/05\/mixture-300x218.png 300w, https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/files\/2014\/05\/mixture.png 716w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/a><\/p>\n<p>This mixture distribution\u00a0is made up of a set of component normal distributions and we can generalize this approach into higher dimensions in a simple Bayesian analysis provided that we are willing to use multivariate normal priors for the means of the components, Wishart priors for the precision matrices of the components and a Dirichlet prior for the proportions from each component.\u00a0Under these conditions\u00a0all of the distributions\u00a0will be\u00a0conjugate and it is trivial to derive a Gibbs sampler for fitting the model. 
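The density of such a mixture is just a weighted sum of component normal densities, f(x) = p1*N(x;mu1,sd1) + p2*N(x;mu2,sd2) + p3*N(x;mu3,sd3). A minimal sketch in Python (the post's own code is in Stata/Mata; the weights, means and standard deviations below are illustrative and are not read off the figure):

```python
import math

def normal_pdf(x, mu, sd):
    """Density of a normal distribution with mean mu and standard deviation sd."""
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def mixture_pdf(x, weights, means, sds):
    """Density of a finite mixture: a weighted sum of component normal densities."""
    return sum(w * normal_pdf(x, m, s) for w, m, s in zip(weights, means, sds))

# Illustrative parameters: two nearby modes plus a wide component for the heavy tail.
weights = [0.45, 0.35, 0.20]
means = [0.0, 3.0, 6.0]
sds = [0.7, 0.8, 2.5]
```

The third component's large standard deviation is what produces the heavy tail: its density decays far more slowly than the other two.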
The algorithm starts with initial parameter estimates and an initial allocation of subjects\/items to the components and then cycles repeatedly through four steps:<\/p>\n<ol>\n<li>Update the mean for each component normal distribution using data on the subjects currently allocated to that component<\/li>\n<li>Update the variance matrix for each component using data on subjects currently allocated to that component<\/li>\n<li>Update the probabilities of belonging to each component using the current numbers allocated to each component<\/li>\n<li>Update the allocation of subjects to components using the probabilities that a person\u2019s data would be generated by each component<\/li>\n<\/ol>\n<p>My Mata function that implements this Gibbs sampler is called <strong>mixMNormal<\/strong>() and has been added to the library <strong>libbayes<\/strong>; instructions for downloading it are given at the end of this posting. The general method for creating <strong>libbayes<\/strong> was discussed in a previous posting, \u2018<a href=\"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/2014\/05\/02\/creating-a-mata-library\/\"><em>Creating a Mata Library<\/em><\/a>\u2019.<\/p>\n<p>The function, <strong>mixMNormal<\/strong>(),\u00a0needs to be able to cope with any number of components, each\u00a0with its\u00a0own precision matrix, but unfortunately\u00a0Mata does not\u00a0allow arrays of matrices.\u00a0My solution is to pack the matrices into one large matrix by adding them to the right. So suppose that we have bivariate data so that the precision matrices Ti are all 2&#215;2 and let\u2019s imagine that there are three components in the mixture, so i=1,2,3. The three precision matrices are packed into a 2&#215;6 matrix as T=[T1,T2,T3].<\/p>\n<p>To unpack the matrices we can use Mata\u2019s powerful range indexing, in which T[|1,3\2,4|] extracts the matrix with top left element (1,3) and bottom right element (2,4). 
In our example this would extract T2 from T.<\/p>\n<p>Finally, if you look at the code for\u00a0<strong>mixMNormal<\/strong>() you will see that it writes the parameter estimates from each update to a comma delimited file so they can be read back into Stata using the <strong>-insheet-<\/strong> or <strong>-import-<\/strong> commands. There is no equivalent of the <strong>-post-<\/strong> command in Mata that would write directly to a .dta file.<\/p>\n<p>An unfortunate feature of Mata is that were the program to crash (it shouldn\u2019t but you never know), or should you stop the run before it has finished, then the comma delimited results\u00a0file will be left open and any future attempt to use that file name during the current\u00a0session will result in an error (the file is only closed when you close Stata). As far as I know there is no other way to close such an open file, so to continue\u00a0you must\u00a0select a fresh name for the csv file.<\/p>\n<p>To use\u00a0the Mata function from within Stata\u00a0it is convenient to create\u00a0a Stata program that calls mixMNormal(). 
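The packing scheme described above can be mimicked in Python with NumPy (a sketch for illustration only; note that Mata's range subscripts are one-based and inclusive, while the NumPy slice below is zero-based with an exclusive stop):

```python
import numpy as np

# Three 2x2 precision matrices packed side by side, as in the post: T = [T1, T2, T3].
# The matrix values are illustrative, not taken from the post.
T1 = np.array([[1.0, 0.2], [0.2, 1.0]])
T2 = np.array([[2.0, 0.0], [0.0, 2.0]])
T3 = np.array([[0.5, 0.1], [0.1, 0.5]])
T = np.hstack([T1, T2, T3])  # a 2 x 6 packed matrix

def unpack(T, i, d=2):
    """Extract the i-th (1-based) d x d block from the packed matrix."""
    c = (i - 1) * d              # zero-based first column of block i
    return T[:, c:c + d]
```

Here `unpack(T, 2)` plays the role of Mata's range subscript with corners (1,3) and (2,4), recovering T2.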
You could make this\u00a0very elaborate by, for instance, adding the facility for if or in, or making checks for missing data, or checks on the sizes of matrices; however, the following code does the basic job.<\/p>\n<pre><span style=\"color: #0000ff\"><strong>program mixmnormal <\/strong><\/span><\/pre>\n<pre><span style=\"color: #0000ff\"><strong>\u00a0\u00a0 syntax varlist , MU(string) T(string) P(string) n(integer 2) \/\/\/<\/strong><\/span><\/pre>\n<pre><span style=\"color: #0000ff\"><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 M(string) B(string) R(string) DF(string) C(string) ALPHA(string) \/\/\/<\/strong><\/span><\/pre>\n<pre><span style=\"color: #0000ff\"><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 BUrnin(integer 500) Updates(integer 1000) Filename(string)<\/strong><\/span><\/pre>\n<pre><span style=\"color: #0000ff\"><strong>\u00a0<\/strong><\/span><\/pre>\n<pre><span style=\"color: #0000ff\"><strong>\u00a0\u00a0 tempname Y<\/strong><\/span><\/pre>\n<pre><span style=\"color: #0000ff\"><strong>\u00a0\u00a0 <\/strong><strong>mata: mata clear<\/strong><\/span><\/pre>\n<pre><span style=\"color: #0000ff\"><strong>\u00a0\u00a0 qui putmata `Y'=(`varlist')<\/strong><\/span><\/pre>\n<pre><span style=\"color: #0000ff\"><strong>\u00a0\u00a0 <\/strong><strong>local list \"`mu' `r' `df' `m' `b' `t' `c' `alpha' `p'\"<\/strong><\/span><\/pre>\n<pre><span style=\"color: #0000ff\"><strong>\u00a0\u00a0 foreach item of local list {<\/strong><\/span><\/pre>\n<pre><span style=\"color: #0000ff\"><strong>\u00a0\u00a0\u00a0\u00a0 mata : `item' = st_matrix(\"`item'\")<\/strong><\/span><\/pre>\n<pre><span style=\"color: #0000ff\"><strong>\u00a0\u00a0 }<\/strong><\/span><\/pre>\n<pre><span style=\"color: #0000ff\"><strong>\u00a0\u00a0 cap erase \"`filename'\"<\/strong><\/span><\/pre>\n<pre><span style=\"color: #0000ff\"><strong>\u00a0\u00a0 mata: mixMNormal(`Y',`mu',`t',`p',`n',`m',`b',`r',`df', \/\/\/<\/strong><\/span><\/pre>\n<pre><span style=\"color: 
#0000ff\"><strong>\u00a0\u00a0\u00a0\u00a0 `c',`alpha',`burnin',`updates',\"`filename'\")<\/strong><\/span><\/pre>\n<pre><strong><span style=\"color: #0000ff\">end<\/span> <\/strong><\/pre>\n<p>This code copies the data and the matrices from Stata\u2019s memory into Mata\u2019s memory and then calls mixMNormal(). It is included in the download file referred to at the end of the posting for anyone who wants to try it.<\/p>\n<p>Although the code works with any dimension of\u00a0data we will return to one dimension for the example since this is easier to visualize. We will model the distribution of fish lengths given in a much analysed data set referred to in the book by Titterington et al., <em>Statistical analysis of finite mixture distributions<\/em> (Wiley, 1985), and subsequently supplied in the R package <strong>bayesmix<\/strong>. The data set contains the lengths (presumably in inches) of 256 snappers.<\/p>\n<p><a href=\"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/files\/2014\/05\/fish.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter  wp-image-122\" src=\"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/files\/2014\/05\/fish.png\" alt=\"fish\" width=\"452\" height=\"329\" srcset=\"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/files\/2014\/05\/fish.png 716w, https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/files\/2014\/05\/fish-300x218.png 300w\" sizes=\"auto, (max-width: 452px) 100vw, 452px\" \/><\/a><\/p>\n<p>In our first analysis\u00a0we will fit a mixture with 6 components and set the parameter for the Dirichlet prior to (20,20,10,5,1,1). Thus we can think of our prior knowledge as being equivalent to having a previous sample of\u00a057 fish in which we found\u00a0two\u00a0common components, two rare components and two in between. 
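Because the Dirichlet parameters behave like prior counts, the implied prior sample size and expected component proportions follow directly. A quick check in Python (not part of the post's code):

```python
# Dirichlet prior parameters from the post: each entry acts like a prior count of fish.
alpha = [20, 20, 10, 5, 1, 1]
prior_n = sum(alpha)                        # equivalent prior sample: 57 fish
prior_mean = [a / prior_n for a in alpha]   # expected component proportions a priori
relative_weight = prior_n / 256             # prior information relative to the 256 real fish
```

The expected proportions a priori are roughly (35%, 35%, 18%, 9%, 2%, 2%), and the 57 prior "fish" amount to a bit over a fifth of the 256 observed lengths.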
Given that there are only 256 fish in the actual data, this prior will be equivalent to about\u00a01\/5 of\u00a0the data.<\/p>\n<p>In the absence of real knowledge about the locations of the components, we will set the priors on the means to all be Normal with mean 6 and precision=0.1. That is,\u00a0the prior on the component means has a\u00a0standard deviation of 3.2=1\/sqrt(0.1), so we expect the components to have means in the approximate range (0,12). Finally we must place a prior on the precision of each component. In one dimension a Wishart W(R,k) is equivalent to a Gamma distribution G(k\/2,2\/R) so it has a mean precision of k\/R, or put another way, a variance of R\/k. If we set k=3 and R=3, then we expect the components to have a standard deviation of 1, but the distribution of the precision would be<\/p>\n<p><a href=\"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/files\/2014\/05\/fishprior.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-medium wp-image-123\" src=\"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/files\/2014\/05\/fishprior-300x218.png\" alt=\"fishprior\" width=\"300\" height=\"218\" srcset=\"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/files\/2014\/05\/fishprior-300x218.png 300w, https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/files\/2014\/05\/fishprior.png 716w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/a><\/p>\n<p>So we would not be surprised by a precision of 0.25 (st dev=2) or a precision of 4 (st dev=0.5). 
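The one-dimensional Wishart-to-Gamma reduction is easy to verify numerically. The sketch below (plain Python, not the post's code) checks that with k=3 and R=3 the mean precision is k/R=1, and that the prior still puts appreciable density on precisions of 0.25 and 4:

```python
import math

def gamma_pdf(x, shape, scale):
    """Density of a Gamma distribution in the shape/scale parameterization."""
    return x ** (shape - 1) * math.exp(-x / scale) / (math.gamma(shape) * scale ** shape)

# In one dimension a precision tau ~ W(R, k) reduces to Gamma(shape=k/2, scale=2/R),
# so E[tau] = (k/2) * (2/R) = k/R.
k, R = 3.0, 3.0
shape, scale = k / 2, 2 / R
mean_precision = shape * scale   # = k/R = 1, i.e. components expected to have sd about 1
```

Evaluating `gamma_pdf` at 0.25 and 4 confirms that neither precision sits far out in the tail of this prior.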
What this prior will do is avoid components with very small standard deviations, so it will tend to avoid fitting a component to a single fish.<\/p>\n<p>Putting this together we have<\/p>\n<pre style=\"padding-left: 30px\"><span style=\"color: #0000ff\"><strong>matrix R = (3,3,3,3,3,3)<\/strong><\/span><\/pre>\n<pre style=\"padding-left: 30px\"><span style=\"color: #0000ff\"><strong>matrix df = (3,3,3,3,3,3)<\/strong><\/span><\/pre>\n<pre style=\"padding-left: 30px\"><span style=\"color: #0000ff\"><strong>matrix M = (6,6,6,6,6,6)<\/strong><\/span><\/pre>\n<pre style=\"padding-left: 30px\"><span style=\"color: #0000ff\"><strong>matrix B = (0.1,0.1,0.1,0.1,0.1,0.1)<\/strong><\/span><\/pre>\n<pre style=\"padding-left: 30px\"><span style=\"color: #0000ff\"><strong>matrix ALPHA = (20,20,10,5,1,1)<\/strong><\/span><\/pre>\n<p>The initial values are not critical provided that we run a long enough burnin. I chose:<\/p>\n<pre style=\"padding-left: 30px\"><span style=\"color: #0000ff\"><strong>matrix T = (1,1,1,1,1,1)<\/strong><\/span><\/pre>\n<pre style=\"padding-left: 30px\"><span style=\"color: #0000ff\"><strong>matrix C = J(256,1,1)<\/strong><\/span><\/pre>\n<pre style=\"padding-left: 30px\"><span style=\"color: #0000ff\"><strong>forvalues i=1\/256 {<\/strong><\/span><\/pre>\n<pre style=\"padding-left: 30px\"><span style=\"color: #0000ff\"><strong>\u00a0\u00a0 matrix C[`i',1] = 1 + int(6*runiform())<\/strong><\/span><\/pre>\n<pre style=\"padding-left: 30px\"><span style=\"color: #0000ff\"><strong>}<\/strong><\/span><\/pre>\n<pre style=\"padding-left: 30px\"><span style=\"color: #0000ff\"><strong>matrix P = J(1,6,1\/6)<\/strong><\/span><\/pre>\n<pre style=\"padding-left: 30px\"><span style=\"color: #0000ff\"><strong>matrix MU = M<\/strong><\/span><\/pre>\n<p>&nbsp;<\/p>\n<p>Then I ran a burnin of 2500 before plotting five fitted curves 1000 iterations apart<\/p>\n<pre style=\"padding-left: 30px\"><span style=\"color: #0000ff\"><strong>mixmnormal fish , mu(MU) t(T) p(P) 
n(6) m(M) b(B) r(R) \/\/\/<\/strong><\/span><\/pre>\n<pre style=\"padding-left: 30px\"><span style=\"color: #0000ff\"><strong>\u00a0\u00a0 df(df) c(C) alpha(ALPHA) \/\/\/<\/strong><\/span><\/pre>\n<pre style=\"padding-left: 30px\"><span style=\"color: #0000ff\"><strong>\u00a0\u00a0 <\/strong><strong>burnin(2500) updates(5000) filename(temp.csv)<\/strong><\/span><\/pre>\n<pre style=\"padding-left: 30px\"><span style=\"color: #0000ff\"><strong>insheet using temp.csv, clear<\/strong><\/span><\/pre>\n<pre style=\"padding-left: 30px\"><span style=\"color: #0000ff\"><strong>set obs 1001<\/strong><\/span><\/pre>\n<pre style=\"padding-left: 30px\"><span style=\"color: #0000ff\"><strong>range y 0 15 1001<\/strong><\/span><\/pre>\n<pre style=\"padding-left: 30px\"><span style=\"color: #0000ff\"><strong>foreach iter of numlist 1000 2000 3000 4000 5000 {<\/strong><\/span><\/pre>\n<pre style=\"padding-left: 30px\"><span style=\"color: #0000ff\"><strong>\u00a0\u00a0 gen f`iter' = 0<\/strong><\/span><\/pre>\n<pre style=\"padding-left: 30px\"><span style=\"color: #0000ff\"><strong>\u00a0\u00a0 forvalues s=1\/6 {<\/strong><\/span><\/pre>\n<pre style=\"padding-left: 30px\"><span style=\"color: #0000ff\"><strong>\u00a0\u00a0 qui replace f`iter' = f`iter' + p`s'[`iter']* \/\/\/ <\/strong><\/span><\/pre>\n<pre style=\"padding-left: 30px\"><span style=\"color: #0000ff\"><strong>\u00a0\u00a0\u00a0\u00a0 normalden(y,mu`s'_1[`iter'],1\/sqrt(t`s'_1_1[`iter']))<\/strong><\/span><\/pre>\n<pre style=\"padding-left: 30px\"><span style=\"color: #0000ff\"><strong>\u00a0\u00a0 <\/strong><strong>}<\/strong><\/span><\/pre>\n<pre style=\"padding-left: 30px\"><span style=\"color: #0000ff\"><strong>}<\/strong><\/span><\/pre>\n<pre style=\"padding-left: 30px\"><span style=\"color: #0000ff\"><strong>merge 1:1 _n using fish.dta<\/strong><\/span><\/pre>\n<pre style=\"padding-left: 30px\"><span style=\"color: #0000ff\"><strong>histogram fish , start(2) width(0.25) xtitle(Fish length) 
\/\/\/<\/strong><\/span><\/pre>\n<pre style=\"padding-left: 30px\"><span style=\"color: #0000ff\"><strong>\u00a0\u00a0 addplot((line f1000 y, lpat(solid) lcol(blue)) \/\/\/<\/strong><\/span><\/pre>\n<pre style=\"padding-left: 30px\"><span style=\"color: #0000ff\"><strong>\u00a0\u00a0 (line f2000 y, lpat(solid) lcol(red)) \/\/\/<\/strong><\/span><\/pre>\n<pre style=\"padding-left: 30px\"><span style=\"color: #0000ff\"><strong>\u00a0\u00a0 (line f3000 y, lpat(solid) lcol(green)) \/\/\/<\/strong><\/span><\/pre>\n<pre style=\"padding-left: 30px\"><span style=\"color: #0000ff\"><strong>\u00a0\u00a0 (line f4000 y, lpat(solid) lcol(black)) \/\/\/<\/strong><\/span><\/pre>\n<pre style=\"padding-left: 30px\"><span style=\"color: #0000ff\"><strong>\u00a0\u00a0 (line f5000 y, lpat(solid) lcol(orange)) ) leg(off)<\/strong><\/span><\/pre>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-large wp-image-125\" src=\"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/files\/2014\/05\/fishfit2-1024x752.png\" alt=\"fishfit2\" width=\"620\" height=\"455\" srcset=\"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/files\/2014\/05\/fishfit2-1024x752.png 1024w, https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/files\/2014\/05\/fishfit2-300x220.png 300w\" sizes=\"auto, (max-width: 620px) 100vw, 620px\" \/><\/p>\n<p>The\u00a0coloured lines show the uncertainty in the model fit and are taken from 5 different points in the chain. 
Presumably these components correspond to fish of different ages that have grown to different sizes.<\/p>\n<p>The table of\u00a0mean parameter estimates\u00a0is:<\/p>\n<table>\n<tbody>\n<tr>\n<td style=\"text-align: center\" width=\"92\"><strong>Component<\/strong><\/td>\n<td style=\"text-align: center\" width=\"123\"><strong>Percentage<\/strong><\/td>\n<td style=\"text-align: center\" width=\"113\"><strong>Mean<\/strong><\/td>\n<td style=\"text-align: center\" width=\"104\"><strong>sd<\/strong><\/td>\n<\/tr>\n<tr>\n<td style=\"text-align: center\" width=\"92\"><strong>1<\/strong><\/td>\n<td style=\"text-align: center\" width=\"123\"><strong>10%<\/strong><\/td>\n<td style=\"text-align: center\" width=\"113\"><strong>3.4<\/strong><\/td>\n<td style=\"text-align: center\" width=\"104\"><strong>0.50<\/strong><\/td>\n<\/tr>\n<tr>\n<td style=\"text-align: center\" width=\"92\"><strong>2<\/strong><\/td>\n<td style=\"text-align: center\" width=\"123\"><strong>32%<\/strong><\/td>\n<td style=\"text-align: center\" width=\"113\"><strong>5.0<\/strong><\/td>\n<td style=\"text-align: center\" width=\"104\"><strong>0.36<\/strong><\/td>\n<\/tr>\n<tr>\n<td style=\"text-align: center\" width=\"92\"><strong>3<\/strong><\/td>\n<td style=\"text-align: center\" width=\"123\"><strong>24%<\/strong><\/td>\n<td style=\"text-align: center\" width=\"113\"><strong>5.9<\/strong><\/td>\n<td style=\"text-align: center\" width=\"104\"><strong>0.82<\/strong><\/td>\n<\/tr>\n<tr>\n<td style=\"text-align: center\" width=\"92\"><strong>4<\/strong><\/td>\n<td style=\"text-align: center\" width=\"123\"><strong>19%<\/strong><\/td>\n<td style=\"text-align: center\" width=\"113\"><strong>7.4<\/strong><\/td>\n<td style=\"text-align: center\" width=\"104\"><strong>0.48<\/strong><\/td>\n<\/tr>\n<tr>\n<td style=\"text-align: center\" width=\"92\"><strong>5<\/strong><\/td>\n<td style=\"text-align: center\" width=\"123\"><strong>10%<\/strong><\/td>\n<td style=\"text-align: center\" 
width=\"113\"><strong>9.5<\/strong><\/td>\n<td style=\"text-align: center\" width=\"104\"><strong>0.82<\/strong><\/td>\n<\/tr>\n<tr>\n<td style=\"text-align: center\" width=\"92\"><strong>6<\/strong><\/td>\n<td style=\"text-align: center\" width=\"123\"><strong>3%<\/strong><\/td>\n<td style=\"text-align: center\" width=\"113\"><strong>10.8<\/strong><\/td>\n<td style=\"text-align: center\" width=\"104\"><strong>0.94<\/strong><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>Notice that now both components 5 and 6 have relatively\u00a0large standard deviations and that these standard deviations are big enough that the bumps\u00a0centred on\u00a0the corresponding\u00a0means will tend to overlap. Keeping this in mind let us look at the trace plot of the 5000 simulations.<\/p>\n<p><a href=\"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/files\/2014\/05\/fishtrace.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-large wp-image-126\" src=\"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/files\/2014\/05\/fishtrace-1024x752.png\" alt=\"fishtrace\" width=\"620\" height=\"455\" srcset=\"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/files\/2014\/05\/fishtrace-1024x752.png 1024w, https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/files\/2014\/05\/fishtrace-300x220.png 300w\" sizes=\"auto, (max-width: 620px) 100vw, 620px\" \/><\/a><\/p>\n<p>The solutions for the first 4 components are quite stable but because the last two overlap slightly the algorithm cannot decide whether to call the component with mean 9, number 5, and the component with mean 11, number 6, or to label them the other way around. In fact it jumps between the two solutions at about simulation 2500 and then jumps back again about 1,000 simulations later. It even looks as if another brief switch took place at around iteration 1,000 that also affected component 4. 
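One common post-hoc remedy for this, sketched below in Python rather than the post's Mata code, is to relabel each posterior draw by sorting the components on their means (an imperfect fix, since it can distort inference when components genuinely overlap, but it illustrates the idea). The draws below are invented to resemble the swap between components 5 and 6:

```python
def relabel_by_mean(mus, ps):
    """Relabel each draw by sorting components on their means; weights follow the same order."""
    out_mu, out_p = [], []
    for mu, p in zip(mus, ps):
        order = sorted(range(len(mu)), key=lambda j: mu[j])
        out_mu.append([mu[j] for j in order])
        out_p.append([p[j] for j in order])
    return out_mu, out_p

# Two hypothetical draws in which the last two components have swapped labels:
mus = [[3.4, 5.0, 9.5, 10.8], [3.4, 5.0, 10.8, 9.5]]
ps = [[0.1, 0.3, 0.4, 0.2], [0.1, 0.3, 0.2, 0.4]]
mu_fixed, p_fixed = relabel_by_mean(mus, ps)
```

After relabelling, averaging a given column of `mu_fixed` refers to the same component in every draw, which is exactly what the raw mu5 column fails to do.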
Because of this label switching it is not sensible to take the average of mu5 and call it the mean of a component;\u00a0mu5 actually represents different component means at different times.<\/p>\n<p>I think that I will return to the problem of label switching in my next posting. Meanwhile if you would like to try this analysis for yourself, the code that I used can be downloaded from <a href=\"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/files\/2014\/05\/fishPrograms.pdf\">fishPrograms<\/a>.<\/p>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In my last posting I started a library of Mata functions for use in Bayesian analysis, and this week I will add a function that fits mixtures of normal distributions using a Bayesian Gibbs sampling algorithm. The normal distribution is the\u00a0underlying assumption\u00a0for\u00a0many statistical models, and data are often transformed to make their distribution look normal\u00a0so that [&hellip;]<\/p>\n","protected":false},"author":134,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[17,16],"class_list":["post-120","post","type-post","status-publish","format-standard","hentry","category-uncategorized","tag-bayesian-mixture-model","tag-mata"],"_links":{"self":[{"href":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/wp-json\/wp\/v2\/posts\/120","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/wp-json\/wp\/v2\/users\/134"}],"replies":[{"embeddable":true,"href":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/wp-json\/wp\/v2\/comments?post=120"}],"version-history":[{"count":7,"href":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/wp-json\/wp\/v2\/posts\/
120\/revisions"}],"predecessor-version":[{"id":140,"href":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/wp-json\/wp\/v2\/posts\/120\/revisions\/140"}],"wp:attachment":[{"href":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/wp-json\/wp\/v2\/media?parent=120"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/wp-json\/wp\/v2\/categories?post=120"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/wp-json\/wp\/v2\/tags?post=120"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}