{"id":767,"date":"2015-08-14T08:03:49","date_gmt":"2015-08-14T08:03:49","guid":{"rendered":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/?p=767"},"modified":"2025-02-26T13:21:37","modified_gmt":"2025-02-26T13:21:37","slug":"bayesian-experimental-design-part-iii","status":"publish","type":"post","link":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/2015\/08\/14\/bayesian-experimental-design-part-iii\/","title":{"rendered":"Bayesian Experimental Design Part III"},"content":{"rendered":"<p>This week I am going to complete the discussion of\u00a0Bayesian sample size calculation for a simple clinical trial. Here is the problem,<\/p>\n<p style=\"padding-left: 30px\">a trial\u00a0is to compare\u00a0a corticosteroid cream with a placebo for patients with eczema on their hand. The measurement of response will be the patient\u2019s rating of the severity of their eczema on a 0-10 visual analogue scale (VAS). Patients will only be randomized if their baseline VAS is over 7 and success will be defined as a VAS below 3 after one week. How many patients do we need in the trial?<\/p>\n<p>We suppose that the researcher believes that 50% of patients on the placebo will see enough improvement to be categorised as successful and\u00a0the proportion of successes for the new drug\u00a0will be around\u00a070%\u00a0but any\u00a0level of success\u00a0above 60%\u00a0would be of clinical importance.<\/p>\n<p><a href=\"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/2015\/07\/31\/bayesian-experimental-design\/\">Two weeks ago<\/a>\u00a0we considered the\u00a0sample size calculation based on\u00a0the traditional power-based approach and I\u00a0pointed out some of its limitations. 
<a href=\"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/2015\/08\/07\/bayesian-experimental-design-part-ii\/\">Then in part II<\/a>\u00a0we\u00a0considered the partial Bayesian solution based on the properties of the posterior of\u00a0a measure of benefit.\u00a0As part of that analysis,\u00a0we\u00a0elicited the researcher&#8217;s priors\u00a0for p0, the probability of a successful outcome\u00a0on placebo and the odds ratio \u03b8=[p1\/(1-p1)]\/[p0\/(1-p0)]. 70% success vs 50% translates into an odds ratio of 2.3 and 60% vs 50% is equivalent to an odds ratio of 1.5.<\/p>\n<p>Now I want to create a full Bayesian solution incorporating utility and\u00a0I want to\u00a0implement it in Stata.<\/p>\n<p><strong>Full Bayesian analysis<\/strong><\/p>\n<p>The next discussion that we need to have with the researcher revolves around their reason for conducting the trial and how they will use the results. We need to establish the amount of information that the researcher really needs.<\/p>\n<p>The chances are that the researcher\u00a0wants to know whether or not the cream should be prescribed to future patients. Let\u2019s suppose that they\u00a0say that if the odds ratio, \u03b8, really is over 1.5 then it will be worth recommending the treatment, while if \u03b8&lt;1.5, the cost of the cream will not justify its use. Of course, whatever the design of the experiment,\u00a0the researcher will never know the true value of \u03b8 but rather they will have a posterior distribution for \u03b8.\u00a0So let&#8217;s\u00a0suppose that the researcher says that they will recommend the cream if the posterior probability P(\u03b8&gt;1.5) exceeds 0.9.<\/p>\n<p>This decision threshold acknowledges the possibility of making a mistake; either recommending the cream when really it has no benefit,\u00a0or failing to recommend it when it would have been useful. 
These errors imply costs that can be contrasted with the benefits of reaching the right decision.<\/p>\n<p>So we need to elicit the researcher\u2019s costs and benefits, sometimes called their utilities. Let\u2019s define the scale by saying that recommending the cream when it is really beneficial gives a utility of 1.0 and failing to recommend the cream when it is really beneficial gives a utility of zero.<\/p>\n<table style=\"height: 283px\" border=\"1\" width=\"547\">\n<tbody>\n<tr>\n<td width=\"205\"><\/td>\n<td width=\"205\">\n<div>Recommend<\/div>\n<div>P(\u03b8&gt;1.5) exceeds 0.9<\/div>\n<\/td>\n<td width=\"205\">\n<div>Fail to recommend<\/div>\n<div>P(\u03b8&gt;1.5) under 0.9<\/div>\n<\/td>\n<\/tr>\n<tr>\n<td width=\"205\">\n<div>Beneficial<\/div>\n<div>True \u03b8&gt;1.5<\/div>\n<\/td>\n<td width=\"205\">Utility = 1.0<\/td>\n<td width=\"205\">Utility = 0.0<\/td>\n<\/tr>\n<tr>\n<td width=\"205\">\n<div>Not Beneficial<\/div>\n<div>True \u03b8&lt;1.5<\/div>\n<\/td>\n<td width=\"205\"><\/td>\n<td width=\"205\"><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>The researcher must use this scale to decide on the utility that they want to allocate when the cream is not truly beneficial. 
I\u2019ll suppose that not recommending the cream when it is of no benefit is judged just as important as recommending it when it is beneficial, but that recommending it when it has no benefit is considered a serious error with a utility of -5.0.<\/p>\n<table style=\"height: 283px\" border=\"1\" width=\"547\">\n<tbody>\n<tr>\n<td width=\"205\"><\/td>\n<td width=\"205\">\n<div>Recommend<\/div>\n<div>P(\u03b8&gt;1.5) exceeds 0.9<\/div>\n<\/td>\n<td width=\"205\">\n<div>Fail to recommend<\/div>\n<div>P(\u03b8&gt;1.5) under 0.9<\/div>\n<\/td>\n<\/tr>\n<tr>\n<td width=\"205\">\n<div>Beneficial<\/div>\n<div>True \u03b8&gt;1.5<\/div>\n<\/td>\n<td width=\"205\">Utility = 1.0<\/td>\n<td width=\"205\">Utility = 0.0<\/td>\n<\/tr>\n<tr>\n<td width=\"205\">\n<div>Not Beneficial<\/div>\n<div>True \u03b8&lt;1.5<\/div>\n<\/td>\n<td width=\"205\">Utility = -5.0<\/td>\n<td width=\"205\">Utility = 1.0<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>Now this very simple utility scheme might not be sufficient. Perhaps the researcher would consider that recommending the cream when the true odds ratio is between 1.0 and 1.5 is not such a bad error, but recommending it when the odds ratio is below 1.0 has a utility below even -5.0. 
We could expand the table into more categories, or even allow the utility to vary continuously with the true value of \u03b8.<\/p>\n<p>In part II we elicited the researcher&#8217;s prior for \u03b8: it was gamma(20,0.115).<\/p>\n<p><a href=\"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/files\/2015\/08\/priortheta-e1438698768546.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-771\" src=\"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/files\/2015\/08\/priortheta-e1438698768546.png\" alt=\"priortheta\" width=\"600\" height=\"439\" \/><\/a><\/p>\n<p>Before the trial there are no data, so this is effectively also the researcher&#8217;s posterior, and we can calculate their probability that the odds ratio, \u03b8, is over 1.5:<\/p>\n<p><strong><span style=\"color: #0000ff\">. di 1 - gammap(20,1.5\/0.115)<\/span><\/strong><\/p>\n<p>This gives 0.956 or 95.6%. So, prior to any trial, the researcher is more than 95% sure that the cream is beneficial and, by their own decision rule, they would recommend the cream. In that case their expected utility is -5*0.044 + 1.0*0.956 = 0.736.<\/p>\n<p>If we collect data from a trial, we would hope that if \u03b8 is truly less than 1.5 the posterior will move to the left and we will not recommend the cream, while if \u03b8 is greater than 1.5 the posterior will move to the right and narrow, so that we will be even more likely to recommend the cream. Either way, the expected utility should increase.<\/p>\n<p>Indeed, we could continue to recruit more and more subjects and eventually the posterior would just become a sharp peak above the true value of \u03b8. At that point we would never make a mistake and the expected utility would become 1.0. 
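The prior tail probability and expected utility just computed can be cross-checked outside Stata. A minimal sketch in Python (the `gamma_sf` helper is my own; because the shape parameter 20 is an integer, the Poisson-sum identity applies, and the function mirrors Stata's `1 - gammap()`):

```python
import math

def gamma_sf(x, shape, scale):
    # P(X > x) for Gamma(shape, scale) with integer shape (Erlang),
    # via the identity P(Gamma(k) > t) = P(Poisson(t) <= k-1)
    lam = x / scale
    return sum(math.exp(-lam) * lam**j / math.factorial(j) for j in range(shape))

p = gamma_sf(1.5, 20, 0.115)   # prior P(theta > 1.5), approximately 0.956
eu = -5 * (1 - p) + 1.0 * p    # expected utility of recommending without a trial
print(round(p, 3), round(eu, 2))
```

This reproduces the 0.956 tail probability and the expected utility of roughly 0.74 quoted above (the text's 0.736 comes from the rounded probabilities).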
The balancing factor that stops us from an ever-increasing sample size is the cost of adding extra subjects to the trial.<\/p>\n<p>It is not easy to place the cost of recruiting an extra subject on the same scale as the utility of different decisions, but essentially that is what we do implicitly whenever we design a study. We ask ourselves, how do I balance the cost of the trial against the benefit of reaching the right conclusion? In most trials, however, this balance is never quantified.<\/p>\n<p>Often it helps to put our utilities into monetary terms. Perhaps we could consider the implications of the recommendation that we would make as a result of this trial and decide that a correct recommendation is worth \u00a3100,000. So that, in money, our utilities become<\/p>\n<table style=\"height: 283px\" border=\"1\" width=\"547\">\n<tbody>\n<tr>\n<td width=\"205\"><\/td>\n<td width=\"205\">\n<div>Recommend<\/div>\n<div>P(\u03b8&gt;1.5) exceeds 0.9<\/div>\n<\/td>\n<td width=\"205\">\n<div>Fail to recommend<\/div>\n<div>P(\u03b8&gt;1.5) under 0.9<\/div>\n<\/td>\n<\/tr>\n<tr>\n<td width=\"205\">\n<div>Beneficial<\/div>\n<div>True \u03b8&gt;1.5<\/div>\n<\/td>\n<td width=\"205\">Utility = \u00a3100,000<\/td>\n<td width=\"205\">Utility = \u00a30<\/td>\n<\/tr>\n<tr>\n<td width=\"205\">\n<div>Not Beneficial<\/div>\n<div>True \u03b8&lt;1.5<\/div>\n<\/td>\n<td width=\"205\">Utility = -\u00a3500,000<\/td>\n<td width=\"205\">Utility = \u00a3100,000<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>In money, under the researcher&#8217;s prior the expected gain from recommending the cream is \u00a3100,000*0.956 - \u00a3500,000*0.044 = \u00a373,600 and the expected gain from not recommending the cream would be \u00a3100,000*0.044 + \u00a30*0.956 = \u00a34,400. 
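Those two expected gains are just probability-weighted averages of the table entries; a quick arithmetic check (Python here purely for illustration, using the rounded prior probability 0.956):

```python
p = 0.956  # researcher's prior P(theta > 1.5), rounded as in the text

gain_recommend = 100000 * p - 500000 * (1 - p)  # recommend the cream now
gain_withhold = 100000 * (1 - p) + 0 * p        # fail to recommend
print(round(gain_recommend), round(gain_withhold))  # 73600 4400
```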
So before any trial it is very clear to the researcher that they should recommend this cream.<\/p>\n<p>Now let\u2019s suppose that it costs \u00a350 to recruit a patient.<\/p>\n<p>An ideal trial that led to perfect recommendations would offer a benefit of \u00a3100,000. This is a gain of \u00a326,400 over recommending the cream without running a trial. So we could recruit up to 26,400\/50 = 528 patients into the trial, and if the resulting trial improved our recommendations sufficiently then we would gain over not running a trial at all. However, given the researcher&#8217;s prior beliefs, it would never be sensible to recruit more than 528 patients. The hope is that somewhere between 0 and 528 patients we will find the best design.<\/p>\n<p>Before embarking on a search for the best sample size, let\u2019s try an arbitrary 75 patients in each arm of the study.<\/p>\n<p><span style=\"color: #0000ff\">program logpost<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0 args lnf b<\/span><\/p>\n<p><span style=\"color: #0000ff\">\u00a0\u00a0\u00a0 local p0 = `b'[1,1]<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0 local or = `b'[1,2]<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0 local p1 = `or'*`p0'\/(1-`p0'+`or'*`p0')<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0 scalar `lnf' = y0*log(`p0')+(n-y0)*log(1-`p0')+ y1*log(`p1')+(n-y1)*log(1-`p1') \/\/\/<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0 \u00a0\u00a0\u00a0 \u00a0 + 29*log(`p0') + 29*log(1-`p0') + 19*log(`or') - `or'\/0.115<\/span><br \/>\n<span style=\"color: #0000ff\">end<\/span><\/p>\n<p><span style=\"color: #0000ff\">tempname pf<\/span><br \/>\n<span style=\"color: #0000ff\">postfile `pf' or pRec using utility.dta, 
replace<\/span><br \/>\n<span style=\"color: #0000ff\">scalar n = 75<\/span><br \/>\n<span style=\"color: #0000ff\">forvalues isim = 1\/500 {<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0 local p0 = rbeta(30,30)<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0 local or = rgamma(20,0.115)<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0 local z = `or'*`p0'\/(1-`p0')<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0 local p1 = `z'\/(1+`z')<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0 scalar y0 = rbinomial(n,`p0')<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0 scalar y1 = rbinomial(n,`p1')<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0 matrix b = (0.5, 2.3)<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0 mcmcrun logpost b using temp.csv, replace \/\/\/<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0 \u00a0\u00a0\u00a0 \u00a0 samplers( (mhstrnc , sd(0.25) lb(0) ub(1) ) (mhslogn , sd(0.25) ) ) \/\/\/<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0 \u00a0\u00a0\u00a0 \u00a0 param(p0 theta) burn(500) update(2000)<\/span><\/p>\n<p><span style=\"color: #0000ff\">\u00a0\u00a0\u00a0 insheet using temp.csv, clear<\/span><\/p>\n<p><span style=\"color: #0000ff\">\u00a0\u00a0\u00a0 qui count if theta &gt;= 1.5<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0 local pRec = r(N)\/2000<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0 post `pf' (`or') (`pRec')<\/span><br \/>\n<span style=\"color: #0000ff\">}<\/span><br \/>\n<span style=\"color: #0000ff\">postclose `pf'<\/span><br \/>\n<span style=\"color: #0000ff\">use utility.dta, clear<\/span><br \/>\n<span style=\"color: #0000ff\">gen utility = 0<\/span><br \/>\n<span style=\"color: #0000ff\">replace utility = 100000 if or &gt;= 1.5 
&amp; pRec &gt;= 0.9<\/span><br \/>\n<span style=\"color: #0000ff\">replace utility = 100000 if or &lt; 1.5 &amp; pRec &lt; 0.9<\/span><br \/>\n<span style=\"color: #0000ff\">replace utility = -500000 if or &lt; 1.5 &amp; pRec &gt;= 0.9<\/span><br \/>\n<span style=\"color: #0000ff\">replace utility = utility - 2*n*50<\/span><br \/>\n<span style=\"color: #0000ff\">ci utility<\/span><\/p>\n<p>In this program we simulate 500 datasets under the researcher&#8217;s prior, analyse each one, and calculate the posterior probability that the odds ratio is over 1.5. I call this probability pRec since it is the probability on which we base the decision about whether to recommend the cream. Given the true value of the odds ratio, pRec and the sample size n, we can calculate the utility of each simulated trial; the average over the simulations estimates the expected utility.<\/p>\n<p>In this case the results were,<\/p>\n<pre>Variable     |\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Obs\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Mean\u00a0\u00a0 Std. Err.\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 [95% Conf. 
Interval]\r\n-------------+---------------------------------------------------------------\r\nutility      |\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 500\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 69300\u00a0\u00a0 3949.014\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 61541.26\u00a0\u00a0 77058.74<\/pre>\n<p>Since not running the trial at all gave a return of \u00a373,600, this appears to be worse than just using the cream and skipping the experiment, although the confidence interval does not rule out a small gain, so perhaps we need to run more than 500 simulations when calculating this expectation.<\/p>\n<p>Of course, if the trial were cheaper, say \u00a325 per patient, then the total expected utility from a trial with 75 in each arm would be much closer to the return from not running an experiment, and then we would certainly want to run more than 500 simulations in order to be able to assess whether the trial would be beneficial.<\/p>\n<p>Having established the method for a trial of n=75 in each arm, we could repeat the calculation for a range of sample sizes and choose the best.<\/p>\n<p>In this example I have supposed that the researcher was very sure beforehand that the cream would show a clinically important benefit, so it is not surprising that an experiment adds very little. The researcher might, however, argue: although I am convinced that the cream is beneficial, I need to run the trial in order to persuade others who are more sceptical. Here we have an argument for using a different prior in the analysis from the one used to generate the data.<\/p>\n<p>Suppose that a sceptical colleague has a prior for the odds ratio that is gamma(10,0.1).<\/p>\n<p><span style=\"color: #0000ff\">. 
twoway function y=gammaden(10,0.1,0,x) , range(0 3) ytitle(density) xtitle(Odds Ratio)<\/span><\/p>\n<p><a href=\"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/files\/2015\/08\/scepticalprior-e1438958333248.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-792\" src=\"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/files\/2015\/08\/scepticalprior-e1438958333248.png\" alt=\"scepticalprior\" width=\"600\" height=\"439\" \/><\/a><\/p>\n<p>This person believes that the cream might even be worse than the placebo, and they think that the chance of a clinically important benefit (OR&gt;1.5) is small, so they would not recommend the cream without further evidence.<\/p>\n<p>So, if the researcher is right and the cream is actually highly beneficial, the sceptic starts with an expected utility of 0.956*\u00a30 + 0.044*\u00a3100,000 = \u00a34,400, and a trial is much more likely to increase their utility.<\/p>\n<p>To find how much their utility will increase, we merely re-run the program for the 500 simulations using the researcher&#8217;s prior for the data generation and the sceptical prior for the analysis. 
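To put numbers on that scepticism, the tail probabilities of a gamma(10,0.1) prior can be computed directly. A small Python check (my own integer-shape survival function, mirroring Stata's `1 - gammap()`):

```python
import math

def gamma_sf(x, shape, scale):
    # P(X > x) for Gamma(shape, scale) with integer shape,
    # via P(Gamma(k) > t) = P(Poisson(t) <= k-1)
    lam = x / scale
    return sum(math.exp(-lam) * lam**j / math.factorial(j) for j in range(shape))

p_benefit = gamma_sf(1.5, 10, 0.1)   # sceptic's P(theta > 1.5): about 0.07
p_harm = 1 - gamma_sf(1.0, 10, 0.1)  # sceptic's P(theta < 1): about 0.54
print(round(p_benefit, 2), round(p_harm, 2))
```

So the sceptic gives only about a 7% chance to a clinically important benefit and thinks it slightly more likely than not that the cream is worse than the placebo.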
So the program for the log-posterior becomes,<\/p>\n<p><span style=\"color: #0000ff\">program logpost<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0 args lnf b<\/span><\/p>\n<p><span style=\"color: #0000ff\">\u00a0\u00a0\u00a0 local p0 = `b'[1,1]<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0 local or = `b'[1,2]<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0 local p1 = `or'*`p0'\/(1-`p0'+`or'*`p0')<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0 scalar `lnf' = y0*log(`p0')+(n-y0)*log(1-`p0')+ y1*log(`p1')+(n-y1)*log(1-`p1') \/\/\/<\/span><br \/>\n<span style=\"color: #0000ff\">\u00a0\u00a0\u00a0 \u00a0\u00a0\u00a0 \u00a0 + 29*log(`p0') + 29*log(1-`p0') + 9*log(`or') - `or'\/0.1 \/\/ gamma(10,0.1) kernel: shape-1 = 9<\/span><br \/>\n<span style=\"color: #0000ff\">end<\/span><\/p>\n<p>Here are the new results for n=75 in each arm of the trial.<\/p>\n<pre>Variable     |\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Obs\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Mean\u00a0\u00a0 Std. Err.\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 [95% Conf. Interval]\r\n-------------+---------------------------------------------------------------\r\nutility      |\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 500\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 9800\u00a0\u00a0 1330.963\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 7185.018\u00a0\u00a0 12414.98<\/pre>\n<p>The expected utility has jumped from \u00a34,400 to \u00a39,800, so this trial would be worth doing, but perhaps a larger trial would be even more beneficial. Trial and error is probably sufficient to find the best design, but I tried something a little more methodical by running 1,000 simulations for each sample size in the range n=200(50)700. 
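At every sample size in that grid the bookkeeping is the same: map each simulated trial's true odds ratio, its pRec, and n to a monetary outcome, then average over the simulations. A compact restatement of the utility rules (Python for illustration; the Stata code above does this with the `replace` statements):

```python
def realised_gain(or_true, p_rec, n, cost_per_patient=50):
    # Monetary outcome of one simulated trial with n patients per arm,
    # using the utility table and the 0.9 decision threshold
    recommend = p_rec >= 0.9
    beneficial = or_true >= 1.5
    if recommend and not beneficial:
        gain = -500000          # serious error: recommend a useless cream
    elif not recommend and beneficial:
        gain = 0                # miss a beneficial cream
    else:
        gain = 100000           # correct decision either way
    return gain - 2 * n * cost_per_patient  # recruitment cost, both arms

print(realised_gain(2.0, 0.95, 300))   # correct recommendation: 70000
print(realised_gain(1.2, 0.95, 300))   # false recommendation: -530000
```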
Here is a plot of the results,<\/p>\n<p><a href=\"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/files\/2015\/08\/utilitysamplesize-e1439281775773.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-796\" src=\"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/files\/2015\/08\/utilitysamplesize-e1439281775773.png\" alt=\"utilitysamplesize\" width=\"600\" height=\"439\" \/><\/a><\/p>\n<p>The design that offers the best expected return is a trial with 300 patients in each arm.<\/p>\n<p>The program that created the last plot ran overnight; it was much more time-consuming than a simple call to sampsi. Yet the biggest overhead is probably going to come from the elicitation of the priors and utilities, especially if the researcher is not used to this approach.<\/p>\n<p>I hope that this example has demonstrated that the full Bayesian approach to design is much more realistic than a power-based analysis and that it takes account of important factors that are simply ignored in a traditional sample size calculation. As so often happens, a Bayesian analysis makes explicit all of the assumptions that go into an analysis, and as such the method can seem complex and even a little arbitrary. The truth is that those complicating factors do not go away when we run a non-Bayesian analysis; rather, they are buried deep in the researcher&#8217;s judgement calls, such as their choice of the significance level and power.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>This week I am going to complete the discussion of\u00a0Bayesian sample size calculation for a simple clinical trial. Here is the problem, a trial\u00a0is to compare\u00a0a corticosteroid cream with a placebo for patients with eczema on their hand. 
The measurement of response will be the patient\u2019s rating of the severity of their eczema on a [&hellip;]<\/p>\n","protected":false},"author":134,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[72,70,75],"class_list":["post-767","post","type-post","status-publish","format-standard","hentry","category-uncategorized","tag-experimental-design","tag-sample-size","tag-utility"],"_links":{"self":[{"href":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/wp-json\/wp\/v2\/posts\/767","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/wp-json\/wp\/v2\/users\/134"}],"replies":[{"embeddable":true,"href":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/wp-json\/wp\/v2\/comments?post=767"}],"version-history":[{"count":19,"href":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/wp-json\/wp\/v2\/posts\/767\/revisions"}],"predecessor-version":[{"id":819,"href":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/wp-json\/wp\/v2\/posts\/767\/revisions\/819"}],"wp:attachment":[{"href":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/wp-json\/wp\/v2\/media?parent=767"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/wp-json\/wp\/v2\/categories?post=767"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/staffblogs.le.ac.uk\/bayeswithstata\/wp-json\/wp\/v2\/tags?post=767"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}