This is a summary of the rough road to publication of the paper entitled ``A (giant) void is not mandatory to explain away dark energy with a Lemaitre--Tolman model''.

In the following, ``we'' means Marie-Noëlle Célérier, Krzysztof Bolejko and Andrzej Krasiński.

    This paper had an even more tumultuous history of publication than ``Imitating accelerated expansion of the Universe by matter inhomogeneities -- corrections of some misunderstandings''.

    It originated from my ignorance, and more precisely from the following mail I sent to Marie-Noëlle, Krzysztof and Charles Hellaby, on April 10, 2009 (The ``INN paper'' mentioned in the mail is H. Iguchi, T. Nakamura and K. Nakao, Progr. Theor. Phys. 108, 809 (2002)):

    Dear all,

    I have a question to which I am trying to find an answer in the literature, but the papers I looked into are not clear about it.

    The question is: how did the conclusion arise that we must be close to the centre of a giant void if we use the LT model to explain the type Ia supernovae? The argument I saw is this: when the dimming of supernovae is explained by the action of mass inhomogeneities, matter close to us must be speeding away from us faster than more distant matter, which implies a void. (By the way, there is no general agreement as to the size of the void. Krzysztof advocates 1.5 Gpc, but I have seen numbers as small as 400 Mpc, with one paper aiming at lowering this to 200 Mpc - Alexander, Biswas, Notari and Vaid 2008.)

    Perhaps this is so, but the INN paper with which we worked showed that only one free function of the LT model is sufficient to account for the type Ia supernovae. The other one was fixed by hand, with no relation to observational data. That is, they adapted the redshift-distance relation, $D_L(z)$, to observations, which meant adapting the velocity distribution to observational data. But they did not take care to adapt the density distribution to observational constraints -- which should mean that one can reproduce the observed $D_L(z)$ with any arbitrary density distribution. If this is true, then where does the void come from? Speeding away from us would have produced a void by today if the initial mass distribution were homogeneous, but we know this need not be so in the LT model.

    I am sorry if this is a silly question. I am going to dig in the literature anyway, but if one of you knows the answer already, then my task will become easier. A reference to the first paper that proposed the giant void would be most useful.


    To this, Marie-Noëlle replied as follows (a few segments excised from this quotation to make it shorter):

    Dear Andrzej,

    Your question is very accurate and raises a point which, in my opinion, is still widely misunderstood in the literature. [...excised...] If one aims at reproducing an apparent accelerated expansion, an easy way to do it is to consider a low-density inner homogeneous region, e. g. a spherically symmetric void, from where the observer (located near the center to account for the "isotropy" of the observations) looks at supernovae located outside the void in a less empty region of the Universe. Since the void expands faster than the outer region, an apparent acceleration is experienced by the observer located inside the void. Such a simple toy model was proposed by K. Tomita MNRAS, 326, 287 (2001) to give an intuitive understanding of how inhomogeneities can mimic an accelerated expansion. Unfortunately, it was too often taken literally by followers, even those who used different LT models. Even among those who tried to reproduce not acceleration but $D_L(z)$ many have been influenced by this void picture which was actually rather efficient at increasing the effect of inhomogeneities but which, in principle, is not mandatory. You are perfectly right when you stress that "only one free function of the LT model is sufficient [...excised...] one can reproduce the observed $D_L(z)$ with any arbitrary density distribution." Thus the huge void is not needed for an arbitrary LT model reproducing only the supernova data.

    [...excised...] contrary to the inverse method which Charles and his collaborators are trying to implement and which is the proper one, the direct method is much easier to work with. With this method, the parameters to be fitted are not the two functions of r defining the LT model, but a set of very (too) few arbitrary constants which are chosen at will by the authors. Hence the number of different models available in the literature which depend only on the "feeling" of the authors about the best way of taking into account such or such features of the Universe. It is the way these parameters are chosen which implies the appearance of a more or less large void in the solutions. However, your remark (and the widely advocated "huge void") suggested to me we could collaborate to write a paper, which might be a simple letter, about this misunderstanding. I propose to make a first draft I would submit to you. [...excised...]

    Moreover, I am somehow irritated when I see I am credited with this huge void stuff I never put forward. The only thing I did was to give an account of other people's works, but perhaps I was not sufficiently critical with respect to these [...excised...]


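    For readers who are not familiar with it, here is a brief, standard summary of the L--T model that all this correspondence is about (my own recap, not part of the exchange; units with $c = G = 1$, so $\kappa = 8\pi$, and $\Lambda = 0$). In comoving and synchronous coordinates the metric, the evolution equation and the density are

        $ds^2 = dt^2 - \frac{R_{,r}^2(t,r)}{1 + 2E(r)}\, dr^2 - R^2(t,r)\,(d\vartheta^2 + \sin^2\vartheta\, d\varphi^2)$,

        $R_{,t}^2 = 2E(r) + \frac{2M(r)}{R}$, $\qquad \kappa\rho = \frac{2M_{,r}}{R^2 R_{,r}}$.

    Integrating the evolution equation introduces the bang-time function $t_B(r)$; $t - t_B(r)$ is the proper time since the Big Bang along each dust world line. Of $E(r)$, $M(r)$ and $t_B(r)$, one can be absorbed by the rescaling $r \to f(r)$, so the model contains two physically independent arbitrary functions. In the language of the mail above, fitting $D_L(z)$ alone uses up only one of them; the other remains free, which is why the density profile is not forced into any particular shape.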
    The writing began immediately, and we first aimed at submitting the paper to Astronomy and Astrophysics, where Marie-Noëlle had published her most influential paper some years earlier. The publication might have gone smoothly, but Krzysztof, who was at that time in Cape Town with Charles, told someone about our idea. When his interlocutor heard that we could reproduce the SNIa dimming and the density distribution on our past light cone within one L--T model, he was impressed and suggested submitting the paper to Physical Review Letters instead. We liked the idea -- and this is how the annoying disappointment began.

    Since Phys. Rev. Lett. accepts only short papers, we threw away half of our initial text and we concentrated on showing that the L--T metric can model just one pair of observable distributions on the past light cone of the central observer: the redshift-space mass density and the angular diameter distance. With quite an effort, we managed to shorten the text to the required 4 print pages -- this remark is important because later some of the PRL referees cynically demanded that we give more details and conclusions. Initially, the paper had a fourth author, Charles Hellaby, but he was later dropped from the list for reasons not related to the publication affair.

    We submitted the paper to PRL on June 3, 2009; that old text is now available from http://arxiv.org/abs/0906.0905v1 .

    On June 25, 2009, we were notified by the editor that our paper was rejected on the basis of just one report of a referee, later called ``referee A''. The editor's letter also said ``Another referee we consulted was not able to review your manuscript.'' I am myself a member of the editorial board of another journal, General Relativity and Gravitation, and in such a situation we seek advice from yet another referee; we never let ourselves rely on just one report. Well, standards differ from journal to journal.

    The report is copied below. It is long and, I am sorry I cannot help saying this, highly ignorant and arrogant. Before copying our reply to it I will give below my short comments to point out the greatest bits of nonsense.

    Here is the report in extenso; the coloured bits are commented on further down:




    Here are my short comments. Colours alternate cyclically through red, blue and green.





    Of course, we could not send such blunt comments to PRL. A diplomatically formulated reply, sent to the editor on July 10, 2009, is reproduced below.

    Dear Editor,

    Thank you for the referee report. However, we are sorry to say that this report shows that the referee is not an expert on relativity. For example, he/she does not understand that an object in a spacelike relation to an observer is not visible to that observer and demands a ``rigorous justification'' of this statement. Elsewhere, he/she demands an ``intuitive explanation'' of a result of a solid exact calculation, even though such an explanation is contained in the paper. He/she says ``As far as I know, this has never been shown'' about the main result of our paper, as if he/she had not understood what our paper was about.

    Therefore we ask you to appoint a second referee and make sure that he/she is a thorough expert in relativity, not just in astrophysical cosmology.

    Below, in the letter to the referee, we respond to the points he/she has made. As you can see, we believe our wording may have been unclear in one or two places, but otherwise we offer solid replies, and we feel the rejection of our article is not justified.


    Our reply to the referee that accompanied the letter shown above is long and detailed, so I made it accessible only via this link.

    On August 13, 2009, the editor of PRL forwarded to us the reports of two more referees (``B'' and ``C''), of which referee B recommended publication with some minor changes, but referee C, while giving evidence of being competent, found our article not sufficiently interesting or important to deserve publication in PRL. Nevertheless, the editor generously decided to make an exception to the general rules for us, and allow us to modify the paper to satisfy the new referees. Copied below are the editor's letter and both referee reports:





    To these reports we replied on August 24, 2009. (Our letter shown below states that we changed the title of the paper. The new title was ``A central underdensity is not mandatory to explain away dark energy with a Lemaitre--Tolman cosmological model''.)

    Dear Editor,

    Thank you for allowing us to resubmit the paper, and thus relaxing a rule of the usual policy of Physical Review Letters.

    We have tried to follow the referees' requests as much as possible. However, some of the requests, especially those of referee C, would force us to go much beyond the PRL length limit of 4 pages. Others would necessitate several time-consuming new rounds of research. We completely agree that the proposed research is interesting and should be done; indeed, we know that at least some of it is underway. But we feel it is unreasonable to demand all of that from a single paper, particularly when it is not the main point being made. Solving those same problems in the FLRW paradigm took large numbers of researchers writing many papers.

    Asking that a paper on the large inhomogeneity model should solve all CMB, BAO, etc problems, is, we suggest, a bit like asking that, before a paper using the Concordance model be published, the underlying particle physics be sorted out, or the vacuum energy problem be solved, or the dark energy equation of state and its z-dependence be determined, etc.

    Regarding the appropriateness of this paper for PRL, we point out the following. All agree the dimming of the supernovae is highly topical, and its implications for cosmology and particle physics are of very broad interest. It is widely acknowledged that the popular explanation --- that the cosmic expansion is accelerating --- requires a matter phenomenology that has no natural basis in current particle physics. Under these circumstances, many alternative hypotheses have been put forward. Until a well-founded theoretical description emerges, it is imperative to investigate all alternatives properly. Our paper points out that the large-scale inhomogeneity explanation does not rest on the existence of a void, as is widely believed, and shows how two distinct observational functions can be built into the model using the $\Lambda$CDM functions as an example. This is very relevant to the assessment of the various alternatives.

    The amended paper is attached as a .zip file.


    Our reply to the referee reports was rather long and technical. It is accessible via this link.

    To this, referees B and C gave conflicting responses, so the PRL editors employed one more referee (``D'') to settle the dispute. Referee D found our paper not acceptable. On the basis of these reports, the editor of PRL finally rejected our paper. Here are the final reports that we received on October 19, 2009:

    ----------------------------------------------------------------------
    Second Report of Referee B -- LF11967/Celerier
    ----------------------------------------------------------------------

    The authors have satisfactorily addressed most issues raised in the first referee report.

    As a minor point, on page 1, paragraph 3, last sentence, the authors write that the L-T model is viewed as a first approximation with angular inhomogeneities smoothed out, and the central location of the observer is therefore an artifact of the smoothing and does not contradict ``any cosmological principle'' (presumably the authors are referring to ``the cosmological principle''). One could make this argument if the smoothing was only observational (and thus made after the position of the observer is fixed). However, in the L-T model the spherical symmetry enters the dynamics directly, so placing the observer in the centre is indeed a violation of the cosmological principle. The argument that the L-T model is only an approximation is of course still valid.

    Provided that the authors remove the last sentence of page 1, paragraph 3, the manuscript is suitable for publishing in Physical Review Letters.

    ----------------------------------------------------------------------
    Second Report of Referee C -- LF11967/Celerier
    ----------------------------------------------------------------------

    The authors did not address my concerns sufficiently to change my original opinion. As I had originally indicated, although the work is publishable in another journal, it in my opinion does not pass the importance criteria of PRL articles.

    ----------------------------------------------------------------------
    Report of Referee D -- LF11967/Celerier
    ----------------------------------------------------------------------

    This paper argues that Lemaitre-Tolman-Bondi (LTB; L-T in the paper) cosmological models can explain the apparent acceleration of the Universe (as observed through supernovae) without placing the observer in an underdense region. While the paper appears to be mathematically valid, I do not believe it should be published in PRL, for the following reasons.

    1. The basic result (that a much wider space of possible density profiles is opened up if the Big Bang is not required to be simultaneous) seems trivial. The paper might be interesting if it presented a new method of constructing LTB universes given a set of constraints, but in fact the methodological content is already in Ref. [15].

    2. A paper that used already-known equations to show a new type of physical effect or give new insight into their structure might also be interesting. However the paper appears to merely plug some simple functions into the Ref. [15] equations.

    3. Even if there is no new mathematical result, this paper would still be interesting if it presented a viable class of cosmological models distinct from "standard" $\Lambda$CDM or commonly discussed alternatives. However, the authors have not endeavored to show even qualitative consistency of their models with other standard tests of $\Lambda$CDM (e.g. the microwave background, large scale structure) or with tests that constrain radially inhomogeneous models (e.g. kinetic SZ effect).

    4. One would expect that the early history of the Universe (BBN, recombination, structure formation) should be very strongly affected by having nonuniform bang time, since perturbations in $t_B$ correspond to a "decaying mode" that should have been nonlinear at early times. I am disappointed that the authors have not attempted even a very rough argument as to why this might be observationally acceptable.

    5. The authors describe the "galaxy number counts" as m(z)n(z) -- but the true observable (which, sadly, we cannot yet predict from theory) is n(z). Therefore it is not of interest to fix m(z)n(z) to the $\Lambda$CDM value.


    Well, that was it. We had nothing more to expect from PRL. Only referee B took us seriously, having read and understood our message. The other referees repeatedly ignored the PRL publication rules and put demands on us that were impossible to fulfil within 4 pages. Referees A and D said things that were mutually contradictory: for referee A, Mustapha, Hellaby and Ellis had done sloppy work, and by simply citing them we committed a sin; for referee D, they had solved the problem, thus making our present paper trivial and needless.

    I felt trampled over by a mob. We knew we had a valid message, but the ignorant mob is always stronger and always prevails. This unfortunate story shows what kind of science astrophysics is. For us, MNC, KB and AK, our home turf is relativity, where objective mathematical and logical criteria rule exclusively. A genuine science is one that has (1) a well-defined subject of research, and (2) generally accepted objective criteria for distinguishing between truth and error or fraud. Astrophysics passes test (1), but not (2); the above-mentioned contradiction between referees A and D is an example. Having a paper accepted or rejected is just a lottery, the result of which depends on who happens to be the referee. Quite often, in disputes, it is more important who made a statement than whether he (she) was right. In consequence, it is extremely difficult to argue with such individuals using rational mathematical arguments -- because they do not understand such language.

    Take our referee A as an example -- he does not understand that a spacelike relation between events P and Q means that P cannot observe Q and vice versa. He "highly doubts" this and demands a "rigorous justification", not knowing that Einstein gave one in 1905. How can one argue with such an opponent? Deliver to him an instant relativity course? He will be bored after 5 minutes and say something like "you must feel the natural phenomena, and not be a slave of formalisms". Actually, referee A said something similar: "they have not made it clear physically why an overdensity should be able to have the same effect. Simply doing a numerical calculation and calling it a day isn't sufficient -- the authors need to explain their findings." An intuitive explanation is, of course, helpful whenever it can be given, but to find one is sometimes a separate difficult problem. Just recall how long it took relativists to understand that the simple Schwarzschild solution, found in 1915, predicts the existence of black holes and what properties black holes have.

    The obvious decision for us was to go back to Astronomy and Astrophysics. We did so and submitted our paper (without Charles in the author team) on November 2, 2009. Again, we had a long odyssey with the referee (just one this time) ahead of us. He was ignorant about several things in basic relativity, but was more flexible than the Great (and rigorous!) Scientists at PRL. In his first report he qualified our paper as publishable with several corrections. Some of his demands were acceptable, even if not necessary, but some others were definitely nonsense. However, he adopted a peculiar tactic: with every round of correspondence he would forget about some of his demands, apparently taking note of the fact that he was wrong, but without admitting it openly. In this diplomatic way we finally steered our paper (somewhat, but not harmfully, shortened and modified) through the rough seas to the safe haven. In the end, we are grateful to this referee for being open to negotiations.

    The first report came back on December 21, 2009, and is copied below. Of the issues raised in it, that of ``gauge dependence'' was the greatest nonsense, and the one most persistently defended by the referee. The discussion cannot be described in short, so if this text has any reader, he/she will have to read the correspondence to appreciate the torture we went through.

    This paper discusses the idea of using a Lemaitre-Tolman (L-T) solution to model the universe. The authors fit the two free functions in these models by matching observables with their corresponding values in a $\Lambda$CDM FRW cosmology. This is achieved using a method devised by Mustapha et al. The main claim of the paper is then that the space-time that results should be described as having a ``giant hump'' in the density profile, rather than a ``giant void''.

    The mathematics in the paper all appears to be correct (it is largely reproduced from the previous work of Mustapha et al), and I can see no errors in the manuscript of this kind. The interpretation of the mathematical results, however, seems less clear, and there are some issues that need to be addressed and clarified. As the main claims of the paper involve the interpretation of mathematical results, this seems to be an important issue. I will outline my concerns below.

    My one principal problem with this paper is the highly gauge dependent nature of the quantities that are being discussed. The authors wish to describe their mathematical results as a local over-density in the energy density on a space-like hypersurface of constant time. They try to make clear at numerous places in the text that this is different to a ``giant void'' (i.e. an under-density in the energy density). However, these statements are gauge dependent, and it seems to me that one could very easily describe the same space-time as either a ``void'' or a ``hump'', simply by choosing different gauges. This ambiguity is not discussed in the manuscript at all, and appears to me to be of central importance to the claims they are making. As the authors state in the manuscript, neither an under-density nor an over-density on a space-like hypersurface is directly observable, and so I see no reason to favor one description over the other.

    Also, the authors make much of the description ``giant void'', which has been used in numerous previous papers to describe the L-T models that have resulted from various considerations. They appear to consider it an inaccurate description of L-T models that can reproduce $\Lambda$CDM observations. However, the previous authors who have used the description ``giant void'' have often worked in a different gauge to the one that is used by the present authors. That two different analyses lead to two different energy density profiles, in two different gauges, is not particularly surprising. To make a meaningful comparison the authors should transform their results into the gauge choice made by the authors who use the description ``giant void'' (or vice versa).

    What is more, any attempt at a preferred description of a quantity on a surface of ``t=now'' requires a reasonable definition of ``now''. The most sensible description would appear to me to be given by insisting that the interval along a curve of a comoving fluid element should be given by a set amount, otherwise the universe appears to have a different `age' at different spatial locations. (The authors raise an objection in their text to describing $t-t_B$ as the age of the Universe, due to the neglect of radiation. In terms of the L-T model itself, however, it seems like an appropriate description). This corresponds to a particular gauge in which $t_B=$constant, which is the case in many of the papers who use the description ``void''. The present authors use a different gauge, and so their hypersurface ``t=now'' has fluid elements of different ages. Comparing the energy density in different spatial regions of the Universe with different ages will naturally give different results to comparing regions when they have the same age. For example, one could consider an FRW model in a gauge in which space-like hypersurfaces have non-constant density. Voids and humps would then appear on hypersurfaces of constant time, and whether a region is described as a void or a hump would be sensitive to the exact choice of gauge. This does not make either the description ``void'' or ``hump'' more appropriate for any given spatial region, and in the more sensible constant bang time gauge there is none of either.

    I consider the issues above to be very important for the manuscript under consideration.

    I also have a number of other more minor points relating to interpretation and presentation. In the text there are numerous issues linked to those I have outlined above, but I will not push these points any further. Instead, I will try and outline the other concerns that I have, in the order they appear in the article (not necessarily in order of importance).

    In the introduction the authors say ``We do not claim we are living at or near the centre of any spherically symmetric universe.'' The rest of the paper then goes on to describe a global L-T space-time. Such models appear to put the observer in a very special place, contrary to the statements made in this paragraph.

    Later in the introduction the authors write ``Why, then, has this information been overlooked and several researchers have been led astray by the frequent claims of a giant void being implied by an L--T model? We suspect the reason might be twofold.'' They then write about half a page on the alleged existence of the Hubble bubble and suggest this is why previous authors have found a ``giant void'' when considering L-T models. I think this is quite unlikely. Most of the papers that describe giant voids are the results of fitting to data (often with a priori constraints on the initial data, as the authors rightly point out). However, I very much doubt these initial constraints were constructed with the intention of making a giant void. They appear to be done for simplicity. If the data had suggested a giant hump after these initial simplifications I expect it would have been reported by most (if not all) of these studies. These comments therefore seem inappropriate to me.

    I do not see the need for Fig.1. The L-T model in the figure is not described anywhere that I can see, and showing that an anonymous model gives different results to $\Lambda$CDM seems unnecessary (it doesn't seem to add anything beyond the statements made in the text).

    The authors make clear they are trying to reproduce $\Lambda$CDM observables in an L-T model with no $\Lambda$. This is not the same thing as reconstructing the L-T functions from observable data, however, even if $\Lambda$CDM is consistent with all observations. This is hinted at by the authors in various places, but not made very clear. This seems especially important as the observables the authors consider have not yet been observed out to the redshifts they consider. Number counts, for example, as far as I am aware are only generally considered reliable out to redshifts considerably less than 1. Looking at Fig. 10 out to such distances gives a very different picture to considering much larger (unobserved) distances: In fact, it would appear to suggest a local void rather than a hump.

    In the results sections the authors implement a special procedure to deal with the apparent horizon. I have no problems with this procedure, and see why it is necessary. However, it breaks up the flow of the paper a little and I would suggest to the authors that they consider putting it in an appendix instead (this is very much an issue of style only, and I only mention it as a suggestion to improve the flow of the text).

    Figures 2 and 3 seem unnecessary. The same quantities, with different arguments, seem to appear later in Figures 5 and 6. I don't see why they need to be displayed twice. There are also some oddities with these figures. The x-axis is labelled with a coordinate distance in units of proper distance. The `scale factor' is now a function of r, so shouldn't the proper distance be a non-linear function of r? Also, the quantities being plotted could be more usefully displayed as $-2E/r^2$ and $M/r^3$, as they would then have straightforward interpretations in terms of FRW quantities (which is what many readers will be interested in). Removing the $r^2$ and $r^3$ factors would probably make the plots more useful also, as at present they look very much like simply $r^2$ and $r^3$ only. (I, for one, would also be interested to see if the spatial curvature at the centre of symmetry is higher or lower than the asymptotic value. Such a rescaling of variables may make this more apparent). These comments can be applied to relevant later plots as well.

    The authors write just before the start of Section 3.2 the following: ``Finally, as seen from Fig. 10, the current density profile does not exhibit a giant void shape. Instead, it suggests that the universe smoothed out around us with respect to directions is overdense in our vicinity up to Gpc-scales.'' It seems worth pointing out that at present observables do not extend as far as the authors have plotted these quantities. In fact, if the graph were restricted to the region covered by current observation it would show a void rather than a hump. The existence of the giant hump that the authors describe does not, therefore, seem to be a consequence of current observations, but rather of their expectation that future observations out to larger redshifts will follow the $\Lambda$CDM prediction. This seems worth making explicit.

    In the Discussion section the authors write: ``As we said earlier in this paper, the belief that an L--T model fitted to supernova Ia observations necessarily implies the existence of a giant void with us at the centre was created by needlessly, arbitrarily and artificially limiting the generality of the model.'' I do not consider these choices to be needless or arbitrary, although they do limit generality. It seems to me that the previous work that is being described by these words often has a considerably different aim to the present paper: They are attempting something closer to hypothesis testing (in a Bayesian sense, often). To try and compare a model with arbitrarily many free parameters (i.e. free function) to a model with one constant would result in the L-T models being dismissed as highly improbable. By parameterizing the functions in some way the number of constants to be fitted is then reduced to a workable number, making the proposed (less general) model much more favorable. In this sense, limiting generality is a very sensible, if restrictive, thing to do. I understand the ambiguity in this kind of reasoning, but certainly do not consider it needless (for the goals mentioned above). These comments could be applied to earlier discussion, too.

    Figure 20 shows a giant void model and the authors' giant hump model. I can't see what model the giant void is referring to though. It would be helpful to mention this in the caption and the text (if it is not already, and I can't see it). It's probably also worth mentioning that (I expect) the giant void model was fitted to data at low z, and not out to z=4. This would make the (unobserved) difference with the giant hump model more understandable.

    The authors go on to write: ``Thus, while dealing with an L--T (or any inhomogeneous) model, one must forget all Robertson--Walker inspired prejudices and expectations.'' This seems a bit too much, as FRW cosmology is very useful for understanding lots of aspects of more general cosmologies. I would suggest ``be cautious when applying'', rather than ``forget all''.

    Shortly afterwards the authors write: ``This putative opposition can then give rise to the expectation that more, and more detailed, observations will be able to tell us which one to reject. In truth, there is no opposition.'' This may not be strictly true. Observations of the kSZ effect, and the growth of linear structure may give effects in L-T models that cannot be easily reproduced by FRW. For the kSZ effect see

    arXiv:0807.1326 Title: Looking the void in the eyes - the kSZ effect in LTB models Authors: Juan Garcia-Bellido, Troels Haugboelle

    And for linear structure see

    arXiv:0903.5040 Title: Perturbation Theory in Lemaitre-Tolman-Bondi Cosmology Authors: Chris Clarkson, Timothy Clifton, Sean February.

    These effects seem worth mentioning.

    Later the authors write: ``Thus, if the Friedmann models, $\Lambda$CDM among them, are considered good enough for cosmology, then the L--T models can only be better''. This is true in terms of reproducing some observations, but for hypothesis testing they could be disfavoured (see comments above on hypothesis testing).

    Shortly afterwards the authors then write: ``the question that should be asked is `what limitations on the arbitrary functions in the model do our observations impose?' rather than `which model better describes a given situation: a homogeneous one of the FLRW family, or an inhomogeneous one?'''. In terms of trying to reproduce $\Lambda$CDM observations with an L-T model this may be true (although see comments above). But in terms of trying to fit the two models to observations it seems perfectly plausible to ask which model better describes them: The L-T model will probably usually fit better, but it is perfectly possible that $\Lambda$CDM could in the future be shown to be inconsistent with observations, or more favorable, in a Bayesian sense.

    Finally, the last paragraph seems contrary to the rest of the paper. The paper up until this point has been about reconstructing the L-T functions by matching observables to their expected values in $\Lambda$CDM. If this is the programme, then there is only going to be one result -- the model the authors have found. In this case there seems no point comparing to any other model, as it will not reproduce $\Lambda$CDM as well. Presuming that future observations are in keeping with FRW, the giant hump model will fit better. Of course, future observations may turn out to be in conflict with $\Lambda$CDM, but this does not seem to be what this paragraph is considering.

    The list of references also seems to be missing a few recent publications that should probably be included for completeness. These are:

    arXiv:0807.1326 Title: Looking the void in the eyes - the kSZ effect in LTB models Authors: Juan Garcia-Bellido, Troels Haugboelle

    arXiv:0903.5040 Title: Perturbation Theory in Lemaitre-Tolman-Bondi Cosmology Authors: Chris Clarkson, Timothy Clifton, Sean February.

    arXiv:0810.4939 Title: The radial BAO scale and Cosmic Shear, a new observable for Inhomogeneous Cosmologies Authors: Juan Garcia-Bellido, Troels Haugboelle

    arXiv:0909.1479 Title: Rendering Dark Energy Void Authors: Sean February, Julien Larena, Mathew Smith, Chris Clarkson

    arXiv:0807.1443 Title: Living in a Void: Testing the Copernican Principle with Distant Supernovae Authors: Timothy Clifton, Pedro G. Ferreira, Kate Land

    arXiv:0809.3761 Title: Can we avoid dark energy? Authors: J. P. Zibin, A. Moss, D. Scott

    arXiv:0902.1313 Title: What the small angle CMB really tells us about the curvature of the Universe Authors: Timothy Clifton, Pedro G. Ferreira, Joe Zuntz

    The last of these seems particularly relevant, as it also finds that an inhomogeneous bang time is necessary to fit to cosmological observables.

    If the authors were to cut back on some of the Figures, and reduce the discussion in the introduction, I expect the paper could be reduced in size by as much as 25\% without losing any scientific content.


    We decided to satisfy the referee on all points where his demands were not nonsense, but where he was objectively wrong we had to take the risk and try to educate him a bit about relativity. Hence, unfortunately, our response (sent to AA on January 12, 2010) was even longer than his writeup, so we do not quote it in this text. It is accessible via this link.

    The second referee report reached us on January 31, 2010. It was considerably shorter, but the referee persisted in pushing the main bit of nonsense: his insistence that what we did was "gauge dependent". His other points turned out in the end to be just a nuisance, from which he partly backed off. Here is the second report:

    The authors have made some attempts to address the points I raised in my first report, which have improved the paper, but I consider some of their responses to be unsatisfactory. In particular, they have not addressed what I consider to be the principle problem with this paper. I will reiterate the outstanding points below, and why I am dissatisfied by the authors' responses.

    My principal objection to the paper was that their main result is gauge dependent (dependent on the foliation of space-time with hypersurfaces of constant t), and that the authors do not discuss this at any point, do not consider any other gauges (making it impossible to tell to what extent their result is general, or only true in the one gauge they consider), and do not consider what I would consider to be the most appropriate gauge (if such a thing exists at all): One in which they compare the energy density in different regions when they are at the same age (proper time from the initial singularity, as measured along the world-lines of the dust particles).

    I will now address the authors' reply on this point. I am aware that it is usual to describe spherically symmetric dust solutions in a comoving and synchronous gauge, and that once this choice has been made there is only one coordinate freedom left to redefine $r->f(r)$. However, this does not mean that a synchronous and comoving gauge is a unique foliation, and just because it is convenient, and often used, does not make it more fundamental in any way. The authors may have spelt this out for me as I suggested they were not making a fair comparison with the previous works they reference which find a `giant void' when they considered a universe with constant bang time. My point was that in these previous studies the coordinate system that was used satisfied the following conditions simultaneously:

    (1) It was comoving and synchronous. (2) It had surfaces of constant t at which each point in space has the same age (as defined above).

    The authors considered the case when (1) is maintained, when they give up a constant bang time, but (2) is not. This is all well and good, but it is a choice that is likely to have significant consequences for the results they claim. If instead they had chosen a gauge in which (2) was maintained (as I suggested they consider in my previous report), and (1) was not, then they may have come to very different conclusions. It may be the case, of course, that the interpretation of a `giant hump' exists in both approaches, but unfortunately it is impossible to tell from their current analysis. That maintaining (2) results in off diagonal components to the metric, or to space components for $u^a$, is of very little concern. Constructing an energy density profile by comparing the energy density of different regions of space when they are at different ages seems considerably more troubling to me, especially if it is presented without any discussion of gauge dependence.

    I will now move on to the other points.

    I had previously objected to the authors' use of the phrase `We do not claim we are living at or near the centre of any spherically symmetric universe', as they put us at the centre of a large spherically symmetric region. The authors objected to my use of the word `global' in my description of the L-T region they consider. They are technically correct, the L-T region they consider is not necessarily global, but they do consider an L-T region that goes out to at least as far as z=4, which is a significant fraction of the entire observable universe. My point that they are, in fact, putting us in a special place in the universe (contrary to their statement quoted above) therefore seems entirely appropriate.

    I repeat my objection to the authors' implied claim that previous work on this subject has been biased so as to produce a void model, rather than a local over-density. They have provided some quotes from the papers they cite, but these do not in my opinion go far enough as to be considered support for what is, if true, a very serious claim indeed. It seems like most of these previous papers have used the Hubble bubble (or other similar ideas) simply as motivation for why a very large structure might be considered plausible. That they describe their model as a void from early on in their papers is understandable if this is what they found from their analysis. It is a large leap to suggest from this that they biased their own results in order to find a void.

    I mentioned a couple of times in my previous report that the authors should make clear the consequences of trying to reproduce $\Lambda$CDM, rather than fitting to observables. In response they added a sentence at the end of section 1 which adds nothing, and may as well be deleted. My point was that current observables do not extend anywhere near as far as they are being extrapolated to, and that if they consider them out to the ranges in which they are measured (z < 1) then their results would imply a local under-density rather than an over-density. They have replied that the local under-density they find is smaller than the `giant voids' that have been suggested by some, but this was not my point. My point is that their conjecture of a `giant hump' relies on the assumption that observables that have not been measured will follow $\Lambda$CDM out to high redshifts. If they consider only the range of z where observations have been made then no giant hump is apparent. There is, in fact, an under-density (which could be described as a void, even if it is less deep than the giant voids found by some). That they are reproducing $\Lambda$CDM rather than fitting to observables is also a non-trivial operation even in the region where observables have been measured. A discussion of errors, and how close they need to be to $\Lambda$CDM, would seem appropriate.

    Moving on to the discussion, I made the point in my last report that, contrary to their statements, there are observations that can be made to distinguish L-T from FRW. The authors' reply appears to indicate that they were trying to say that FRW observables can always be reproduced in L-T space-times, as FRW is a limit of L-T. Such a statement does not need to be made (it is obvious), and seems out of place in a paper that is otherwise trying to reconcile L-T space-times without $\Lambda$ with FRW space-times with $\Lambda$. To say that L-T with $\Lambda$ can reproduce FRW with $\Lambda$ seems unnecessary, and, given the context of the rest of the paper, open to misinterpretation.

    The authors' replies about hypothesis testing, and being able to distinguish between FRW and L-T, also seem misguided. That 'astronomers had better invent a method to deal with models that contain arbitrary functions' shows a misunderstanding of what is trying to be achieved: A discriminator of whether it is worth introducing extra degrees of freedom to explain observations, or whether a simpler model that fits less well is more likely to be true. It is almost always possible to construct arbitrarily complicated models to get better fits to data -- the question is whether it is worth the extra complication. Their comments about FRW also show a misunderstanding. There are those in the cosmology community who believe that FRW is not only a good average description of the Universe, but that space-time (in our observable universe) can be described as exactly FRW, up to linear perturbations, at all points (except black holes). Such a view can be disproven, and making a comparison of this application of FRW with L-T does seem possible.


    This is how we replied to the above, on February 3, 2010:

    1. On "gauge dependence"
    -----------------------------------
    The comoving and synchronous (CS) foliation in any L-T model IS unique. It is uniquely determined by the flow lines of matter in the spacetime, which form a geometric, coordinate-independent structure. This foliation consists of spaces that are orthogonal to all the flow lines. After the CS coordinates (t, r) are chosen, they are determined up to the transformations t = t' + constant, r = f(r'), where f is an arbitrary function. These transformations just relabel the spaces of constant t and the flow lines of matter, but the spaces are not thereby transformed in any way.

    The referee's conditions (1) and (2), which he/she says were fulfilled in the previous works, are not properties of the coordinate system alone. They can hold simultaneously only when the bang-time function t_B is constant. This is a coordinate-independent geometric property OF THE SPACETIME: when t_B is not constant, conditions (1) and (2) are mutually exclusive and can never hold simultaneously.

    When condition (2) is maintained, but (1) is not, spaces of constant time t' have no clear physical meaning apart from being geodesically parallel to the Big Bang set. Contrary to what the referee seems to suggest, not a single paper on the L-T model has ever been published that would use such coordinates with non-constant t_B. Consequently, even if we had calculated the density distribution in a constant t' hypersurface (which is numerically possible), we could not compare it to any other author's result because such results do not exist in the literature. ALL authors of papers on astrophysical interpretation of the L-T model worked in the CS coordinates, and in this sense nothing that was ever said about this model is "gauge dependent".

    2. On our special position in the Universe.
    --------------------------------------------------------

    We did put the observer at the centre of a large L-T region. But there are many such regions in the whole Universe - possibly infinitely many, as in the k < 0 and k = 0 Friedmann universes. Thus there is no reason to claim that our observer occupies a special position in the whole Universe. The referee picked up one sentence for his/her criticism, while we believe the whole paragraph containing that sentence makes our point clear and does not put it in opposition to the rest of the paper, which was the original objection of the referee.

    3. On the "Hubble bubble"
    ----------------------------------

    Our intention was to imply the same thing that the referee says: ``...most of these previous papers have used the Hubble bubble (or other similar ideas) simply as motivation for why a very large structure might be considered plausible''. Only a few papers that we explicitly cite indeed represented the view that a large void objectively exists. Even so, we did not use the word ``biased'' in this context, so we think the referee's objection is excessively strong. (N. B. we used the word ``bias'' only once in the whole paper - we verified this using a text-editor - and it was in section 3, in connection with quite another matter.)

    4. On whether the giant hump is apparent
    -------------------------------------------------------
    Our paper was meant to be an example showing that a structure very different from a giant void may be obtained if the L-T model is employed in its full generality. We never intended to claim (and we explicitly said so) that a hump is a necessary consequence. So we do not see the need to do what the referee is trying to force us to do, namely ``A discussion of errors, and how close they need to be to $\Lambda$CDM''. Our sole thesis is contained in the title of the paper: the giant void is not a unique implication of models describing the SNIa observations.

    The referee did not take into account what we said in the first reply: even if we limit the graphs in Figs. 10 and 19 to the z < 1 region, they will clearly show an overdensity rather than a void. In both figures, the observer is at the centre of a small-scale minimum of density, but the value of this minimum nearly equals the large-scale cosmic average, and the density goes up to values definitely above the average already at small distances. Thus, an observer who knows the cosmic average density and sees our type of distribution in her vicinity would immediately know that she sits in a local minimum on top of a larger-scale overdensity. Such a configuration cannot be called a void.

    5. On "hypothesis testing"
    ----------------------------------
    We can only repeat what we said before: if an L-T model with an a priori limited generality is compared to FLRW and found inferior to the latter, then we have eliminated only this particular subcase of the L-T model. It is thus incorrect to say that in this way the whole L-T class was proven inferior - but this is how some authors had dealt with this problem.

    -----------------------------------------------

    Except for point 1, where the referee makes a definitely erroneous claim, all the other points concern topics which are not of primary importance for our main thesis. It seems the referee puts excessive emphasis on them and thereby creates a conflict where it does not really exist. In fact, the present discussion is no longer directly connected to the subject of our paper, but rather to the differences between our intended interpretation of some of our statements and the referee's way of understanding our wording. We suggest that rejecting our paper on the basis of such differences would not be fair.
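    For clarity, let me state explicitly the two slicings that this whole ``gauge'' dispute was about (the symbols $\Sigma$ and $T$ are introduced here only for brevity):

        $\Sigma_{t_0} = \{t = t_0\}$: the comoving and synchronous slices used in our paper, orthogonal to the dust flow lines;

        $\Sigma_{T} = \{t - t_B(r) = T\}$: the constant-age slices advocated by the referee, on which every dust particle has the same proper time $T$ since its own Big Bang.

    When $t_B$ is constant, the two families coincide. When $t_B$ is not constant, the normal to $\Sigma_T$ is proportional to $dt - t_{B,r}\,dr$, so $\Sigma_T$ is not orthogonal to the dust flow; this is why conditions (1) and (2) above can never hold simultaneously. Note also that the density $\rho$ is a scalar, so its value at any spacetime point is the same in both descriptions; what changes between the two slicings is only which points along the different world lines are being compared.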


    Our referee was not satisfied with these answers and explanations. He kept pushing his claim about ``gauge dependence'', taking it to an even higher level of nonsense. His way of pushing this question showed a self-assurance typical of an astrophysicist who learned physics by reading encyclopaedia entries (hopefully, it was not Wikipedia). Here is his third report, received on February 11, 2010:

    The authors have not addressed the points I raised in my previous report. I will comment on their reply below.

    1. On "gauge dependence"
    -----------------------------------

    We are obviously misunderstanding each other. I will try to clarify. I assume that the authors recognise that general relativity is diffeomorphism covariant, and that in this sense there is no such thing as a preferred foliation. The hypersurface they take to record the energy density on must therefore be considered a choice. So far, there can surely be no disagreement.

    My complaint has been that they are presenting their result of finding a "giant hump" as if it were not foliation dependent, when it most certainly is (see above.) We can argue about whether a CS foliation is the best way to record the energy density, but they must surely agree that if they chose a non-CS foliation (as they are completely at liberty to do) then their results are going to change. It is necessary to state that this is the case (that they work in CS gauge, and that the interpretation of a "giant hump", or otherwise, depends on this). I hope this is not controversial. The authors may consider it obvious, but it needs to be said.

    What appears to be the more controversial suggestion I made was that the authors should consider a different gauge to see if their results hold more generally, or if it depends on a CS foliation. For this purpose I suggested a gauge where each space-like region has the same age. As I've tried to explain, this seems very reasonable to me, and easy to achieve. All that needs to be done is to use their existing best fit solution and plot $\rho(t_1+t_B(r),r)$, where $t_1$ is a constant. I do not doubt that most other studies of L-T use a CS gauge, and only a CS gauge, but most other studies do not make as their central claim a strongly gauge dependent result (in the sense of the first paragraph above.) This is easy to do, and will, I believe, make what is going on clearer. (I suspect that the appearance of a "giant hump" may well be due to the fact that they are comparing the energy density of the central region when it is considerably younger than the outer regions, but cannot tell given their current analysis.)

    [The authors do not need to remind me again that once comoving and synchronous coordinates have been specified then the foliation is unique, that most people use comoving and synchronous coordinates, that in comoving and synchronous gauge the coordinates are specified up to a redefinition of r, or that when $t_B$ is not constant then there does not exist a CS foliation with hypersurfaces of constant age. I am well aware of these things.]

    2. On our special position in the Universe.
    ---------------------------------------------------

    In the paragraph in question the authors are clearly trying to argue that they are not putting the observer in a special place, as this is the reason for the paragraph. That other L-T regions can exist outside of the observable universe does not mean the observer at the centre of one of these regions is not in a special place. The paragraph is misleading as the rest of the paper considers an observer at (or very near) the centre of a Gpc-sized void. This must be considered special by anyone's standards. The paragraph in question therefore seems misleading: The observer is in a special place.

    3. On the "Hubble bubble"
    ----------------------------------

    The long passage on the Hubble bubble, Tomita etc. follows the authors questioning why others have been "led astray by the frequent claims of a giant void being implied." It seems clear that they are implying a sociological bias is responsible for this. That they do not explicitly use the word bias is beside the point.

    Having said this, I have now read this section again, and am unsure whether the authors are implying the authors who found a void have been led astray, or the readers of their papers have been led astray. I had previously thought they meant the former (which their reply would also seem to indicate.) If this is the case, then it seems highly inappropriate to write this in an article without better evidence. If it is the latter, I recommend the authors make it clearer that this is the case. (Even if it is the latter, it still seems to me to be a bad idea to be quite so derogatory, and that the paper would benefit from these comments being removed.)

    4. On whether the giant hump is apparent
    ---------------------------------------------------

    Supernovae observations extend to about $z \approx 1$. Reliable number counts from, say, SDSS extend to about $z \approx 0.1$. Out to $z=0.1$ it seems clear that the energy density is increasing as a function of r. $z=1$ usually corresponds to about $r=4$ Gpc, which would appear to be just after the maximum density. In any case, errors are large at $z=1$, so it seems unlikely a maximum would be apparent even if real observations out to $z=1$ were used. Using the observables the authors consider, and current data, I therefore expect the central observer would reconstruct an energy density that is increasing as a function of r, not decreasing. The authors cannot be sure of the contrary unless they actually do the analysis with real data.

    It therefore seems at least a possibility (and in my opinion likely) that a giant hump is a result of extrapolating observables in the expectation they will follow $\Lambda$CDM, rather than the result of any real data. This seems worth mentioning, so readers are not inadvertently misled. I do not see why the authors should refuse to at least discuss what real observables mean for their study, even if they do not want to get their hands dirty with data. At least some readers will be interested in this.

    [With only supernova observations and number counts I do not see how an observer would be able to tell if their local environment is above or below the cosmic average density (whatever that means in an inhomogeneous model). Surely they can only determine if it is the case that energy density increases or decreases away from them up to the range their observations cover. Anything could happen at greater distances.]

    5. On "hypothesis testing"
    -----------------------------------------------

    I agree with the statements the authors made in reply here. The reason for the discussion of hypothesis testing was with regard to issues raised in my first report. If they agree with what I have written on hypothesis testing (which is very mainstream), then I hope they will address the issues I raised in that report.

    I also hope I have now communicated more effectively what I have been trying to say about gauge. It is of central importance to the claims the authors are making. The other points are not trivial either, and I consider it the job of a referee to raise points they consider to be misleading or inappropriate. I don't see how simply raising and discussing these issues can be considered as "excessive emphasis."


    Faced with such stubbornness we had to resort to one more attempt at educating the referee. Part of that education effort was going along with the referee's demand to produce graphs of density on hypersurfaces of constant time (i.e. of equal age of the dust particles since the Big Bang). They did not confirm his expectations -- see Figs. 10 and 19 in the final version, available from http://arxiv.org/PS_cache/arxiv/pdf/0906/0906.0905v5.pdf -- they, too, showed a giant hump of a similar shape, though with some quantitative differences. Hence, our next reply is again very long. It contains an extended explanation of what it really means to be "gauge" (i.e. coordinate) dependent. Also, we had a clever idea for dealing with the referee's other points (from the beginning they had seemed to us a needless nuisance anyway). Namely, we asked him for suggestions on exactly what wording we should use in the passages that he questioned. This tactic turned out to lead to a final solution quite quickly. Our third reply to the referee, sent to AA on March 3, 2010, can be read here.
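
    To make the two foliations concrete for the reader, here is a brief reminder of the standard L-T setup (this sketch is ours, written in the commonly used notation with $c = 1$, $\kappa = 8\pi G$ and $\Lambda = 0$; the paper itself should be consulted for its exact conventions):

    \[
      {\rm d}s^2 = {\rm d}t^2 - \frac{R_{,r}^{\;2}(t,r)}{1 + 2E(r)}\,{\rm d}r^2
      - R^2(t,r)\left({\rm d}\vartheta^2 + \sin^2\vartheta\,{\rm d}\varphi^2\right),
    \]
    \[
      R_{,t}^{\;2} = 2E(r) + \frac{2M(r)}{R}, \qquad
      \kappa\rho = \frac{2M_{,r}}{R^2\,R_{,r}}.
    \]

    Integrating the evolution equation introduces the bang-time function $t_B(r)$: each dust particle emerges from the singularity at $t = t_B(r)$, so $t - t_B(r)$ is its age measured along its own world line. The density profiles originally shown in the paper are drawn on the hypersurfaces $t = {\rm const}$ of the comoving--synchronous gauge, while the ones demanded by the referee are drawn on hypersurfaces of equal age, $t - t_B(r) = {\rm const}$ (the "EAF gauge" of the report below). The two families coincide only when $t_B$ is constant, which is precisely the case of the constant-bang-time models in which the giant voids had been inferred.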

    To this, we obtained a reply on March 23, 2010. The referee gracefully backed off from some of his unreasonable demands without admitting that he had backed off. His report 4 is enclosed below. He did fulfil our request and suggested a wording for the passages he questioned (and, let us stress it, very gallantly added that we are not obliged to use his words verbatim). To me, his suggested pieces say exactly the same things as our original pieces, only with somewhat different words. But never mind: precisely because of this similarity they were perfectly acceptable. Here is the referee's report 4:

    The authors have made a significant step to address my main concern with their paper by including the energy density contrast on a hypersurface of constant age in Figures 10 and 19. I will therefore report again on their paper. As before, I will address comments to the outstanding points in previous reports.

    At points in their last reply the authors have asked for ``hints'' on how to address my concerns. To help them implement what I have been suggesting, I will therefore write some example text at various points in the report below. This is intended to suggest to them how the point could be addressed, and is not meant to be prescriptive or dogmatic.

    1. On ``gauge dependence''
    ---------------------------------------------

    The authors have taken every opportunity to misinterpret my comments on this subject so far, even to the extent of apparently misunderstanding a statement that was intended simply to say that general covariance is manifest in general relativity. I have tried to explain my point in a variety of different ways, so am finding it increasingly difficult to understand how it can be that they are still making such strenuous objections to what are very simple facts: general covariance exists; one can always choose coordinates to consider any set of space-like hypersurfaces one likes; while well motivated, a CS gauge results in comparing the energy density of different regions when they are at different ages (when the bang time is not constant). These are all facts.

    Most of the authors' replies on this point have been to point out why a CS gauge is well motivated. I do not disagree: it is very well motivated. However, it is not the only gauge that one may consider. I am not disputing its uniqueness as defined by the flow lines of the fluid. I am only stating that it is not unique in the more general sense that I am free to consider any foliation I like without changing any observables (observables being independent of the choice of coordinates). As I am sure the authors are aware, it is quite normal to consider non-CS coordinates in cosmology. I will not attempt to correct their misinterpretations of my comments on this subject individually, as this would take some time, and would only distract from the task at hand.

    Having got that out of the way, I think the addition of the extra information to Figures 10 and 19 adds significantly to the paper. I will consider this point addressed to my satisfaction if the authors add a paragraph of the type I have written below to the introduction. (This is intended only as a suggestion of what should be communicated; I am not trying to write their paper for them, although they are welcome to use these words if they like.)

    ``As is usual in the study of L-T models, we choose to use a comoving and synchronous coordinate system for the majority of this work. Such a coordinate system is uniquely defined by the flow lines of the fluid and allows the line-element to be written in a simple form. However, it is of course the case that quantities such as energy density profiles on space-like volumes are sensitive to the choice of hypersurface on which they are recorded. To illustrate this dependence, and the effect of considering other well motivated foliations, we also present our final results on a set of hypersurfaces in which each fluid element is the same time-like distance from the initial singularity (along the world-lines of the dust particles). Such a choice allows us to consider the energy density of different regions when they are at the same age, and becomes a comoving and synchronous coordinate system in the constant bang time models where giant voids have often been inferred.''

    No doubt the authors will object to what I have written, but I expect they understand what it is I am trying to communicate. Nothing above is in any ``contradiction to the basic principles of relativity theory'', nor is it ``basically incorrect'' in any way.

    When they introduce Figures 10 and 19 in the text I suggest they should also mention that these plots contain information on the density contrast in the new gauge. If I were them, I would also want to comment that the giant hump interpretation is most apparent in the comoving and synchronous gauge (or however it appears to them).

    The authors have written three points aimed at me in their last reply. I will answer these now.

    1. The ``hump'' takes a different form in the EAF gauge; the form of the density profile is obviously dependent on the choice of gauge. Part of the reason I have been asking them to do this is to see if the foliation dependence makes a significant difference to the interpretation of a ``hump''. It seems that it makes some difference, but that over large enough distance scales an overdensity is still apparent. This is useful information for the reader.

    2. I don't think they need to explain their choice of coordinates any further. If they are concerned about the interpretation of $R(t_0,r)$ in Figures 10 and 19 then they could always use ``r'' rather than ``R'' on the x-axis (although it appears clear enough to me what is intended). It is still perfectly acceptable to talk about spatial distance in a hypersurface that is not orthogonal to the flow lines of a fluid.

    3. It also seems clear enough what $\rho_0$ is, and what the different lines on their plots represent. I see no need for further explanation. What they are proposing in this point would seem to complicate things more than clarify them.

    If the authors add a paragraph communicating what I have tried to express in my example paragraph above, and leave the new data in their plots, then I will consider this point to be addressed to my satisfaction.

    2. On our special position in the Universe
    -----------------------------------------------------

    The paragraph in question does not make any sense and should be removed.

    That the L-T solution is an approximation to the true geometry of the Universe does not justify their assertion that the cosmological principle is not contradicted. Neither does the possibility that there are other Gpc-sized structures in the Universe. The cosmological principle deals with the typicality of observers, not just non-uniqueness.

    3. On the ``Hubble bubble''
    --------------------------------------------------

    The authors say they are ready to remove the section that I find inappropriate. I recommend removing the entire section that speculates on the motives of others, starting from ``Why then has this information been overlooked?''.

    4. On whether the ``giant hump'' is apparent.
    -----------------------------------------------------------

    My point on this is not that they shouldn't use the word ``hump'' to describe the density profile that they find in the CS gauge; that seems like a good word to describe the shape. What I am suggesting is that they add a short comment on what currently existing data implies. A ``hint'' of how to do this is to add a comment of the type below (again, this is meant as an example of what I would like to see communicated, not necessarily exactly what to write).

    ``It is clear from the energy density profiles shown in Figures 10 and 19 that the L-T models we have reconstructed have a large overdensity, or ``giant hump'', when viewed over large enough scales. However, as we have made clear throughout, these profiles are the result of reproducing observables that match $\Lambda$CDM predictions, and not from fitting to any real data. Luminosity distances from supernova observations, and number counts from galaxy surveys, do not currently extend much beyond $z=1$ and $z=0.1$, respectively. As such, using real observables, it would only currently be possible to perform a reconstruction of a limited part of the full structure we have found here, and even then only up to the degree allowed by the errors associated with these quantities. If one were to attempt such a reconstruction, it appears from Figures 10 and 19 that one may, in fact, reconstruct a local energy density profile that is an increasing function of $r$, and not decreasing, due to the limited extent of these observations in $z$ (this is particularly true in the equal age foliation). Our interpretation of a ``giant hump'' should therefore be understood as corresponding to an extrapolation of observable quantities under the expectation that they will follow $\Lambda$CDM, rather than being directly implied by any currently known observations themselves.''

    To reply to the disputed point on ``cosmic average density'': As far as I am aware observers do not directly measure any quantity that could be considered a cosmic average density (in the sense of an average density of the observable universe). Cosmological probes, such as the CMB, require a model, which is almost always taken to be FRW.

    5. On ``hypothesis testing''
    --------------------------------------

    We seem to agree on this point. As a ``hint'', I recommend rewording the existing passage so that it communicates something similar to the passage below.

    ``When considering models that go beyond the FLRW approximation, one may ask either `what limitations on the arbitrary functions in the models do our observations impose?', or `which model best describes a given situation: a homogeneous FLRW model, or an inhomogeneous one?' The latter of these questions has often been asked in the context of comparing L-T models without $\Lambda$ to FLRW models with $\Lambda$, and is of much interest for understanding the necessity of introducing $\Lambda$ into the observer's standard cosmological model. Such hypothesis testing questions are often posed in cosmology, but are difficult to address in the current context as they require artificially limiting the generality of the models in question (in order to have a finite number of parameters, so that the test can be performed). Given the lack of motivation for exactly how to perform such a limitation, one is then left in the undesirable circumstance of (potentially) dismissing particular L-T models, while being left with an infinite number of remaining L-T models to evaluate. We therefore prefer to consider the former question. In order to reasonably answer this for the L-T model, a general framework for interpreting observations...''

    Once again, this is only intended as a suggestion of what I would like to see communicated, not an exact prescription.


    This time, the requests of the referee were very nearly acceptable. We deleted the two segments he did not like, we used his last two suggested insertions verbatim, and we modified his first suggested insertion just a little. In this form, the paper was at last accepted for publication on May 3, 2010.