Here is a webpage that discusses the two types of studies, along with their usual limitations, differences, and other important points. The main difference that is the starting point for this post is that a “prospective” study starts now and ends at some point in the future; while a “retrospective” study starts now and looks back to the past.
Why is that important? For many reasons, one being that it is usually, if not always, difficult to reconcile the statistics produced by one type of study with those produced by the other.
Here is an example of what a retrospective study looks like — a person looks at babies who were born last year, and counts up mortality and morbidity, based on whether the baby was born by vaginal birth or C-section. A prospective study looks more like this — a person finds women who are currently pregnant, and follows them until they give birth, and sees how many of them had a C-section and how many gave birth vaginally, and what the mortality and morbidity of each group was.
In some ways the two may look similar, but the difference matters when you compare and contrast the Johnson & Daviss Certified Professional Midwife study, which was published in the BMJ, with the 2003-2004 CDC statistics. (I’ve written about these from a different angle here.) The BMJ study was prospective — that is, it took a group of then-pregnant women and followed them through the end of pregnancy and six weeks after birth. The CDC stats are retrospective — that is, they show (among other things) where the baby was born, whether the birth was vaginal or by C-section, and who the birth attendant was; but they say nothing about the plans or intentions of the mother.
Since the CDC stats are merely retrospective — showing what happened, not what was intended — there were doubtless many planned home births that ended up as hospital transfers. Most studies, as well as most midwives’ own records, indicate about a 10% hospital transfer rate; very few of these transfers are emergencies, with most being due to women deciding they want or need medical pain relief, or a desire or need for labor induction or augmentation. So, if the CDC statistics are correct (which I actually doubt, since there are some major discrepancies, including in-hospital births attended by “other midwife,” several “out of hospital” C-sections, and many thousands of C-sections performed by CNMs), then they reflect only about 90% of planned home births. (Whether that has any bearing on mortality rates is debatable, especially since these statistics are just that — statistics — not a study showing whether any apparent difference is actually statistically significant.) There were over 35,000 births attended by “other midwife” in 2003-2004; but if this is only 90% of the true number, then there would have been about 40,000 planned midwife-attended home births (and probably some of the out-of-hospital births not attended by midwives were also planned home births where the midwife couldn’t make it in time, either because of a precipitous birth or because [like me] the mother just didn’t notify the midwife in time). Had these women all participated in a prospective study, we would know for sure what their plans were and what the outcomes were. Since the data are retrospective, we don’t.
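For anyone who wants to check the arithmetic, the adjustment described above is simple division. Here is a minimal Python sketch, assuming the roughly 10% transfer rate and the 35,000-birth figure quoted in the text (both rough numbers, not exact data):

```python
# Back-of-the-envelope adjustment: if about 10% of planned home births
# transfer to the hospital, then the births recorded as completed
# out-of-hospital represent only ~90% of the births that were planned
# at home.

def estimate_planned(observed_home_births, transfer_rate=0.10):
    """Estimate planned home births from those completed at home,
    given an assumed hospital-transfer rate."""
    return observed_home_births / (1 - transfer_rate)

print(round(estimate_planned(35_000)))  # just under 39,000 planned births
```

With “over 35,000” recorded births, the true planned total lands in the 39,000-40,000 range mentioned above.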
Another thing this data lacks is whether these “other midwives” were CPMs or not. This is one reason why the BMJ study looked only at CPMs, and not at non-certified midwives — CPMs had a standardized curriculum and training, whereas non-certified midwives may or may not have. Since my state is unregulated, I could call myself a midwife, although the closest I’ve come to midwifery school is reading Ina May’s Guide to Childbirth and a few other pregnancy and birth books that are also included in some midwifery curricula. But I shouldn’t be lumped in with well-educated, well-trained midwives. It’s simply not fair.
The BMJ study was indeed prospective — it tracked women who intended to give birth at home, listing the outcomes, if known (miscarriage, stillbirth, transfer for pregnancy complications, etc.), of all the women who initially registered. But it is not a valid comparison to say, “Hey, let’s look at these CDC stats over here, find women with similar characteristics, and call them a match.” Only a similar prospective group, cared for by doctors, would be a valid match. Let me give you some examples to clarify this.
Let’s design, on paper, a study to compare the outcomes of women who plan on giving birth at home with a CPM against those of women planning a hospital birth with an OB. First, we can’t just randomly select the same number of women from an OB’s client roster, because some women would be excluded from home birth from the start, so a random sample would not be a valid comparison. The women have to be matched in terms of health, because it’s not fair to have only healthy, low-risk women in the CPM group and a mixed population in the OB group. They should also be matched as closely as possible in as many areas as possible; here are the ones that first spring to my mind: age, race, parity (how many births they’ve already had), marital status, and smoking. There is a recognized difference in neonatal death rates for each of these, and smoking obviously increases the risk of complications including preterm birth and stillbirth, so it wouldn’t be fair for one study group to be 95% married non-smokers while the other was only 75%.
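To make the matching idea concrete, here is a hypothetical Python sketch — the field names and the five-year age bands are my own assumptions, not from any actual study. Each woman is reduced to a profile of the characteristics listed above, and each CPM client is paired with an OB client who shares that exact profile:

```python
from collections import defaultdict

# Hypothetical matching step: pair each CPM client with an OB client
# whose profile (age band, race, parity, marital status, smoking)
# is identical. Women with no available counterpart are dropped.

def profile(woman):
    """Reduce a woman's record to the matching characteristics.
    Ages are grouped into five-year bands so matches aren't exact-age."""
    return (woman["age"] // 5, woman["race"], woman["parity"],
            woman["married"], woman["smoker"])

def match_groups(cpm_clients, ob_clients):
    pool = defaultdict(list)
    for w in ob_clients:
        pool[profile(w)].append(w)
    pairs = []
    for w in cpm_clients:
        candidates = pool[profile(w)]
        if candidates:                      # only keep matched women
            pairs.append((w, candidates.pop()))
    return pairs
```

A real study would match far more carefully (statisticians use techniques like propensity scores), but the principle is the same: comparisons are made only between women with like profiles.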
Just in case you didn’t read some of the additional discussion that the BMJ authors have available: the reason they did not recruit a prospective OB group like the one I’m describing here is that it was simply too expensive for the grant they were given for the study.
Moving on… We now have our two matched groups — the CPM group of 5,000 women matched to the OB group of 5,000. (Often, when researchers match groups, they will have more subjects in one group than in the other; one group is usually self-limited in size, such as women who choose home birth, while the other has a much larger pool from which to draw, and having more women reduces the likelihood that differences are due to chance.) Starting early in each pregnancy, we track all outcomes, to see if there are any differences in mortality, morbidity, pregnancy outcome, etc., and especially whether any differences are statistically significant.
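Once the outcomes are in, the “statistically significant” question can be answered with a standard two-proportion z-test. A minimal sketch, with made-up death counts purely for illustration (they are not from any real data):

```python
import math

# Two-proportion z-test: are the event rates in two groups different
# by more than chance would explain? The counts below are invented
# solely to show the mechanics (10 vs. 18 deaths in groups of 5,000).

def two_proportion_z(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                     # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

z, p = two_proportion_z(10, 5000, 18, 5000)       # hypothetical counts
print(f"z = {z:.2f}, p = {p:.3f}")
```

With those invented numbers the difference looks large (nearly double the deaths) yet does not reach the conventional p < 0.05 threshold — which is exactly why eyeballing raw statistics is not enough.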
For instance, the rate of pre-eclampsia is anywhere from 2-5% of pregnancies. Many midwives suggest that their clients follow The Brewer Diet as a means of reducing this risk. Were this study to find a significant difference in favor of midwifery care, that would be a tremendous boon to women and their babies. If, however, the difference were not significant, or if it even favored obstetric care, then many midwives would have to rethink their current practice. The same goes for other health events, like gestational diabetes, because many women may start a pregnancy low-risk and then be moved into a higher-risk category because of previously unknown underlying health problems. One of my friends was like that — in her last pregnancy she ended up needing to be on heart medication, although she had never had any heart problems, nor any indication of problems, prior to her second pregnancy.
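As a rough illustration of what our paper study could detect: a standard sample-size formula for comparing two proportions shows that groups of 5,000 would be more than large enough to detect, say, a halving of the pre-eclampsia rate from the high end of the quoted range. The 5% and 2.5% rates below are my own assumptions for illustration, chosen from the 2-5% figure above:

```python
import math

# Sample size per group needed to detect a difference between two
# proportions, at the usual two-sided 5% significance level with
# 80% power (a common textbook formula, unpooled variance).

Z_ALPHA = 1.96   # standard normal quantile for two-sided alpha = 0.05
Z_BETA = 0.84    # standard normal quantile for power = 0.80

def n_per_group(p1, p2):
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((Z_ALPHA + Z_BETA) ** 2 * variance / (p1 - p2) ** 2)

print(n_per_group(0.05, 0.025))  # roughly 900 women per group
```

Roughly 900 women per group would suffice for that comparison, so matched groups of 5,000 give plenty of statistical power for an outcome as common as pre-eclampsia; it is the rare outcomes, like neonatal death, that strain even large studies.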
I’m not saying that there is some magic involved in midwifery care; I’m just saying it’s difficult to compare midwifery clients to obstetricians’ clients unless it is done prospectively. Obviously, women who end up with a midwife-attended home birth did not develop eclampsia, so merely looking at women retrospectively and noting a low or non-existent rate of pre-eclampsia among home-birthers means nothing. But if, with the same or similar client profile, fewer midwifery clients had adverse health events (like developing pre-eclampsia or gestational diabetes), or there were fewer miscarriages or stillbirths, then the care that midwives give their clients would merit a closer look. Without such a study, we’ll never really know.
Going back to the problem of taking retrospective statistics and trying to apply them to a prospective study, I notice a potential discrepancy — one whose answer can only be speculated about. Looking at the women in the BMJ study, we can see how many of them had miscarriages, stillbirths, intrapartum deaths, pregnancy complications necessitating referral to an OB, and neonatal deaths, because the researchers noted all of these things. Looking at government-collected statistics, however, we see only how many neonatal deaths there were, broken down by various criteria — race, maternal age, parity, gestational age at birth, etc. So we can see, for example, how many babies died who were born to white women at 37 gestational weeks and up. What we cannot see is how many low-risk women (who would have qualified for midwifery care and inclusion in the prospective group) ended up giving birth preterm. If women under midwifery care have a lower rate of preterm birth than an equivalent group under obstetric care, that would be an important finding.
But we don’t know that, because these types of studies are expensive, and obstetricians’ clients include women of all risk groups. Taking a group of CNM-attended births (such as you can extract from the CDC stats) and calling it a like comparison is equally invalid: although CNMs attend only low-risk births, different state laws may require “risking out” women under varying criteria (which is another problem with CPM-attended births — different criteria, or, in the case of non-certified midwives, perhaps none at all). So you may still end up with a “mixed bag”: ultra-low-risk births attended by CNMs compared against a possible mixture of low-risk, medium-risk, and even high-risk births (including twins, breeches, and women who absolutely refuse to go to the hospital) among CPMs and non-certified midwives.