Honestly, I find this stuff interesting as well, Links-- but it raises real questions in my own mind about what credentialing
actually does-- as opposed to 'merely' acting as gatekeeping...
The problem is that without the background to judge whether ideas (our own, or those we find appealing when promoted by others) are even
plausible, we can get into the weeds pretty quickly.
It takes a VERY intelligent and
secure clinician/researcher to actually explain why something is wrong/impossible/implausible, however, and that time is (mostly) better spent doing research or doctoring-- because there are just not enough of those people in the world.
That leaves a gap which is easily filled in the modern era-- with pseudoscience and fear-mongering, unfortunately.
I'm definitely not saying that any member of this particular community is in any way PRONE to those things-- we aren't, and it's one of our hallmarks that we kick the tires and truth-test things while partnering wholeheartedly with allopathic physicians we trust. It's just that there IS a difference between genuine expertise and what one determined patient or parent
can realistically come to understand deeply in a short amount of time. I definitely do NOT have the understanding of atopy that our allergist does-- I have
other expertise to bring to bear, but at some point, I have to listen and just
trust him when he tells me that I'm off in the weeds.
Otherwise this leads to places like chasing a vaccine cause for allergies (or anything else)-- a dead end, and one that has been a dead end for decades. The problem is that proving something CANNOT be true is way, way harder than proving that it's not
always true-- but that's science for us, in a nutshell. A single counter-example is enough to disprove a hypothesis-- or is it? More often, what is
actually going on is that one of our initial underlying assumptions wasn't correct to begin with.
For an illustration of what I mean by that, consider the following.
1. I believe that blue eyes are an autosomal recessive trait. Therefore, a parent with blue eyes can only contribute recessive 'blue' alleles to offspring.
2. Two blue-eyed parents may only have blue-eyed offspring.
The scientific method would then CHECK that hypothesis with sampling and observations--
3. I have found several families in which children of blue-eyed parents have dark brown eyes.
Now-- at this point, one might do one of two things. One might say "clearly that hypothesis is incorrect, as is the belief that engendered it. Blue eyes are not an autosomal recessive trait. Eye color may be an unstable genetic trait-- or perhaps it's not genetic at all, but is an environmental or epigenetic trait."
Would that be a correct interpretation?
NO.
The correct thing to do is actually to go back and examine my experimental design more closely--
1. what assumptions did I make when sampling?
- that all offspring are the result of DNA contributed from their parents
- I neglected to account for spontaneous mutation
- I didn't verify that offspring are BIOLOGICALLY the offspring of their parents-- maybe there are adoptees in my sample
2. AHA! That's it-- when I eliminated from my original sample those children who were not biologically the offspring of their parents, my hypothesis was suddenly completely CORRECT. Was it okay to eliminate that portion of the sample, though....
Well, of course it was. Because my hypothesis, at its heart, was about the genetic material inherited from biological parents.
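The inheritance logic above can be sketched in a few lines of Python. This is a toy single-gene model (I'm assuming one gene where 'B' for brown is dominant and 'b' for blue is recessive-- real eye color is polygenic, so this only illustrates the reasoning, not actual human genetics):

```python
import itertools

def phenotype(genotype):
    """Under the toy model, blue shows only with two recessive alleles."""
    return "blue" if genotype == ("b", "b") else "brown"

def possible_offspring(parent1, parent2):
    """Every genotype a child can inherit: one allele from each BIOLOGICAL parent."""
    return {tuple(sorted(pair)) for pair in itertools.product(parent1, parent2)}

# Two blue-eyed (bb) biological parents: only bb children are possible.
blue_parent = ("b", "b")
print({phenotype(g) for g in possible_offspring(blue_parent, blue_parent)})

# A brown-eyed carrier (Bb) paired with a blue-eyed (bb) parent, by contrast,
# can produce BOTH phenotypes-- so a mixed-eye-color family doesn't refute
# recessiveness at all.
carrier = ("B", "b")
print({phenotype(g) for g in possible_offspring(carrier, blue_parent)})
```

Note that the model's prediction only holds for alleles actually inherited from both biological parents-- which is exactly why excluding adoptees (and accounting for rare spontaneous mutation) is a valid sampling criterion, not cherry-picking.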
Now, there's a bit of a problem here-- a bit of a Goldilocks effect, if you will.
The problem is that if I'm
too invested in my experimental design, I won't see its flaws. This is why scientists and researchers rely upon colleagues, collaborators, and peer review to critique their work-- CONTINUOUSLY. And in spite of what most laypersons believe, this process is fairly ruthless and impersonal-- it
has to be to work. I don't value a colleague who never gives me USEFUL feedback-- feedback I can use to improve my work. Flattery isn't the purpose of such criticism.
The other problem, though, is that if a person simply isn't well-versed enough in basic genetics and biology, this all seems rather magical to begin with-- and THAT kind of person may send a researcher scrambling down a lot of rabbit holes, or at least explaining that no, autosomal recessive inheritance works like
so, and so that question isn't relevant in the first place because of how DNA replication works... This is the kind of person who won't see that excluding non-biologically-related families was a valid criterion, whereas excluding on the basis of, say, geographic location would not be.
This is why lay review is often
so frustrating. Picking up the vocabulary isn't always enough to provide a true framework of understanding (assuming that a consensus even exists). Without that framework, there's no basis for evaluating the basic experimental design-- no way to tease apart research articles that are bad or flawed from those that are well-considered and thorough. That's critical, because the validity of the conclusions rests upon that distinction-- not upon how WIDELY READ a paper is, nor how popular it is with the press.
So while this stuff is important, it's also important (IMO) for us as patients to know when the answer is "I don't know" versus "Nobody (yet) knows," versus "the answer isn't at all clear at the moment, but here are some ideas..."