If you were an allergy researcher ...

Started by LinksEtc, October 11, 2013, 08:16:10 AM


guess

It'll need some tweaks, later.

Minimal sample for a national survey is 2,000 responses. The survey will seek to measure loss of FAPE due to lack of enforcement. That's the pure quantitative part.

Then collect failed OCR investigations and resolutions to examine qualitatively. That number should be higher, but I know I can't personally manage more than 25 myself, so it will require volunteers. The qualitative material will also serve a secondary quantitative function: a set of identified criteria will be used to index occurrences of whatever it is to be indexed. I'd guess something like paperwork manufactured after the fact, retaliation, and bias. The suggestion made to me is cross-validation: have multiple volunteers perform the rating tasks so we can compute an interrater reliability and keep the alpha high.
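For what it's worth, the interrater check usually gets reported as Cohen's kappa (two raters) or Krippendorff's alpha (more than two, or missing ratings). Here's a minimal Python sketch of Cohen's kappa; the two volunteers and the case-file ratings are completely made up for illustration:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance agreement."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: sum over categories of each rater's marginal rates.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Two hypothetical volunteers coding the same 10 case files for "retaliation"
a = ["yes", "yes", "no", "no", "yes", "no", "no", "yes", "no", "no"]
b = ["yes", "yes", "no", "yes", "yes", "no", "no", "yes", "no", "no"]
print(round(cohens_kappa(a, b), 3))  # 0.8: substantial, though not perfect, agreement
```

For the real project, with more than two volunteers, Krippendorff's alpha from an established library would be safer than hand-rolling it.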

I will take care of the requirements and design. I'll need help in seeking responses; the higher the number, the better the sample. It'll be full of different types of bias, but we're not worried about publishing or issuing advice based on it. I'm only looking at measuring loss on very specific factors.

ajasfolks2

Is this where I blame iPhone and cuss like an old fighter pilot's wife?

**(&%@@&%$^%$#^%$#$*&      LOL!!   


LinksEtc

Tweeted by @skepticpedi

"The Odds, Continually Updated"
http://mobile.nytimes.com/2014/09/30/science/the-odds-continually-updated.html?emc=eta1&_r=0&referrer=

QuoteSome statisticians and scientists are optimistic that Bayesian methods can improve the reliability of research by allowing scientists to crosscheck work done with the more traditional or "classical" approach, known as frequentist statistics. The two methods approach the same problems from different angles.
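As a concrete picture of what "continually updated" odds look like, here is a minimal Beta-Binomial sketch. The prior and the batch counts are invented purely for illustration and have nothing to do with any real study:

```python
def update(alpha, beta, successes, failures):
    """Posterior of a Beta(alpha, beta) prior after observing binomial data."""
    return alpha + successes, beta + failures

def posterior_mean(alpha, beta):
    return alpha / (alpha + beta)

a, b = 1.0, 1.0             # uniform prior: no initial opinion about the rate
a, b = update(a, b, 3, 17)  # first batch: 3 events in 20 trials
print(round(posterior_mean(a, b), 3))   # 4/22, about 0.182
a, b = update(a, b, 2, 18)  # second batch revises the estimate again
print(round(posterior_mean(a, b), 3))   # 6/42, about 0.143
```

The Bayesian point in the article is exactly this shape: each new batch of evidence moves the estimate, and the prior's influence fades as data accumulates.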



LinksEtc

Tweeted by @michaelseid11

"Why Do Good Ideas Fail? This Diagram Explains"
http://www.forbes.com/sites/victorhwang/2014/10/01/why-do-good-ideas-fail-this-diagram-explains/

QuoteThe battle might be described as rigor versus intuition.
QuoteGood ideas fail because they cannot cross the cultural barrier between innovation and production.

LinksEtc

Re: Choosing a 2014-2015 flu vaccine

Quote from: CMdeux on October 09, 2014, 11:26:19 AM
That is what led me to perform this research.


With all due respect, reading a few things on the internet is NOT research.  Why not?

Well, because there is no way to refute the hypothesis that one is formulating with all of that reading, and one may quite easily ignore or discredit-- or perhaps simply never FIND-- material that doesn't support our presuppositions.  Genuine research involves being willing to TEST whether or not a hypothesis is plausible by allowing for conditions* in which the hypothesis would be proven incorrect. 

The problem with doing this kind of "research" one's self is that selection and perception biases are huge to begin with unless one has already had the kind of training that generally comes along with a terminal degree in a physical science or in medicine, and they are made even worse by the fact that we as parents are deeply emotionally invested and come to the process with what we WANT to believe must be so (that there must be a "reason" for "X" to have happened to us/our child). 

* Suppose that I believe that, just for example, lunar eclipses are caused by unseasonable temperatures.  How would my doing a lot of "internet research" allow me to DISprove such a hypothesis?  It probably wouldn't-- because think about how I would go about searching that hypothesis and supporting materials out to begin with-- I'd be LOOKING for evidence that supported my hypothesis.  Also, "unseasonable" is a pretty relative term.  The mechanism is plausibly connected, at least if I didn't know a lot about climate and astronomical observations, so I might not really see any NEED to hunt down material that directly contradicts my personal beliefs in any way. 

This is why scientists don't necessarily have much respect for laypersons doing "research" by the way-- it's not that we think that people are dumb, exactly, so much as that they consistently overestimate their own objectivity and metacognition, and fail to appreciate that a willingness to be catastrophically WRONG-WRONG-WRONG is part of the process.  An essential part of the process.

CMdeux

Here's what laypersons don't necessarily understand about how researchers operate.  And by this-- please understand that this is a permanent kind of condition or lifestyle for people like me.

We analyze EVERYTHING-- all the time-- and we pay attention to EVERYTHING, all the time-- and we don't necessarily put it into "I accept this as true" versus "I reject this as false" categories as rapidly as I think others do.

Oh, sure-- we truth-test everything continuously, and there is a lot more 'rejecting' than accepting happening.  For example, I don't believe that homeopathy is based in reality, and I therefore reject it as "science" because it violates several scientific principles that are pretty well established as theories and in some cases even as natural laws.

Water doesn't have "memory" to speak of, and I know that this is so as well, given that I understand an awful lot about how atoms behave with one another, and how those atoms in particular behave.  I've seen the evidence of that, and it's been studied in a lot of different ways, by a lot of different people.  That means that homeopathy's basis is inherently impossible.  Ergo, the conclusions that rest on that supposition are also flawed-- or at least only coincidentally correct.  As a scientist, that amounts to the exact same thing-- because "sh*t happens" isn't an explanation of WHY it happens.

So.

Scientists are the kind of professional skeptics that a lot of other people find intensely irritating, because we are often critical or pervasively suspicious, no matter how excellent our social skills. 

Anyway-- my point is that we read VERY WIDELY in areas that we are interested in, and sometimes we just read everything-- regardless of basic interest level.  I read a volume that most people find eye-watering, and I sort of collect information.  I'm a hoarder, intellectually speaking.

Which is why I know some things about Richet that didn't make it into his 1913 Nobel speech, btw.   ;)

This is also why when I read parts of a journal article (or press release), I can have little niggling things bug me about the authors' background information, if it seems to be, um-- cherry-picked in some obvious ways.  Or if it ignores something that seems like an elephant in the room to me, having done background reading in the field. 

As a person I find reading news stories about scientific or medical breakthroughs pretty exciting.  As a scientist, my enthusiasm is usually tempered by a healthy streak of "extraordinary claims require extraordinary evidence," and if I'm interested or skeptical enough, I dig further until I satisfy that curiosity in a meaningful way.  Often that means looking up a LOT of vocabulary, previous research studies, surfing PubMed, etc.  If I'm not that curious, I sort of file it away for future reference if I someday become more interested in something related.




I think that we are on the verge of entering a new diagnostic era in clinical practice-- one that depends not just on labeling symptoms, but on identifying the actual mechanism.  It is now becoming clearer, for example, that "asthma" patients, as a group, are not a single group, and this is why therapies for that condition have problematically worked for some patients and not for others.  The therapies intervene in particular, defined mechanistic pathways.  If those mechanisms are only true for some patients, then intervening will only be effective for some of them, as well.

There was a time when "diabetes" was viewed as a single condition, as well.  Now we know that it's at least two, and probably more like four when one teases apart environmental/metabolic/lifestyle triggers from those which are primarily genetically predetermined. 

I suspect quite strongly that food allergy and atopy are eventually going to go the same way-- there is almost certainly a group of patients for whom epigenetic factors are critical in disease mechanism.  But there is also a group for whom genes are almost a destiny, too. 

Thus clinical practice is becoming more science-based, in some very significant ways, as mechanistic understanding improves.  This means two very important things, though:  1.  patients have to be VERY careful extrapolating from literature which is older, since it may/may not even refer to the same patient group on a molecular/mechanistic plane, and 2. at the moment, you may simply have no way to know whether or not a particular study applies to you personally as a patient.  This is why I pay VERY particular attention to inclusion/exclusion criteria for clinical studies of any kind.  If particular patient subsets are over- or under-represented (by ethnicity, gender, age, or clinical history features) then any conclusions cannot properly be applied to all patients in that larger group.

This is a known problem in clinical research, btw-- female and pediatric subjects generally introduce very difficult confounding factors into medical or pharmaceutical research, and many researchers avoid including them for that reason.   It's not that they are ignoring the plight of women and children, so much as that they are studying a treatment group that is more likely to result in significant results that can be published-- the "noise" generated in female and pediatric populations makes it less likely that a robust result will be obtained.   
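The subgroup problem above can be made concrete with a toy simulation: if a therapy's mechanism applies to only part of a mixed cohort, the trial-wide average effect shrinks toward zero and is easier to lose in the noise. Every number here is invented for illustration:

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

def mean_improvement(n_patients, responder_fraction, effect):
    """Average improvement when the therapy helps only a mechanistic subgroup."""
    total = 0.0
    for _ in range(n_patients):
        is_responder = random.random() < responder_fraction
        total += (effect if is_responder else 0.0) + random.gauss(0, 1)
    return total / n_patients

# Therapy gives responders a 2-point benefit, non-responders nothing.
mixed   = mean_improvement(10_000, 0.3, 2.0)  # undifferentiated "asthma" cohort
matched = mean_improvement(10_000, 1.0, 2.0)  # mechanism-matched cohort
print(round(mixed, 2), round(matched, 2))  # mixed lands near 0.6, matched near 2.0
```

The mixed cohort's average effect is diluted to roughly the responder fraction times the true effect, which is exactly why inclusion criteria matter so much when reading a study.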

Anyway-- my long-winded meandering way of saying that being a research-minded individual means knowing a LOT of very deep background and some of the unspoken/unwritten backdrop against which researchers and clinicians are operating at any given time.  That sometimes includes prevailing "common wisdom" about conditions under study.  For a long time in the 20th century, asthma was seen by many laypersons as being more or less a psychiatric condition, believe it or not.    Understanding that allows me to read a paper written in 1965 in a very different light. 





In summary:  being a scientist means simultaneously understanding that it's all connected, all right... but it's DEFINITELY not a conspiracy, nor is 'connected' synonymous in any way with "causative."  A lot of what we know (or have known throughout human history) is simply, profoundly incorrect in an objective sense.  Take our reluctance to (as a species) accept heliocentrism as a theory as an instructional example there-- and to reject an earth-centered view of reality as we know it.   We hang on to what we THINK is true, sometimes even in the face of evidence that it can't be so.   


Resistance isn't futile.  It's voltage divided by current. 



LinksEtc

CM, feel free to lock this thread if you feel that maybe it is pushing things too far, if you are worried about it leading people astray.   :heart:

---------------------------------


Tweeted by @Aller_MD

"How the Nobel Prize helps us find out where we are, in every sense"
http://blogs.biomedcentral.com/bmcblog/2014/10/09/how-the-nobel-prize-helps-us-find-out-where-we-are-in-every-sense/?utm_content=bufferf28f9&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer

QuoteAt the beginning most people were quite skeptical at the idea that you could go deep inside the brain and find things which corresponded to aspects of the environment...I think it's taken a while, but there were some people early on who accepted it and of course I'm grateful and of course now the field has blossomed.
QuoteBeyond the physiology and medicine prize, a lot of commentators also highlighted that the categories for the scientific fields didn't reflect that some important findings don't fit neatly into the category of one science or the other.




LinksEtc

Re: Choosing a 2014-2015 flu vaccine


Quote from: Linden on October 10, 2014, 12:34:25 PM
Unfortunately, you cannot do DIY research any more than you can do DIY brain surgery.  You need doctoral training, just as CMDeux said.  If I may elaborate on her points further:

1) You need a Ph.D.  in a related subject matter.  Otherwise you won't have the content knowledge to be able to fully understand the logic behind the hypothesis being tested or to assess whether the conclusions drawn by the author are supported by the study. 

2) You need methodological training.  Buckets of it.  You need to be trained on experimental methods and other research methodology so that you can assess whether the study's methodology is adequate to truly test what the authors claim to test.  And so that you can understand the *limitations* on the findings.  Many studies suffer from limited or poor design.

3) You need access to a Ph.D.-level statistician.  Preferably one with training in the statistical methods being employed by the study.   I can't even tell you how frequently statisticians read published studies and shake their heads.  They find MAJOR flaws in the statistical methods used.  They use words like "all wrong" and "you can't do that with the data". 

Okay, so what can we laypeople do? Is there a role for us?  I think there is.  We can raise concerns, and we can ask questions.  We can wonder whether a particular line of research is suggesting something. We can talk to Congress and to NIH about what we think the research priorities should be.  And we can post on the "if you were an allergy researcher" thread.


APV

#145
Quote from: LinksEtc on October 10, 2014, 02:58:33 PM
Re: Choosing a 2014-2015 flu vaccine


Quote from: Linden on October 10, 2014, 12:34:25 PM
Unfortunately, you cannot do DIY research any more than you can do DIY brain surgery.  You need doctoral training, just as CMDeux said.  If I may elaborate on her points further:

1) You need a Ph.D.  in a related subject matter.  Otherwise you won't have the content knowledge to be able to fully understand the logic behind the hypothesis being tested or to assess whether the conclusions drawn by the author are supported by the study. 

2) You need methodological training.  Buckets of it.  You need to be trained on experimental methods and other research methodology so that you can assess whether the study's methodology is adequate to truly test what the authors claim to test.  And so that you can understand the *limitations* on the findings.  Many studies suffer from limited or poor design.

3) You need access to a Ph.D.-level statistician.  Preferably one with training in the statistical methods being employed by the study.   I can't even tell you how frequently statisticians read published studies and shake their heads.  They find MAJOR flaws in the statistical methods used.  They use words like "all wrong" and "you can't do that with the data". 

Okay, so what can we laypeople do? Is there a role for us?  I think there is.  We can raise concerns, and we can ask questions.  We can wonder whether a particular line of research is suggesting something. We can talk to Congress and to NIH about what we think the research priorities should be.  And we can post on the "if you were an allergy researcher" thread.

FWIW,
I wrote a similar post to the immunology list at the National Institute of Allergy and Infectious Diseases/National Institutes of Health.
Here's the response I got from Dr. Matzinger:
https://list.nih.gov/cgi-bin/wa.exe?A2=ind1305&L=immuni-l&F=&S=&P=37286
There was no other response.

Dr. Calman Prussin also of the NIAID/NIH wrote in an email to me:
"Could parenteral exposure to proteins in vaccines cause allergy? Sure. Of course it is possible. Parenteral injection of many proteins can be done in such a way as to induce IgE."

Vaccine engineering or tinkering?

I am an engineer. Before I design a product, I write a specification. Apparently, the FDA does not. And neither do the pharmaceutical companies.
The FDA wrote to me:
"There is not, as you describe it, an FDA determined safe amount of a potentially allergenic ingredient contained in a vaccine. The FDA reviews vaccine composition in its entirety to ensure the safety and efficacy of the vaccine."

Sanofi Pasteur wrote to me:
"There is no specification for residual egg protein (expressed as ovalbumin) for influenza vaccines in the United States, nor is testing of the final product required for ovalbumin content."

In other words, anything goes! The FDA and the vaccine makers do not seem to be engineering a product, they are tinkering with it. And they are tinkering with our lives.


Survival guide until they make vaccines safer

1. Avoid vaccines when possible. Example: lifestyle changes to avoid the HPV vaccine.
AVOIDING OTHER VACCINES MAY BE DANGEROUS.
2. Find a vaccine that has the smallest number and amount of undesirable proteins and toxins.
3. No more than one vaccine a month. Allow the body to deal with one set of allergens and toxins at a time.
4. Avoid C-sections when possible. If a C-section is unavoidable, the baby may have to be artificially contaminated with the mother's germs.

Postnatal development of intestinal microflora as influenced by infant nutrition. J Nutr 138:1791S-1795S, 2008.
http://ajpregu.physiology.org/lookup/ijlink?linkType=ABST&journalCode=nutrition&resid=138/9/1791S&atom;=/ajpregu/304/12/R1065.atom

Without such contamination, the baby is more likely to be primed for vaccine induced allergies and other allergies as described here:

The Impact of Cesarean Section on the Relationship Between Inhalant Allergen Exposure and Allergen-Specific IgE at Age 2 Years. http://www.jacionline.org/article/S0091-6749(12)03130-2/fulltext
5. Demand safe vaccines from the FDA/CDC and Congress.


LinksEtc

#148
Tweeted by @PCORI

"What the Research Community Can Learn from Patient and Stakeholder Engagement"
http://www.pcori.org/blog/what-research-community-can-learn-patient-and-stakeholder-engagement?utm_content=buffer4d0c7&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer

QuoteThe inherent biases of investigators and funders color what they perceive as key questions, research priorities, and measures of success. Patients and stakeholders provide a check on those biases and move the research agenda toward questions and evaluation metrics that matter most to them.


&


"What We Mean by Engagement"
http://www.pcori.org/content/what-we-mean-engagement?utm_content=buffer88fa9&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer

QuoteBy "engagement in research," we refer to the meaningful involvement of patients, caregivers, clinicians, and other healthcare stakeholders throughout the research process—from topic selection through design and conduct of research to dissemination of results.





LinksEtc

Tweeted by @HeartSisters

View From Nowhere
On the cultural ideology of Big Data

http://thenewinquiry.com/essays/view-from-nowhere/

Quote"What science becomes in any historical era depends on what we make of it"  —Sandra Harding, Whose Science? Whose Knowledge? (1991)

QuoteBig Data can be used to give any chosen hypothesis a veneer of science and the unearned authority of numbers. The data is big enough to entertain any story.
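That last point is easy to demonstrate with synthetic data: run enough noise-only comparisons and a predictable fraction will clear the usual significance bar by chance alone. A rough Python sketch, where a crude fixed threshold stands in for a proper t-test:

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

def mean_diff(xs, ys):
    """Difference of sample means between two groups."""
    return sum(xs) / len(xs) - sum(ys) / len(ys)

# 200 "hypotheses", each comparing two groups of 30 drawn from pure noise.
# For unit-variance noise, the mean difference has standard error
# sqrt(2/30) ~= 0.258, so |diff| > 1.96 * 0.258 ~= 0.51 is a ~5% cutoff.
hits = 0
for _ in range(200):
    xs = [random.gauss(0, 1) for _ in range(30)]
    ys = [random.gauss(0, 1) for _ in range(30)]
    if abs(mean_diff(xs, ys)) > 0.51:
        hits += 1
print(hits)  # by construction, expect roughly 5% spurious "findings"
```

With a big enough dataset and enough candidate hypotheses, some of them will always "pass", which is exactly the veneer-of-science problem the essay describes.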


