otherhealth.com > Homeopathy > Research and the Scientific Validity of Homeopathy
#91 - 17th January 2005, 09:44 AM
Carn (Member, Oct 2004, germany, 98 posts)

Quote:
Originally Posted by bwv11

sooo, the problem is that research scientists are unable to design an adequate protocol because they don't understand the homeopathic paradigm; aaand, that homeopaths can't design an adequate protocol because they are lousy researchers.
But if research scientists are all unable to understand homeopathy, then they can never design adequate protocols. That means some homeopaths have to learn how to conduct research (in the scientific sense); otherwise, there can never be evidence for or against homeopathy.

And what goes wrong with the general DBPC idea of giving half the patients of a homeopath a placebo and the other half what the homeopath prescribed? After a year the homeopath ranks the patients according to the improvement they received. The remedy patients should score far better, if there is a difference between remedy and placebo. What problems are there with that, except ethical ones?

And if you argue that a homeopath who knows half his patients do not get a remedy is then hindered in assessing and treating all of them, then what about a test where even the homeopath does not know a test is running?
(Clearly this would be difficult to arrange and nearly impossible to do without breaking laws, but please ignore those problems for the moment.)
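The design Carn proposes can be sketched numerically. Nothing below comes from the thread except the idea of ranking all patients by improvement after a year; the ranks, group sizes, and the plain permutation test on rank sums are invented for illustration, not any specific published protocol.

```python
# Sketch of the proposed trial: half the patients get placebo, half the
# prescribed remedy; after a year the homeopath ranks everyone by improvement
# (1 = most improved).  A rank-based test needs no "yes/no" outcome per
# patient, only the ordering.  All numbers are invented for illustration.
import random

def rank_sum_test(remedy_ranks, placebo_ranks, n_shuffles=10_000, seed=0):
    """Permutation test on rank sums: how often would a random split of the
    same ranks give the remedy group a rank-sum at least this favourable?"""
    rng = random.Random(seed)
    observed = sum(remedy_ranks)
    pooled = list(remedy_ranks) + list(placebo_ranks)
    k = len(remedy_ranks)
    hits = 0
    for _ in range(n_shuffles):
        rng.shuffle(pooled)
        if sum(pooled[:k]) <= observed:  # lower rank-sum = more improvement
            hits += 1
    return hits / n_shuffles

# Hypothetical ranking of 20 patients.  If the remedy worked, remedy
# patients should crowd the top of the list, as they do here.
remedy  = [1, 2, 3, 5, 6, 8, 9, 11, 13, 15]
placebo = [4, 7, 10, 12, 14, 16, 17, 18, 19, 20]
p = rank_sum_test(remedy, placebo)
print(f"p ~ {p:.3f}")  # small p -> this ranking is unlikely if groups do not differ
```

This is essentially a hand-rolled Wilcoxon rank-sum test; the point is only that the homeopath never has to reduce a patient to a yes/no judgement, just to an ordering.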

Quote:
Originally Posted by bwv11
its a tough crunch, but it's the hand we've been dealt, as i see it. as my argument is primarily with youse guys, that has been and remains my focus: you underestimate how difficult it is to cover the variety of variables one encounters in clinical practice, not just homeopathy, but psychotherapy, which is the field in which i first developed strong anti-research sentiments. it is not that homeopathy is somehow, wierdly 'immaterial,' but that its material base is far more dynamic and multi-dimensional than the allopathic model. when a statistician asks, 'what remedy will cure irritable bowel syndrome - or high blood pressure - or any other (allopathic) disease state, the answer is: none of them, if wrongly prescribed, and a great many of them, if correctly prescribed to the patient instead of to the disease.
Therefore sceptics try to find situations where it is simpler, but i have the feeling they do not receive any help from homeopaths and are even hindered by them with a "you do not know anything about homeopathy, so it's not worth discussing with you".

An example of that search is in the current thread at Randiland: a sceptic suggests testing the ability of arnica to help heal scratches. He suggests this because arnica is very often said to have that ability. I even know a kindergarten teacher who gives small children half a dozen arnica C200 pills when they injure themselves painfully, assuming it helps the healing process and reduces the pain.
Now either this is complete nonsense from a homeopathic pov, in which case where are the homeopaths arguing publicly against such practice, or it is mostly correct, but then a DBPC is easy and straightforward, with no excuse in case of success or failure.

Another example is my thread about animal tests ("New playground"); with animals, several problems of tests with humans can be avoided, but apparently that idea is not interesting.
Any idea why?

Quote:
Originally Posted by bwv11
your example of breaking down statistics in terms of degrees of pain reduction and the like, is still enormously simplistic in relation to the problems posed in assessing treatment progress in vivo as well as retrospectively. even with a good understanding of homeopathy, the factors to compensate for are quite evasive. my expectation would be, still, that in some respects - possibly lab tests - end results should corroborate the improvement, regardless how labyrinthine the path taken.
I do not argue that it is simple; i only tried to show that even statisticians are not satisfied with "yes/no". I hope this gives you an idea that sceptics do get fed up with constant complaints from you about statisticians who press everything into "yes/no". This is not understood in the same way as "arguing from this to that".
We can also deal with answers other than "yes/no" and can count from this to that.
The only question where a "yes/no" answer is the only satisfying result is "is a sub-Avogadro homeopathic remedy different from the carrier substance?". Anything but yes or no does not make sense for that question.
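The "sub-Avogadro" cutoff is plain arithmetic and can be made concrete. Assuming (my assumption, not stated in the thread) that the mother tincture contributes about one mole of original substance, the expected number of its molecules drops below one around C12, so a C200 pill like the kindergarten example above is far past the cutoff:

```python
# Each centesimal (C) step dilutes 1:100, so a CN potency carries a factor
# of 100**-N.  Starting from an assumed 1 mol of mother tincture, the
# expected count of original molecules falls below one near C12.
AVOGADRO = 6.022e23

def expected_molecules(c_potency, starting_moles=1.0):
    """Expected number of original-substance molecules after N centesimal steps."""
    return starting_moles * AVOGADRO * 100.0 ** (-c_potency)

for c in (6, 10, 12, 30, 200):
    # for C200 the float simply underflows to zero
    print(f"C{c:<3}: ~{expected_molecules(c):.3g} molecules")
```

The exact crossover depends on the assumed starting amount, but any plausible figure lands it in the low teens of C potencies, which is why the question above really is a clean yes/no.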

Quote:
Originally Posted by bwv11
Regarding the question, is it possible to understand homeopathy without being a homeopath, the answer is, again, not a yes/no, either/or kind of thing. Hans, for example, possesses a good knowledge of the basic homeopathic literature, yet still makes significant misinterpretations of process, mistakes that, if they were presented as actually representative of homeopathic doctrine, would make me doubt homeopathy, too. In fact, I've certainly made similar and sometimes really stupid mistakes, frankly ... but I am alert to the incongruities in relation to internal consistency, so try to correct them.
I do not know if you noticed, but i made a slight mistake in using non-homeopath and homeopath as equivalents of disbelievers and believers, and "understanding" was meant in the sense that someone can describe homeopathy accurately without homeopaths disregarding his opinion because he does not understand it, as for example LiseAnnan does with a guy named Steven Barrett. (//www.otherhealth.com/showthread.php?t=4198&page=2)

Though i would be interested in how she recognized that Steve Barrett does not know enough about homeopathy. Do you know how to recognize that someone does not know much about homeopathy?
Can a disbeliever understand homeopathy?

Carn
#92 - 17th January 2005, 11:22 AM
Carn (Member, Oct 2004, germany, 98 posts)

Quote:
Originally Posted by bwv11
btw, quote: '...if dbpcs are better repeatable than clinical case studies...'


i still prefer the case study because dbpc's:

1. are not perfect, either - to a far greater degree than is generally recognized;
The question is which of the two is more imperfect.

Quote:
Originally Posted by bwv11
2. they are worse, by far, at explanation of effect in complex cases;
How do you know that the explanations of case studies in complex cases are correct?
E.g. the case study at http://www.hpathy.com/casesnew/khopa...physagria1.asp concludes that the improvement was due to 5 doses of some remedy (actually only for one of them is it named when it was given). But i fail to see the point that disproves the possibility that she mainly improved due to the psychological effect of finally talking about all the stuff that worried her through the years, and her husband finally getting an idea what her problems are.
Hey, you know far more about that than i; read the case study and tell me, could such an improvement be initiated and enhanced solely by psychotherapy?

Quote:
Originally Posted by bwv11
3. statistical negation of causal analysis does not provide an alternative explanation, so the issue is left pretty much unresolved except in the simpler cases of relatively unmediated data, such as 'does aspirin reduce pain?'; the similar looking question, that throws skeptics way off target, is really inadequate: 'does arnica reduce pain?'
Whether something gives an alternative explanation or not is only relevant for the well-being of our world views; the crucial thing is correct/wrong (or any nasty state in between).

Quote:
Originally Posted by bwv11
4. most important, dbpc's more closely approximate the ideal of repeatability, but repeatability is not synonymous with reliability or 'credibility' (my term, used to reference soundness of conclusions on rational grounds, as distinct from mathematical or statistical validity). if you measure an object with a yardstick that is inaccurately marked, the numbers you get will be the same every time (repeatable experiment), and, of course, will be wrong every time.
That is the reason why independent confirmation of any result is fundamental. You give your yardstick to someone else; he closely looks for errors, can create his own yardsticks, compare them, and measure the stones as well. This gives some chance of recognizing a wrong marking, though there can never be a guarantee. Especially if he finds something wrong, he can create a yardstick without that mistake, remeasure all the stones, and if the results differ from yours, he knows that one of the yardsticks is bad. Then you at least know that you have to research the making and marking of yardsticks further, to get better ones.

But with case studies you have a method of creating a new individual yardstick for every new rock, and the same rock is never measured twice. If your method is flawed, then you will measure wrong each time as well. But you have the further disadvantage that even if someone has an idea what is wrong with your method, he could never show whether his idea really changes something, because he can neither remeasure the same stone, nor create a yardstick for the same stone and compare it to your yardstick for that stone. A fundamental flaw could be in there without any chance of detecting it.
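The yardstick argument fits in one short sketch: a systematic error is perfectly repeatable and still wrong every time, and it only shows up when an independently made instrument disagrees. The 5% mis-marking and the length of 36.0 are invented figures, purely illustrative.

```python
# Repeatable is not the same as reliable: a yardstick whose markings are off
# by a constant factor returns the identical wrong number on every reading.

def measure(true_length, scale_error):
    """Reading from a yardstick whose scale is off by a constant factor."""
    return true_length * (1.0 + scale_error)

true_length = 36.0
readings = [measure(true_length, scale_error=0.05) for _ in range(5)]

spread = max(readings) - min(readings)   # repeatability: zero spread
bias = readings[0] - true_length         # reliability: a constant error

# An independently calibrated second yardstick is what exposes the problem:
# the two instruments disagree, so at least one of them must be mis-marked.
independent = measure(true_length, scale_error=0.0)
print(f"spread={spread}, bias={bias:.1f}, other yardstick reads {independent}")
```

This is exactly Carn's point about independent confirmation: within one instrument's readings the flaw is invisible; only comparison across independently built instruments can reveal it.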

Quote:
Originally Posted by bwv11
to justify placebo effect, you need a useful definition of placebo, which i have yet to find enunciated in any discussions of the problem with skeptics. clinical assessment of placebo response, by comparison, introduces parameters such as specificity of response, independently produced confirming evidence (data produced by the patient in various contexts without apparent intent to substantiate claims related to the 'cure' of a symptom, for example), permanence, etc etc.
You know, that is the crux: we do not know exactly how the placebo effect looks in practice, and therefore case studies can never accurately take its effect into account.

But i do have a definition of the placebo and verum effects, albeit not a useful one in practice:
Make 2(*) perfect copies of the universe at the moment the patient has the idea to go to the doc, so there are three universes in total.
In the first universe you do not change anything.
In the second you change the treatment so that one component can have only a very, very small effect upon the patient (e.g. replace remedies with water/sugar/alcohol, or let the patient keep no diet, or ...), without patient or doc knowing about this (yep, for acupuncture it is pretty hard to determine the placebo effect).
In the third you remove the component entirely, with both the patient and the doc knowing it is not there, but not knowing it is an experiment.
The difference in objective and subjective health criteria between the patients in the first two universes is the verum effect of this component; the placebo effect is the difference between the patients in the second and the third universe.

((*) Obviously, if the universe is non-deterministic, you need to make several thousand copies for each case and statistically analyse the differences, but those differences are still verum and placebo.)

Probably you already noticed: an approximation of the above process is the DBPC.
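The three universes cannot be built, but their approximation by randomisation can be: instead of copying one patient, a trial averages over many patients per arm. The sketch below invents all the effect sizes (baseline 50, placebo +8, verum +3) and the noise level purely to show that the two group differences recover the two defined effects.

```python
# Three-arm approximation of the three-universe definition: since we cannot
# copy a patient, we randomise many patients into three arms and let group
# averages stand in for the per-patient differences.  All numbers invented.
import random
from statistics import mean

rng = random.Random(42)

BASELINE = 50.0   # assumed average outcome with no treatment at all
PLACEBO  = 8.0    # assumed extra improvement from believing one is treated
VERUM    = 3.0    # assumed extra improvement from the component itself

def outcome(gets_component, believes_treated):
    score = BASELINE + rng.gauss(0, 5)          # individual variation
    if believes_treated:
        score += PLACEBO
    if gets_component:
        score += VERUM
    return score

n = 5000
arm1 = [outcome(True,  True)  for _ in range(n)]  # untouched universe
arm2 = [outcome(False, True)  for _ in range(n)]  # component covertly removed
arm3 = [outcome(False, False) for _ in range(n)]  # component openly removed

est_verum   = mean(arm1) - mean(arm2)   # what a DBPC's verum-vs-placebo gap estimates
est_placebo = mean(arm2) - mean(arm3)
print(f"estimated verum ~ {est_verum:.2f}, placebo ~ {est_placebo:.2f}")
```

An ordinary two-arm DBPC runs only arms 1 and 2, which is why it estimates the verum effect but says nothing about the size of the placebo effect itself.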

Quote:
Originally Posted by bwv11
in a broader context, it is somewhat fantastical to assume that the volumes - mountains - of clinical evidence produced over 2 centuries, on every continent in the world, are all the results of a shared delusion induced by superior bedside manner. mass delusions, or what-have-you, are certainly possibilities, but to induce such a generalized response to the same stimuli in such dramatically variable cultural and environmental contexts as are represented by people from all over the world, new york to itchitanga-somewhere, and blah-blah-someplaceelse, stretches credulity. cultural artifacts are not ordinarily so liberally exported, or imported.
As i said above, there is a chance that homeopathic case studies mostly make the same mistake and nobody has realized it, due to lack of repeatability and comparability.

And delusions can cross cultural borders; e.g. UFOs are now reported worldwide, astrology has long since spanned the world, and acupuncture, feng shui and i don't know what other asian stuff is running through Europe and the US (i do not say it is all delusion, but certainly some of it is). I can start a thread about that on Randiland; people there will know more examples, if you need them. I guess the surprise would be greater if some delusions could not cross cultural borders; the only difference i expect is the speed.

And the point about superior bedside manner is actually an advantage in spreading homeopathy, if it is a delusion: homeopaths are really nice and you feel well when being treated by them. That is a definite marketing advantage over the nasty conventional med who does not even look at you while asking his students whether they think your leg has to be amputated.

Quote:
Originally Posted by bwv11
and, as intellectually fuzzy as my spiritually inclined colleagues can be, much to my chagrin, i have to tell you that in the precision and detail of their observations, many of them are simply quite outstanding. the edifice that hahnemann built is monumental; there are reasons why we still honor his name, and the names of darwin and freud, after all, and even einstein, who was no lover of statistics either, after all, though of course no patzer at the game, either

Do you mean observations regarding the health of a patient?
How did you notice this?

Also, good observation alone helps little if the reasoning afterwards gets screwed up; both have to be good.

About Einstein: there is something called Bose-Einstein statistics. Einstein did use statistics when it was needed, and i'm very sceptical that he would have avoided statistics if he had ever tried to find out the truth about homeopathy, as medicine is one of the fields that screams for statistics even when looking at very basic things, and therefore especially when looking at complicated things.

Quote:
Originally Posted by bwv11
after all, as you might imagine, i'm quite happy to argue on the basis of authority, as you might argue on the basis of current fashion in science: it is i assure you quite comforting, facing the unknown, to be able to say, 'my daddy (well, you know - einstein) is smarter than your daddy (you know, any one you care to mention!).'
Einstein is smarter than any of those?

Planck(18), Bohr(22), de Broglie(29), Heisenberg(32), Dirac(33), Schroedinger(33), Pauli(45), Yukawa(49), Bloch(52), Yang(57), Landau(62), Feynman(65), Bethe(67), Bardeen(72), Cooper(72), Schrieffer(72), Esaki(73), Giaever(73), Josephson(73), Glashow(79), Weinberg(79), Salam(79), 't Hooft(99), Abrikosov(03), Ginzburg(03).

(In brackets, the year of the nobel prize; all of them can be named as strong supporters of a non-deterministic view, based on the certain knowledge that they either extended QM or drew conclusions from QM that directly rely upon its randomness. With a lot of nobel prize winners i was not certain into which faction they could be put, but i guess many more are QM supporters, as is the vast majority of today's physics researchers.)

Hope that is enough arguing from authority for the moment.
And my argument is based, at worst, on a "current" fashion that is at least 60 years old.

Carn
#93 - 17th January 2005, 11:26 AM
MRC_Hans (Senior Member, Sep 2003, Denmark, 1,218 posts)

Quote:
Originally Posted by Bach
the problem is that research scientists are unable to design an adequate protocol because they don't understand the homeopathic paradigm; aaand, that homeopaths can't design an adequate protocol because they are lousy researchers
Exxcuse me:

1) Scientists design tests for things they don't understand all the time. If we could only test things we understood, we would not have gotten anywhere at all.

2) Why should homeopaths be lousy researchers? Research methodology is relatively simple text-book stuff. And IF you are right, then why exactly should we believe those lousy researchers when they say homeopathy works ?

Hans
__________________
You have a right to your own opinion, but not to your own facts.
#94 - 17th January 2005, 11:56 AM
Carn (Member, Oct 2004, germany, 98 posts)

Quote:
Originally Posted by MRC_Hans
Exxcuse me:

1) Scientists design tests for things they don't understand all the time. If we could only test things we understood, we would not have gotten anywhere at all.
I, at least, was talking about the situation where the limited understanding does interfere with the test setup.
E.g. if i claimed that some goats can fly if they are at a beach in the night, with a star constellation fitting the birth sign of the goat, and if the energetic vibrations of the sand are a perfect match for the rolling of the waves, with the energetic vibrations of the wind being in phase with the spirit of the goat,
then a researcher who wanted to test my claim would need to know what sort of astrology i'm following, identify the correct constellations for some goats, and would need to know what i mean with that babble about energetic vibrations and so on.
Otherwise he could not design the correct test.

Of course the simpler solution is to get hold of me, put me on a beach together with a pack of goats and leave me a mobile, so i can call him when a good night comes.
But this does not seem to work with homeopathy, though so far i'm not certain why homeopaths think it doesn't work (e.g. let homeopaths do all the diagnosing, prescribing and weighing stuff in a DBPC, everything except the statistical analysis).

Most simple of course would be to tell me to shut up and come back when i have made a reliable test of my flying goats.
Something that has not borne much fruit with homeopathy so far.

Quote:
Originally Posted by MRC_Hans
2) Why should homeopaths be lousy researchers? Research methodology is relatively simple text-book stuff. And IF you are right, then why exactly should we believe those lousy researchers when they say homeopathy works ?

Hans
Thanks, i was uncertain how to put that question nicely; now the blame falls upon you.

Carn
#95 - 17th January 2005, 03:30 PM
bwv11 (Senior Member, May 2002, USA, 1,020 posts)

hi carn,

i forgot about arguing on the basis of verbosity! in which case you win, even as compared to moi. but i will skip around and answer in snippets.

just one for now, the faith you put in hans' witty inquiry, why we should trust homeopaths, who are lousy researchers, regarding homeopathy:

a. they are not trained to be researchers
b. 'you' guys are, and still muck it up
c. research methodology is simple, perhaps (which is arguable), but matching it up appropriately to the measurement of real world objects is not
d. it all assumes anyway that you understand the subject, which you guys often claim is irrelevant, but you seem to acknowledge the problem, which is refreshing:

"a researcher, who would want to test my claim, would need to know what sort of astrology i'm following, identify the correct constellations for some goats and would need to know what i mean with that babble about energetic vibrations and so on.
Otherwise he could not design the correct test."

which is largely what i've been saying about the failure of dbpc to measure well-documented effects, because you're using a yardstick that has been marked up randomly, or at least with numbers that reference processes in conventional medicine, that are not germane to processes in homeopathy.

bach
__________________
"The need to perform adjustments for covariates...weakens the findings." BMJ Clinical Evidence: Mental Health, (No. 11), p. 95.... It's that simple, guys: bad numbers make bad science.


#96 - 17th January 2005, 03:32 PM
bwv11 (Senior Member, May 2002, USA, 1,020 posts)

hi carn,

yeh, einstein's smarter than all of them - put together.

ADD: that's why he's einstein, and they're not. they're not newton, either, nor copernicus. and i think in any alternative universe, most of them never could be. and don't be mistaken, i am not calling them stupid, just not einstein - i mean, you could take that as a 'slam' on all the other brains you've listed, but give me some credit: i'm not trying to dis' planck and company, after all!

you will also remember, i acknowledged daddy einstein was no 'patzer' at statistics himself, so you didn't really have to make that point, but still he came down on the side of determinism, and for some purposes at least recognized that statistics was a 'make do' strategy, since we were not in a position to actually observe the actual object of our desire.

as far as 60 years of fashion - a blink of the eye, and a few grains of sand still disturbing your tranquility anyway, alternate visions still hanging in there.


bach

p.s. sorry for the late (several hours later) addition, but i didn't want to post it a second time - though maybe i should have - sorry for any confusion.
#97 - 17th January 2005, 03:39 PM
Sarah-I (Member, Jul 2004, UK, 126 posts)

Hans witty? Don't make me laugh!!!!!! Arrogant and deluded? Yes, certainly, but witty? No, I don't think so!!!
__________________
Sarah-I. RN, Homeopath, Craniosacral Therapist, Therapeutic Massage Therapist, Reiki Master Teacher.
#98 - 17th January 2005, 03:59 PM
bwv11 (Senior Member, May 2002, USA, 1,020 posts)

hans,

yes, scientists test for things they don't understand, all the time. and often struggle until they find a correct methodology, that fits the case. in this, statistical research is like basic research - a fairly mundane example being the numerous unsatisfactory experiments conducted by edison, until he found a serviceable filament. this is also the same process we pursue in clinical practice, analyzing data and working with it until we find the correct, or at least a serviceable, understanding of a problem, and a useful intervention. needless to say, knowledge of the subject will continue to expand ... even as there have been improvements upon the original light bulb.

but scientists don't test things that they don't know about. unless you think they do? unless you think they can? first they have to postulate it. or happen upon it unexpectedly, as with the discovery of aggregates of molecules produced during serial dilution.

you might also try to fit carn's observations into your schema, regarding the need to understand the details of his goat universe, before one could test it.


carn,

can a flat earther understand orbital dynamics? the answer is 'yes,' to the extent that he can regurgitate what he has been fed, or fed himself, and even perform, possibly, sophisticated calculations. but if in his heart of hearts, he believes it's all b.s., then i propose his 'understanding' is inadequate and his demonstration of 'understanding' a facade, a sham.

in short, you still have to be able to weigh the evidence appropriately, as compared, for example, to hans' position, which he has finally made explicit over at randiland, that he rejects evidence provided from clinical practice. the flat earther, by comparison, might simply indicate his belief that the curved glass of the telescope distorts your measurements.

weighing the evidence is part of bias: if you believe in your 'instrumentation,' statistical or clinical or whatever, then you ascribe a certain degree of heuristic value to observations you make using that technology; if you don't believe a particular instrumentation, or methodology, is valid, then you disregard its measurements and disbelieve conclusions made on that basis, in favor of conclusions you make on the basis of your own methodology.

needless to say, the 'faithful,' especially if they are scientifically inclined, can adduce countless 'facts' in support of their findings, and in support of their belief that the outcomes of their own methodology are superior to the outcomes of the other guy's methodology. ok, regression to the mean, observer bias, blah blah blah. it doesn't refute nuthin, just restates your own faith in your own mantra.

bach
#99 - 17th January 2005, 10:18 PM
bwv11 (Senior Member, May 2002, USA, 1,020 posts)

quote (carn): "The only thing where a "yes/no" answer is the only satisfying result is "is a sub-Avogadro homeopathic remedy different from carrier substance?". Anything else but yes or no does not make sense with that question."


yes, we are in agreement on this, which is why i think a proving trial is the best shot at a good trial. i have previously even stated that i thought a good trial should guarantee a positive outcome for homeopathy, but after my lengthy exchange with hans (http://www.otherhealth.com/showthread.php?t=3453&page=2) i started thinking, geeessh, the confounders are really (potentially) pretty significant even in this case, so i have modified my claim, and now say that i think that a proving trial probably should be capable of showing positive results for verum.

for anything else, the record of statistical research into clinical practice of all sorts is simply, in my experience, so shoddy as to render response to your hypotheticals easy: no, whatever you come up with will not convince me to trust your method over homeopathic (or other clinical) method, because even when they seem really clever, research trials too often end up asserting outcomes that aren't justified on the basis of the actual experiment and its conceptualization, even if the numbers you guys produce appear to suggest otherwise.

bach
#100 - 17th January 2005, 11:02 PM
bwv11 (Senior Member, May 2002, USA, 1,020 posts)

Quote:
Originally Posted by Carn
Question is what is more imperfect.



How do you know that explanations of case studies in complex cases are correct?
E.g. the case study at http://www.hpathy.com/casesnew/khopa...physagria1.asp concludes that the improvement was due to 5 doses of some remedy(actually only for one is named when it was given). But i fail to see the point that disproves the possiblity, that she mainly improved due to the psychological effects of finally talking about all the stuff that worried her through the years and her husband finally getting a idea what her problems are.
Hey, you know far more about that than i, read the case study and tell me, could such a improvement be intiated and enhanced solely by psychotherapy?

thank you carn, for trying to address these questions with something more than the glib challenges that usually characterize skeptical responses to clinical case taking, and case reporting, and treatment itself. seriously, you wiggle your way into the process, maybe not a lot, but a tad further than is ordinarily the case. i hope you enjoy discussing possibilities, though, as much as challenging unfamiliar beliefs, for that is, i suggest, the avenue to learning:

in brief, you are still asking a yes/no question: "But i fail to see the point that disproves the possiblity, that she mainly improved due to the psychological effects of finally talking about all the stuff that worried her through the years and her husband finally getting a idea what her problems are." now, carn, HOLD ONTO YOUR HAT: i don't know either. chew on that for awhile. it's obviously important - but i'm in a hurry right now, so i'll just tantalize you with it.

but let me add, that with the dbpc, you also don't know, and can't find out. read the freud case study i've linked you to (over at randiland?). the guy makes approximately 3 trillion attributions of cause and effect. can we trust any of them to be 'true?' what does 'true' mean? how would you find out in a dbpc, whether any of these interpretations were 'true?'

in a way, i agree absolutely, as i've tried to explain it, that clinical process is not repeatable, but its methodology, including the case study process, is systematized, and does provide a certain measure of repetition. you don't have to create a million different rocks: that's all you've got, so you work with them. but on the other side, you can not do a repeatable experiment on most treatment interventions at all: if a premature interpretation scares away a patient, how do you design a trial that measures that process, to prove whether a premature interpretation scares a patient away? by definition, 'premature' is defined in context of personality of therapist + personality of patient + patient's sensibilities + course and progress of treatment to date + potential 'shock value' of the content of the interpretation + handling of the error in the sessions following + etc.

i can see it now: ok, we'll give this group a premature interpretation one time a week, on monday mornings at 10AM ...

quote (carn): "Whether something gives alternative explanation or not is only relevant for the well being of our world views ... the crucial thing is correct/wrong (or any nasty state between)."

not at all, and this is where my comment above comes in again: we do not know whether the interpretation is true. in psychotherapy, we do not know whether any interpretation is necessarily true, though in all these connections, we operate essentially within a 'confidence range.' but an important point to consider is that in treatment there are many things going on at the same time. so more than one interpretation may be possible, with both, or all of them, enjoying status to one degree or another as 'true.'

you see, part of the problem with statistical research is the assumption that it can really find out the 'truth,' because very often, though not always, there are many truths to be had at the same time.

there was an interesting (and credible, imo) research study some years ago, that found that patients at a clinic, who only went through the registration process, but never made an appointment when their turn on the waiting list came up, made the same degree of progress as patients who stuck around and actually entered a course of psychotherapy at the clinic. the interpretation was that being listened to, even if only by the intake clerk, combined with fortuitous life events in subsequent months and years, and a couple of other factors, accomplished 'therapeutic' ends.

and that's fine with me. there's lots that goes on that helps. true also in homeopathy. but in homeopathy, there's a more direct influence, of course, and that's the remedy. does the remedy have an effect above placebo? answer: yes. problem: develop a protocol that is up to the challenge of measuring it. because efficacy is one of those fundamental points for which there is a yes/no answer.


quote (carn): "That is the reason why independant confirmation for any result is fundamental. You explain give your yardstick to someone else and he closely looks for errors, can create his own ones and compare them and measure the stones as well. This gives some chances to recognize a wrongly marking,"

you simply underestimate the blindness of training, the blindness that comes from thinking you've got it all covered, so that you stop wondering where the problem is.

quote (carn): "though there can never be a guarantee."

so that words like 'can never be a guarantee' end up spawning a presumption that, no, we've really got it nailed, just it isn't good form to deny the infinitesimal chance that it really is wrong. carn: have you ever been wrong about something? yes, of course. could you have predicted ahead of time what your mistake would be? of your current beliefs, will you change your mind about one or more of them some day? can you predict which ones? i don't think so.

But with case studies you have a method that creates a new individual yardstick for every new rock, and never measures the same rock twice. If your method is flawed, then you will measure wrongly each time as well. And you have the further disadvantage that even if someone has an idea of what is wrong with your method, he could never show whether his idea really changes anything, because he can neither remeasure the same stone, nor create his own yardstick for the same stone and compare it to yours. A fundamental flaw could be in there without any chance to detect it.

of course. but there are more double checks than you understand.

You know, that is the crux: we do not know exactly what the placebo effect looks like in practice, and therefore case studies can never accurately take its effect into account.

But i do have a definition of the placebo and verum effect, even if it is useless in practice:
Make 2 perfect copies of the universe the moment the patient has the idea to go to the doc, so that there are three universes in all.
In the first universe you do not change anything.
In the second universe you change the treatment so that one component can have only a very, very small effect (e.g. replace remedies with water/sugar/alcohol, or let the patient keep no diet, or ...), without the patient or the doc knowing about it (yep, for acupuncture it is pretty hard to determine the placebo effect).
In the third you remove the component entirely, with both the patient and the doc knowing it is not there, but not knowing it is an experiment.
The difference in objective and subjective health criteria between the patients in the first two universes is the verum effect of this component; the placebo effect is the difference between the patients in the second and third universes.
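With made-up numbers, the bookkeeping of that three-universe definition looks like this (a sketch only; the health scores are invented):

```python
# Hypothetical health scores (higher = better) for the same patient
# in the three universes of the thought experiment above.
full_treatment = 8   # universe 1: nothing changed, full treatment given
blinded_placebo = 6  # universe 2: component secretly neutralised, nobody knows
open_omission = 5    # universe 3: component openly removed, both know

# Verum effect: what the component itself contributes.
verum_effect = full_treatment - blinded_placebo

# Placebo effect: what believing the component is there contributes.
placebo_effect = blinded_placebo - open_omission

print(f"verum effect: {verum_effect}, placebo effect: {placebo_effect}")
```

Of course the whole point of the definition is that you can never run it: there is only one universe per patient, which is why trials have to approximate universes 1 and 2 with randomised groups instead.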

its interesting to me that skeptics are always creating these weird scenarios. seems to me, it reflects the impossibility of actually doing anything in reality that offers anywhere near that much clarity.

And delusions can cross cultural borders: UFOs are now reported worldwide, astrology has long since spanned the world, and acupuncture, feng shui and i don't know what other asian stuff is running through Europe and the US (i do not know that it is all delusion, but certainly some of it is). I can start a thread about that on Randiland; people there will know more examples, if you need them. I guess the surprise would be greater if some delusions could not cross cultural borders; the only difference i expect is the speed.

And the point about superior bedside manner is actually an advantage in spreading homeopathy, if it is a delusion: homeopaths are really nice, and you feel well when being treated by them. That is a definite marketing advantage over the nasty conventional med who does not even look at you while asking his students whether they think your leg has to be amputated.

of course mass delusions are possible. i did not mean to argue otherwise. but i find the specificity of process remarkable even so. interesting that you include acupuncture in your asian survey, which is certainly gaining adherents even amongst conventional docs.


Do you mean observations regarding the health of a patient?
How did you notice this?

Also, good observation alone helps little if the reasoning afterwards gets screwed up; both have to be good.

yeah, one of my main complaints against the dbpc.

About Einstein: there is something called Bose-Einstein statistics, so Einstein did use statistics when it was needed, and i'm very sceptical that he would have avoided using statistics if he had ever tried to find out the truth about homeopathy. medicine is one of the fields that screams for statistics even when looking at very basic things, and therefore especially when looking at complicated things. see my post several posts up.

Carn

bach
__________________
"The need to perform adjustments for covariates...weakens the findings." BMJ Clinical Evidence: Mental Health, (No. 11), p. 95.... It's that simple, guys: bad numbers make bad science.

