otherhealth.com > Homeopathy > Research and the Scientific Validity of Homeopathy
#111 - 9th November 2004, 08:15 PM - MRC_Hans

Quote:
blindfolded: are you suggesting that the harvesters be misled to believe that all subjects have in fact received verum? or are you suggesting that they should be asked to make their evaluation based on the "as if" assumption, though they would be aware that some subjects actually received placebo? or both, as control? either way, it has some possibilities.
That is exactly what I suggest: The harvesters, who know that they partake in an experiment, evaluate all test subjects in the same way, not knowing which group they belong to. Whether their basic assumption is verum, placebo, or indeterminate is entirely immaterial, as long as they have no way of skewing the result.
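
To make it concrete, here is a minimal sketch of that arrangement in Python (the group sizes and the scoring function are invented placeholders, not from any actual protocol):

[CODE]
import random

random.seed(1)

# Blind assignment: each subject gets a code; only this key links codes to groups.
subjects = [f"S{i:03d}" for i in range(100)]
key = {s: random.choice(["verum", "placebo"]) for s in subjects}

def harvest(subject_code):
    """The evaluator sees only the coded subject, never the group label."""
    # Placeholder for the real clinical evaluation; returns a symptom count.
    return random.randint(0, 5)

scores = {s: harvest(s) for s in subjects}  # all evaluation done fully blinded

# Unblinding happens only after every evaluation is locked in.
verum = [scores[s] for s in subjects if key[s] == "verum"]
placebo = [scores[s] for s in subjects if key[s] == "placebo"]
print(sum(verum) / len(verum), sum(placebo) / len(placebo))
[/CODE]

The point of the structure is that harvest() has no access to key, so no assumption the harvester makes can leak into the scores.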

Quote:
nevertheless, at the present time i remain convinced that, done properly, it should not be difficult to achieve a positive outcome with dbpc.
Ahh, but I quite agree. I think it should be a piece of cake. But... then why hasn't it been done?

Quote:
btw, you are correct in saying that the brainstorming session must be followed by a session to evaluate practicality ... but with regard to future physics, we are not even close to shutting down the brainstorming phase, unless you want to close the patent office. in fact ... is that a key i see in your hand, poised at the lock...?
If it is, it is only to lock Kayveeh in before he makes a complete fool of himself. You know, brainstorming is not the same as blowing your mind...

Hans
__________________
You have a right to your own opinion, but not to your own facts.
#112 - 10th November 2004, 03:48 AM - bwv11

quote: "Ahh, but I quite agree. I think it should be piece of cake. But... then why hasn't it been done?"

duh - do we really need to start all over again, with this discussion of the most obvious flaws in the bell research? anyway, what i said was that i thought it should be easy ... if ... it was done right. which is what we've been talking about, remember?

and that's assuming the dbpc really can do it, which ain't necessarily so. it's failed every attempt it's made so far to measure established facts of clinical practice; i see no reason to really expect you guys to do better just because i've pointed the way ... to be sure, with your excellent assistance. you know, i have faith in you guys: if there's a wrong turn in the path, you'll take it, and if there's a wrong definition of a term, you'll subscribe to it.

aaanyway,

"The harvesters, who know that they partake in an experiment, evaluate all test subjects in the same way, not knowing which group the belong to."

ok, very interesting - but it's still not the same as doing an actual evaluation in a real proving, in which they do know what they are dealing with, namely, real people taking a real remedy. i'm unsure of the impact on the results of knowing that they may be interpreting a masquerade. once again, superficially i'd expect the impact to be trivial ... but ...?
__________________
"The need to perform adjustments for covariates...weakens the findings." BMJ Clinical Evidence: Mental Health, (No. 11), p. 95.... It's that simple, guys: bad numbers make bad science.


#113 - 10th November 2004, 06:44 AM - MRC_Hans

Quote:
Originally Posted by bwv11
*snip*
aaanyway,

"The harvesters, who know that they partake in an experiment, evaluate all test subjects in the same way, not knowing which group the belong to."

ok, very interesting - but it's still not the same as doing an actual evaluation in a real proving, in which they do know what they are dealing with, namely, real people taking a real remedy. i'm unsure of the impact on the results of knowing that they may be interpreting a masquerade. once again, superficially i'd expect the impact to be trivial ... but ...?
Mmm, I think this is central; please explain why it might make a difference in the interpretation that the investigators know the remedy is real?

Hans
__________________
You have a right to your own opinion, but not to your own facts.
#114 - 10th November 2004, 09:52 AM - bwv11

Quote:
Originally Posted by MRC_Hans
Mmm, I think this is central; please explain why it might make a difference in the interpretation that the investigators know the remedy is real?

Hans
in the sense that, if we are supposed to be testing things that are going on in the real world, then we ought to be testing ... well, the actual things that are actually going on in the real world ... aaand, in the real world, investigators know their subjects have taken a real remedy. if that is not the case in the trial, then the trial is not duplicating the real situation.

even if the investigators do know the remedy was real, though, they also know that they are interpreting for a trial; in other words, they are being watched: and we all know how being watched can make one self-conscious and interfere with normal levels of performance.

so either way, there is a confounder at work that specifically affects verum results.

my hunch, and obviously that's all it is, is that there will therefore be some interference with investigator judgment; in fact, that this is unavoidable in principle. but my hunch is also that the interference will be slight, probably very slight. regardless, it is important to identify all features of a situation, or as many as possible, and to analyze (clinically and/or statistically) effects after the fact.

ignoring any detail is simply sloppy practice.... with regard to the current discussion, i'd willingly agree to ignore even this 'confounder,' assuming its effect will be insignificant.

a negative result to the trial, however, might make me reconsider. after all, as i've stated, i consider it to be a failure of statistical method, to have so far failed to confirm well established facts of clinical practice. if your methods fail again, even with a trial that seems on the surface to meet all of my objections, then we must imho investigate further, to determine what we've overlooked, in evaluating suitability of dbpc, for example, or the specific protocol that's been implemented.
__________________
"The need to perform adjustments for covariates...weakens the findings." BMJ Clinical Evidence: Mental Health, (No. 11), p. 95.... It's that simple, guys: bad numbers make bad science.


#115 - 10th November 2004, 02:50 PM - MRC_Hans

Quote:
Originally Posted by bwv11
in the sense that, if we are supposed to be testing things that are going on in the real world, then we ought to be testing ... well, the actual things that are actually going on in the real world ... aaand, in the real world, investigators know their subjects have taken a real remedy. if that is not the case in the trial, then the trial is not duplicating the real situation.

Uhhh, isn't that the same as saying that testing cannot be done? OK, you later say you assume the difference will be small, but then what is the problem? The purpose is to compare verum and placebo, so even if the evaluation may differ somewhat from normal, there ought to be a distinctive difference between the groups.
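
To spell out that logic with a toy calculation (all numbers invented): a shift that applies equally to both groups drops out of the comparison.

[CODE]
# Toy numbers: true mean symptom counts, plus a uniform 'trial setting'
# bias b that hits both groups alike.
true_verum, true_placebo, b = 12.0, 7.0, -1.5

observed_verum = true_verum + b      # 10.5
observed_placebo = true_placebo + b  #  5.5

# The group difference is untouched: (v + b) - (p + b) = v - p.
print(observed_verum - observed_placebo)  # 5.0, same as 12.0 - 7.0
[/CODE]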

Quote:
Originally Posted by bwv11
so either way, there is a confounder at work that specifically affects verum results.

Why do you feel it affects verum more than placebo? I mean, why do YOU think so? After all, you expect that the verum has a real-world effect. I, of course, expect it to affect the verum, because I don't expect that the verum effect is in the interpretation. But the very point of it all is that it won't matter what we expect. The real-world effects are what matters for the result.

*snip*

Quote:
Originally Posted by bwv11
a negative result to the trial, however, might make me reconsider. after all, as i've stated, i consider it to be a failure of statistical method, to have so far failed to confirm well established facts of clinical practice.

Are you saying here that you will attribute a negative result to the test method, no matter how carefully the test is designed? If that is the case, then there is no reason to conduct tests, as far as you are concerned.


Quote:
Originally Posted by bwv11
if your methods fail again, even with a trial that seems on the surface to meet all of my objections, then we must imho investigate further, to determine what we've overlooked, in evaluating suitability of dbpc, for example, or the specific protocol that's been implemented.

Or, we could conclude that the remedy had no effect different from placebo.

Hans
__________________
You have a right to your own opinion, but not to your own facts.
#116 - 10th November 2004, 06:44 PM - bwv11

Quote:
Originally Posted by MRC_Hans
Uhhh, isn't that the same as saying that testing cannot be done?

no, not the same thing at all. for example, testing something like the effect of a medication on neurotransmitters, or absorption of this or that nutrient, the process itself is so narrowly constrained that lab research effectively 'duplicates' real-life circumstances; that is, the 'environment' of the experiment vs real experience is not a factor in the chemical process. this is similar to, though more extreme than, testing allopathic medications, because their effects are more targeted - less of what i would call 'selection bias' in a list such as used in the bell research, as well as other confounders. likewise, measuring results in problem-focussed psychotherapy is more easily achieved than measuring results in a dynamic therapy, in which outcomes are more variable, process is stretched out over lengthy time periods, and course of treatment is less certain at the outset because so many individualized elements end up entering into it.

Quote:
Originally Posted by MRC_Hans
OK, you later say you assume the difference will be small, but then what is the problem? The purpose is to compare verum and placebo, so even if the evaluation may differ somewhat from normal, there ought to be a distinctive difference between the groups.

i agree. but things don't always work out as i expect.

Quote:
Originally Posted by MRC_Hans
Why do you feel it affects verum more than placebo? I mean, why do YOU think so? After all, you expect that the verum has a real-world effect. I, of course, expect it to affect the verum, because I don't expect that the verum effect is in the interpretation. But the very point of it all is that it won't matter what we expect. The real-world effects are what matters for the result.

there is no 'real-world effect' pure and simple, only a real-world effect that is mediated through perception and interpretation. i would expect mistakes to affect outcome in favor of placebo for two reasons, assuming (my opinion, after all) that verum does have a real effect: 1) a mistaken interpretation of a placebo response adds to the symptom count harvested from the placebo group; 2) a mistaken interpretation of a verum response subtracts from the symptom count harvested from the verum group.

i really would not expect the effect of this to be significant, but a few things make me reserve judgement: first, i've been wrong before; second, i'm tempted to say that i'd expect at least a limited number of such errors to occur; third, i am exhaustively familiar with the way in which behavior can be affected by circumstance, such as a harvester feeling he is being watched, or being distracted by uncertainty about what he is interpreting; fourth, i just don't know how sensitive the dbpc is to this particular confounder, or to any other. it may be that it is more sensitive than i would assume.


*snip*

Quote:
Originally Posted by MRC_Hans
Are you saying here that you will attribute a negative result to the test method, no matter how carefully the test is designed?

yes, but please note: i simply do not have the faith in statistical measures that you have, specifically in measuring dynamic processes. your arguments - your own faith in your own methods - do not convince me that the errors of past experiments were insignificant, and they don't convince me that the mountains of scientific evidence supporting my convictions should be overturned.

Quote:
Originally Posted by MRC_Hans
If that is the case, then there is no reason to conduct tests, as far as you are concerned.

not at all. i firmly believe that dbpc ought to be able to measure remedy effect in a well-conducted proving trial. but i am skeptical (i guess that's the word, after all) enough of your processes that i wouldn't be upset to find out i was wrong. i am even less sanguine about your capability to measure process - that is, treatment - though even there, i think methodology ought to be capable of being adapted to the unique requirements of an alternative paradigm. note also that, to test homeopathy, you have to test it according to its own paradigm, otherwise you are not testing homeopathy. it is complicated, it may not succeed in measuring everything you want it to measure, but there should certainly be a range of effectiveness for statistical methodology.

as far as i am concerned, knowledge gleaned from any source is still knowledge gleaned. i don't turn my nose up at any of it. i believe statistical research is typically very badly designed in its applications to clinical practice, though results such as the one you found regarding vioxx support the notion of the value of such testing. i am confident you could provide many more examples, and i would personally enjoy seeing statistical measures achieve a success, and start to contribute in that way to progress in clinical practice.

besides, it would be most gratifying to see the results of so many studies overturned - studies that have achieved statistical reliability at the expense of clinical credibility.

Quote:
Originally Posted by MRC_Hans
Or, we could conclude that the remedy had no effect different from placebo.

yes, of course we could. off hand, however, i suspect that you are more inclined to reach that conclusion than i am.

__________________
"The need to perform adjustments for covariates...weakens the findings." BMJ Clinical Evidence: Mental Health, (No. 11), p. 95.... It's that simple, guys: bad numbers make bad science.


#117 - 11th November 2004, 07:27 AM - MRC_Hans

*Snipping extensively*
Quote:
Originally Posted by bwv11
but things don't always work out as i expect.

I know that feeling.

Quote:
Originally Posted by bwv11
so either way, there is a confounder at work that specifically affects verum results.
*snip*
i would expect mistakes to affect outcome in favor of placebo for two reasons, assuming (my opinion, after all) that verum does have a real effect: 1) a mistaken interpretation of a placebo response adds to the symptom count harvested from the placebo group; 2) a mistaken interpretation of a verum response subtracts from the symptom count harvested from the verum group.

I would expect that there would be both false positives and false negatives in both groups. Ideally they should cancel out.
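
A toy simulation of that cancelling, assuming (purely for illustration) that misreadings are mean-zero and hit both groups identically:

[CODE]
import random

random.seed(0)

def observed(true_count):
    """Blinded evaluation with mean-zero error: as likely to add a symptom as to drop one."""
    return true_count + random.choice([-1, 0, 1])

verum = [observed(4) for _ in range(1000)]    # true mean count 4
placebo = [observed(2) for _ in range(1000)]  # true mean count 2

# Errors balanced across the groups leave the observed gap near the true gap of 2.
print(sum(verum) / len(verum) - sum(placebo) / len(placebo))
[/CODE]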

Quote:
Originally Posted by bwv11
i really would not expect the effect of this to be significant, but a few things make me reserve judgement: first, i've been wrong before; second, i'm tempted to say that i'd expect at least a limited number of such errors to occur; third, i am exhaustively familiar with the way in which behavior can be affected by circumstance, such as a harvester feeling he is being watched, or being distracted by uncertainty about what he is interpreting; fourth, i just don't know how sensitive the dbpc is to this particular confounder, or to any other. it may be that it is more sensitive than i would assume.

Yes, the experimental setting will influence the outcome, no doubt about that. It is a basic law of physics that you cannot measure anything without influencing it, but the purpose of the dbpc procedure is exactly to minimize such effects by balancing them out. As for sensitivity, well, that is mainly a question of group sizes. The bigger the groups, the better the sensitivity.
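
The group-size point is just the standard back-of-envelope power calculation. A sketch, with an invented effect size and standard deviation:

[CODE]
from math import ceil

def n_per_group(delta, sigma, z_alpha=1.96, z_beta=0.84):
    """Rough per-group size for comparing two means (normal approximation).
    delta: smallest group difference worth detecting; sigma: outcome SD.
    The z values correspond to 5% two-sided significance and 80% power."""
    return ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Halving the detectable difference roughly quadruples the required group size.
for delta in (2.0, 1.0, 0.5):
    print(delta, n_per_group(delta, sigma=3.0))
[/CODE]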


Quote:
Originally Posted by bwv11
yes, but please note: i simply do not have the faith in statistical measures that you have, specifically in measuring dynamic processes.

You know, counting symptoms is hardly advanced math. The only point where slightly involved statistical calculation comes into this is when figuring out how significant the result is, in other words, how likely it is to be due to chance. All the rest is primary school math.
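
To show how little machinery that significance step needs, here is the whole thing as a permutation test on invented symptom counts (the counting really is primary school stuff; only the shuffling estimates the chance result):

[CODE]
import random

random.seed(0)

# Invented symptom counts harvested from two groups of ten.
verum = [5, 3, 4, 6, 2, 5, 4, 7, 3, 5]
placebo = [2, 1, 3, 2, 4, 1, 2, 3, 2, 1]

observed_gap = sum(verum) / len(verum) - sum(placebo) / len(placebo)

# Permutation test: how often does random relabelling of the subjects
# produce a gap at least as large as the one actually observed?
pooled = verum + placebo
extreme = 0
trials = 10000
for _ in range(trials):
    random.shuffle(pooled)
    gap = sum(pooled[:10]) / 10 - sum(pooled[10:]) / 10
    if gap >= observed_gap:
        extreme += 1

print("gap = %.2f, p ~ %.4f" % (observed_gap, extreme / trials))
[/CODE]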

Quote:
Originally Posted by bwv11
note also that, to test homeopathy, you have to test it according to its own paradigm, otherwise you are not testing homeopathy.

But surely it is not incompatible with the homeopathic paradigm to assume that the verum group will react differently from the placebo group?

Quote:
Originally Posted by bwv11
*snip*
yes, of course we could. off hand, however, i suspect that you are more inclined to reach that conclusion than i am.

Well, of course. For me it would just be a confirmation. That's life.

Note, however, that my position is not that homeopathy does not work, only that the remedies don't work.


Hans
__________________
You have a right to your own opinion, but not to your own facts.
#118 - 11th November 2004, 01:11 PM - bwv11

Quote:
Originally Posted by MRC_Hans
*Snipping extensively*

I would expect that there would be both false positives and false negatives in both groups. Ideally they should cancel out.
in principle, of course, you are right, but even in 'objective' research, each experimental situation, or subject, is unique - one must still 'individualize' (you should pardon the term) the test design and interpretation schemes. as between verum and placebo, an error in the placebo group can only be a false positive; in the verum group, i was referencing the fact that an error would be a false negative, though it is more accurate to say that there could also be false positives - i.e., some verum provers will produce placebo results, some will produce verum results, and some will produce both. assuming for simplicity that false positives on the placebo responses of verum provers balance off with the false positives from the placebo provers, that still leaves the body of verum responses, and with regard to this group there can only be false negatives, thus handicapping verum performance.
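
a toy simulation of that asymmetry, with invented error rates (it dramatizes my hunch; it does not establish that the rates are realistic):

[CODE]
import random

random.seed(0)

# The asymmetry described above, made concrete with invented rates:
# placebo responses can only be misread upward (false positives),
# genuine verum responses can only be misread downward (false negatives).
FP, FN = 0.15, 0.15

def harvest_placebo(true_count):
    spurious = sum(random.random() < FP for _ in range(3))
    return true_count + spurious

def harvest_verum(true_count):
    missed = sum(random.random() < FN for _ in range(true_count))
    return true_count - missed

placebo = [harvest_placebo(1) for _ in range(1000)]  # true mean count 1
verum = [harvest_verum(4) for _ in range(1000)]      # true mean count 4

# The true gap is 3; one-sided errors squeeze it from both ends.
print(sum(verum) / 1000 - sum(placebo) / 1000)
[/CODE]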

naturally, this situation also holds true in testing conventional medications, but in that situation i would argue that there is less room for interpretation, and therefore less room for error, because the action of conventional meds is designed for specific responses: in short, we know pretty much exactly what we're looking for; also, as the action of conventional meds is more "mechanical," as compared to "dynamic," measurement of effect should be subject to less error of observation, measurement, and interpretation.

Quote:
Originally Posted by MRC_Hans
Yes, the experimental setting will influence the outcome, no doubt about that. It is a basic law of physics that you cannot measure anything without influencing it, but the purpose of the dbpc procedure is exactly to minimize such effects by balancing them out. As for sensitivity, well, that is mainly a question of group sizes. The bigger the groups, the better the sensitivity.

well, yes, except we don't really know how large the effect will be, and therefore how large our sample will need to be. as i've reflected on this during the present discussion, i have become increasingly uncomfortable with my working assumption that it would be relatively easy to control for confounders with careful design: as in my initial comments on the bell research, i try to be pretty conservative in my claims, in this case suggesting that confounding factors "could be" a contributor "to one degree or another" etc. to the failure to show positive results for homeopathy.

going over the material again makes me feel a stronger statement could be made: that such confounders were clearly present, and that they are difficult to control. even so, past trials i have seen appear so poorly constructed from this pov that i continue to believe a fairly rigorous attention to these details would result in different outcomes.



Quote:
Originally Posted by MRC_Hans
You know, counting symptoms is hardly advanced math. The only point where slightly involved statistical calculation comes into this is when figuring out how significant the result is, in other words, how likely it is to be due to chance. All the rest is primary school math.

i am not and have never been concerned about your computational skills. remember, i am talking about the credibility of the instrument, and therefore of the numbers that come out of the trial. wrong numbers + perfect calculation = wrong answer.

Quote:
Originally Posted by MRC_Hans
But surely it is not incompatible with the homeopathic paradigm to assume that the verum group will react differently from the placebo group?

correct. the problem, though, is in the observation/perception/measurement of those responses.

Quote:
Originally Posted by MRC_Hans
Note, however, that my position is not that homeopathy does not work, only that the remedies don't work.

now cut that out! haven't you learned yet that you can't give your own idiosyncratic definition to terms, and then continue the discussion as though you and your co-discussant were still talking about the same thing? if you want to talk about homeopathy, then you have to talk about it in terms of its remedies. if the remedies were to turn out to be water, and all the homeopaths were convinced, then we could go about redefining terms. for now, as you still (i believe?) inhabit the real world, you would do better with a coinage such as "homeopathic placebo therapy."

__________________
"The need to perform adjustments for covariates...weakens the findings." BMJ Clinical Evidence: Mental Health, (No. 11), p. 95.... It's that simple, guys: bad numbers make bad science.


#119 - 11th November 2004, 02:48 PM - MRC_Hans

Quote:
now cut that out! haven't you learned yet that you can't give your own idiosyncratic definition to terms,
I trust, however, that I am free to state my present working assumption, just like you state yours, OK? It has nothing to do with definition of terms.

Hans
__________________
You have a right to your own opinion, but not to your own facts.
#120 - 11th November 2004, 03:17 PM - bwv11

quote: "I trust, however, that I am free to state my present working assumption, just like you state yours, OK? absolutely. It has nothing to do with definition of terms." of course it does. "homeopathy" is healing through the instrumentation of potentized remedies. the statement you made, to which i object, was that your "...position is not that homeopathy does not work, only that the remedies don't work." well, that's a contradiction in terms. your working assumption, if i might rephrase it for grammatical precision, is that "homeopathy does not work, because it's remedies don't work." ipso facto. in the nature of things.

add: i stand by my suggestion for your use of the term "homeopathic placebo therapy."
__________________
"The need to perform adjustments for covariates...weakens the findings." BMJ Clinical Evidence: Mental Health, (No. 11), p. 95.... It's that simple, guys: bad numbers make bad science.

