id (string, length 1–5) | prompt (list of 1 chat message) | completion (list of 1 chat message) | Label (string, 3 classes) |
---|---|---|---|
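The columns above pair a single user message (the article plus the ICO question) with a single assistant message and a gold `Label` drawn from three classes. A minimal sketch of how rows with this schema could be iterated, assuming the split has been exported as JSON Lines; the filename `rct_ico_rows.jsonl` is a placeholder, not the dataset's real name:

```python
import json

DATA_PATH = "rct_ico_rows.jsonl"  # hypothetical export of this split

def iter_rows(path):
    """Yield (id, question_prompt, gold_label) for each row of the dump."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            row = json.loads(line)
            # 'prompt' and 'completion' are single-element lists of chat messages.
            user_msg = row["prompt"][0]["content"]
            yield row["id"], user_msg, row["Label"]

if __name__ == "__main__":
    for row_id, prompt_text, label in iter_rows(DATA_PATH):
        print(row_id, label, len(prompt_text))
```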
518 |
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: A randomized trial of artemether-lumefantrine \n\n ABSTRACT.BACKGROUND:\nThe use of antimalarial drug combinations with artemisinin derivatives is recommended to overcome drug resistance in Plasmodium falciparum. The fixed combination of oral artemether-lumefantrine, an artemisinin combination therapy (ACT) is highly effective and well tolerated. It is the only registered fixed combination containing an artemisinin. The trial presented here was conducted to monitor the efficacy of the six-dose regimen of artemether-lumefantrine (ALN) in an area of multi-drug resistance, along the Thai-Myanmar border.\n\nABSTRACT.METHODS:\nThe trial was an open-label, two-arm, randomized study comparing artemether-lumefantrine and mefloquine-artesunate for the treatment of uncomplicated falciparum malaria with 42 days of follow up. Parasite genotyping by polymerase chain reaction (PCR) was used to distinguish recrudescent from newly acquired P. falciparum infections. The PCR adjusted cure rates were evaluated by survival analysis.\n\nABSTRACT.RESULTS:\nIn 2001–2002 a total of 490 patients with slide confirmed uncomplicated P. falciparum malaria were randomly assigned to receive artemether-lumefantrine (n = 245) or artesunate and mefloquine (n = 245) and were followed for 42 days. All patients had rapid initial clinical and parasitological responses. In both groups, the PCR adjusted cure rates by day 42 were high: 98.8% (95% CI 96.4, 99.6%) for artemether-lumefantrine and 96.3% (95% CI 93.1, 98.0%) for artesunate-mefloquine. Both regimens were very well tolerated with no serious adverse events observed attributable to either combination.\n\nABSTRACT.CONCLUSION:\nOverall, this study confirms that these two artemisinin-based combinations remain highly effective and result in equivalent therapeutic responses in the treatment of highly drug-resistant falciparum malaria.\n\nBODY.INTRODUCTION:\nMulti-drug resistance of Plasmodium falciparum is a major health problem in many countries and the number of drugs available, effective and affordable is very limited [1]. Along the Thai-Myanmar border, P. falciparum has developed resistance to almost all available antimalarials [2]. As in tuberculosis and HIV where resistance to drugs is a serious issue, combination therapy has been applied to malaria treatment [3]. The use of antimalarial drug combinations with artemisinin derivatives has been advocated and is now implemented in many countries [4]. An extensive amount of information on efficacy and safety of mefloquine has been reported and reviewed [5]. Artemisinin or Qinghaosu is an extract of the medical plant Qinghao (Artemisia annua), which together with its derivatives, artesunate and artemether are the most active antimalarial compounds to date [6]. 
The artemisinin derivatives have a rapid onset of therapeutic effect, where a single dose can reduce the parasite biomass by a factor of approximately 104 every 48 hours. In addition, they have a very short terminal elimination half-life of less than 2 hours [7]. Previous studies showed that once-daily administration with artemisinin derivatives produced equivalent cure rates to more frequent administration [8]. A three-day course of artesunate combined with high dose mefloquine has become the standard treatment combination for P. falciparum infections in Thailand [9]. Oral artesunate-mefloquine is the most widely used combination. More recently, a fixed combination of oral artemether-lumefantrine (formerly known as benflumetol) has become available. Artemether is a methyl-ether derivative of artemisinin. Lumefantrine is a racemic fluorine derivative with high blood schizontocidal activity [10]. Both artemisinin combination therapies (ACT) are highly effective and well tolerated [11]. However, resistance to mefloquine and/or to lumefantrine, would compromise both combinations. Therefore it is important to monitor the therapeutic efficacy and thus provide advance warning in case of change. The trial presented here was conducted to monitor the efficacy of the six-dose regimen of artemether-lumefantrine combination given over three days for the treatment of uncomplicated P. falciparum infections in adults and children on the western border of Thailand.\n\nBODY.PATIENTS AND METHODS:\nThis study was conducted in the Maela and Mawker Tai malaria clinics of the Shoklo Malaria Research Unit (Mae Sot, Thailand)between July 2001 and June 2002. Patients were recruited from two populations: displaced people of the Karen ethnic minority and migrant workers living on the western border of Thailand. This is an area of low and unstable transmission of Plasmodium vivax and multi-drug-resistant P. falciparum [12]. The trial was an open-label, two-arm, randomized study comparing artemether-lumefantrine andmefloquine-artesunate. This study was approved by the Ethical and Scientific Committees of the Faculty of Tropical Medicine, Mahidol University.\n\nBODY.PATIENTS AND METHODS.PROCEDURES:\nPatients >10 kg in weight who had slide-confirmed acute P. falciparum malaria were included in the study, provided that they or their guardians gave fully informed written consent intheir own language, they were not pregnant, they had not received mefloquine in the previous 63 days and there were no other clinical or laboratory signs of severe illness and/or severe and complicated malaria [13]. If they gave written informed consent, they were allocated randomly to receive either the six-dose regimen of artemether-lumefantrine (Coartem® 20/120, Novartis Pharma AG, Basel, Switzerland) or mefloquine (Lariam®, Hoffman-La Roche, Basel, Switzerland) plus artesunate (Guilin Pharmaceutical Factory No.1, Guilin, China). At enrolment (Day 0), a medical history was obtained, a full physical examination was performed and blood was taken for quantitative parasite counts and routine haematology (finger prick blood sample for malaria smear and haematocrit). All information was recorded on a standard case record form. All patients were examined and blood smears were taken daily until they became aparasitaemic, and then weekly for 6 weeks. At each visit a questionnaire on adverse events was completed. A blood smear was also taken from any patient complaining of fever or symptoms compatible with malaria during the follow-up period. 
Parasite counts were determined on Giemsa-stained thick and thin blood films. The person-gametocyte-weeks were calculated per 1,000 person-weeks after excluding the episodes on admission and during treatment.\n\nBODY.PATIENTS AND METHODS.DRUG REGIMENS:\nComputerized randomization was in blocks of ten. Patients allocated to artemether-lumefantrine group (ALN) received the tablets at 0 and 8 hours and twice daily for the following 2 days. Artemether-lumefantrine was dispensed as a fixed dose combination tablet. Each tablet contained 20 mg of artemether and 120 mg of lumefantrine. The number of tablets was given according to the body weight. The minimum dosage for patients weighing less than 15 kg was one tablet per dose; patients between 15 and 24 kg received two tablets, those between 25 and 34 kg received three tablets and patients 35 kg and above were treated with four tablets per dose. Patients allocated to artesunate-mefloquine group (MAS3) received artesunate, 4 mg/kg oncedaily for 3 days (day 0 was the first day of treatment), plus mefloquine, 15 mg/kg on day 1 and 10 mg/kg on day 2. Each patient was given antipyretics and cooled by tepid sponging if the tympanic temperature was equal or above 37.5°C before drug administration. Drug administration was observed in all patients and if vomiting occurred in less than 30 min, administration of the full dose was repeated, if vomiting occurred between 30 and 60 min, half the dose was repeated. Patients treated with artemether-lumefantrine were given a glass of chocolate milk (200 ml) with each dose to increase absorption [14].\n\nBODY.PATIENTS AND METHODS.OUTCOME MEASURES:\nThe primary therapeutic outcome measure in this study was the incidence of microscopically and genotypically confirmedrecrudescent infections in both treatment groups by day 42. Parasite genotyping by the polymerase chain reaction (PCR) was used to distinguish recrudescent from newly acquired P. falciparum infections. P. falciparum infections were genotyped for allelic variation in three polymorphic antigen loci, merozoite surface proteins 1 and 2 (MSP-1 and MSP-2) and glutamate rich protein (GLURP), on admission and in case of parasite reappearance [15,16]. Secondary measures were the immediate treatment responses: parasite clearance, fever clearance, incidence of adverse events, and degree of anaemia. The sample size was calculated to detect a difference in failure rates of 7 % between the two regimens with 90% CI and 80 % power assuming a 20% drop out.\n\nBODY.PATIENTS AND METHODS.ADVERSE EVENTS:\nAdverse events were symptoms or signs that were not presenton admission and that developed after the start of treatment. All adverse events, including those probably related to malaria, were recording and compared among treatment groups. The rates of early vomiting (<1 h) after each dose and for each drug wererecorded and compared among the groups in the analysis.\n\nBODY.PATIENTS AND METHODS.MANAGEMENT OF RECRUDESCENT INFECTIONS:\nPatients with uncomplicated recrudescent infections were re-treated with artesunate, 2 mg/kg/day for 7 days; patients >8 years oldalso received doxycycline, 4 mg/kg/day for 7 days.\n\nBODY.PATIENTS AND METHODS.STATISTICAL ANALYSIS:\nData were analysed using SPSS for Windows, version 11. Categorical data were compared using the Chi-square test with Yates' correction or by Fisher's exact test, as appropriate. Continuous variables conforming to a normal distribution were compared using Student's t test. 
Data not normally distributed were log-transformed or compared using the Mann-Whitney U test. The relative risks were calculated using cross-tabulations. The rates of adverse events at three different periods (days 1–2, days 3–7 and days 14–42) were compared among treatment groups. For each of the three periods, the events were counted only once (e.g., if a patient vomited on day one and day two, this was counted as one adverse event). The PCR-adjusted cure rates were evaluated by survival analysis and compared using the log-rank test. Patients for whom PCR genotyping was either inconclusive or missing were censored in the survival analysis on the day of parasite reappearance. For all statistical tests the significance level (p) was set at 0.05.\n\nBODY.RESULTS:\nFour hundred and ninety patients with uncomplicated P. falciparum infections were enrolled between July 2001 and May 2002. The artesunate-mefloquine and artemether-lumefantrine groups included 245 patients each, the age range for all patients was 2–72 years. Baseline characteristics were similar in both groups (Table 1). In total, 484 patients (242 each group) were included in the final evaluation. Six patients (three in each group) were excluded for the following reasons; withdrew consent (1), non-compliance e.g. failure to complete trial treatment course (4), failure to meet protocol criteria (1). Overall compliance was good; around 99% of the patients in the study (481 of 484) were seen at the day seven scheduled visit, 96.1% (465 patients) were seen at day 28 and 93.4% (452 patients) were seen at day 42. Table 1 Demographic and baseline characteristics ALN* (n = 245) MAS3** (n = 245) Maela 100 100 Mawker Tai 145 145 Males, no. (%) 172 (70) 164 (67) Age, years Mean (SD) 23.2 (14.6) 23.6 (15.1) Range 3–70 2–72 Age group, no. (%) <5 10 (4.1) 8 (3.3) 5–14 81 (33.1) 75 (30.6) >14 154 (62.9) 162 (66.1) Weight, kg Mean (SD) 42.4 (14.5) 42.3 (15.0) Range 10–77 10–78 Temperature, °C Mean (SD) 37.7 (1.0) 37.8 (1.1) Range 35.6–40.5 35.9–41.0 Fever a , no (%) 136 (55.5) 142 (58.0) Haematocrit, % Mean (SD) 36.7 (5.9) 36.4 (5.8) Range 21–52 20–51 Geometric mean (range) 8,047 7,570 parasite count (μl -1 ) (32–198,789) (16–198,789) Hepatomegaly, no (%) 48 (19.6) 50 (20.4) Splenomegaly, no (%) 70 (28.6) 57 (23.3) * ALN = artemether-lumefantrine, ** MAS3 = artesunate-mefloquine a Tympanic temperature ≥ 37.5°C \n\nBODY.RESULTS.CLINICAL AND PARASITOLOGICAL FINDINGS:\nThe initial responses to the two treatment groups were similar. None of the patients developed severe malaria. On admission, 55.0% (133/242) of the patients on ALN and 57.9% (140/242) of the patients in the MAS3 group had a tympanic temperature ≥ 37.5°C. All except three patients had a normal temperature on day 3 (2 in ALN and 1 in MAS3). There was no difference in fever clearance times between the two treatment groups (Figure 1). Parasite clearance times were short and most patients cleared their parasitaemia by day two. Figure 2 shows the percentage of patients with positive slide for asexual P. falciparum in both groups. By day three, four (1.8%) of 227 patients in the artemether-lumefantrine recipients and three (1.3%) of 238 artesunate-mefloquine recipients still had a positive blood film (P > 0.05). Overall, 12.3 % of patients were anaemic (haematocrit <30%) on admission, 10.8% in ALN group and 13.8% in MAS3 group (P = 0.33). 
The mean (SD) decrease in haematocrit value at day seven from baseline was greater in the group receiving MAS3 than in the ALN group: 9.3% (SD,11.5%; 95% CI, 7.7% to 10.9%) compared with 6.7% (SD, 11.4%; 95% CI, 5.1 to 8.3%) respectively (P = 0.023). Figure 1Percentage of patients with fever (temperature > 37.5°C). Figure 2Percentage of patients with positive slide for asexual P. falciparum forms. During the 42-day follow-up period, 27 new P. falciparum infections occurred among artemether-lumefantrine and 24 among artesunate-mefloquine recipients (P > 0.05) (Table 2). The PCR-adjusted cure rates by day 42 were 98.8% (95% CI, 96.4% to 99.6%) in the ALN group and 96.3% (95% CI, 93.1% to 98.0%) in the MAS3 group (P = 0.08). PCR confirmed treatment failures were more likely in children aged below 15 years than in adults (RR, 5.1; 95% CI, 1.4–18.7; P = 0.006). The mean age was 13.6 years (n = 12; SD = 8.5) in patients with treatment failure and 23.7 years (n = 438; SD = 15.2) in successfully treated patients. In this trial, only age group was independently associated with treatment failure, but not other factors e.g. higher parasitaemia (>10,000/μL); anaemia (haematocrit <30%); fever (tympanic temp ≥ 37.5°C) on admission; sites; treatment groups; early vomiting (within one hour following drug administration). The mediantime to recrudescence was comparable for MAS3 group (21 days; n = 9; range, 14–28 days) and ALN group (28 days; n = 3; range, 21–42 days; P > 0.05).\n\nBODY.RESULTS.OTHER PARASITOLOGICAL FINDINGS:\nOf 452 patients, 119 (26.3%) had P. vivax parasitaemia detected during follow up. There were significantly fewer cases of vivax malaria in the MAS3 group (29 of 227) than in the ALN group (90 of 225) (P < 0.001). The median time to appearance of P. vivax parasitaemia was longer in the MAS3 group (40 days; range, 13–43 days) than in the ALN group (28 days; range, 14–43 days; P < 0.001). Twenty patients (8.3%) in the artemether-lumefantrine group and 19 (7.9%) in the artesunate-mefloquine group had gametocytes detected during the first 3 days. All except one patient in ALN group cleared gametocytes within first week after start of treatment. After excluding these, gametocytes developed (between day 7 and 42) in 1.2% (3/241) of ALN group and 1.3 (3/240) of MAS3 group. The person-gametocyte weeks were low and similar: 2.7 (95% CI, 0.6–7.8) per 1000 person-weeks for both groups.\n\nBODY.RESULTS.ADVERSE EVENTS:\nBoth treatment regimens were well tolerated. No serious adverse events were reported. Overall, 5/242(2.1%) of the patients vomited one or more doses of medication in the ALN group and 2/242(0.8%) of the MAS3 treated patients (RR, 2.5; 95% CI, 0.5–12.7; P = 0.45). The rates of early vomiting (within one hour) of the drugs were very low (around 2%) and did not differ among groups (one in each group on day 2). The most commonly reported and possibly drug-related adverse events to both combination therapies were effects on the gastrointestinal (abdominal pain, anorexia, nausea, diarrhoea and late vomiting e.g. >1 h after administration of treatment) and central nervous system (headache, dizziness). Figure 3 shows the proportions of possibly drug-related adverse events of those who did not have those symptoms at admission during follow-up in both groups. Overall, there were less adverse events in ALN group compared to MAS3, though the differences were not statistically significant. 
Figure 3Possibly drug-related adverse events (day 1 – day 42).\n\nBODY.DISCUSSION:\nThe loss of affordable effective antimalarial drugs to resistance represents a major threat to the people of malaria endemic countries [1]. Using ineffective drugs with high failure rates kills many and is unacceptable. A clear treatment policy and readiness to use the new, more effective artemisinin-based combination therapies (ACT) are crucial [17]. Along the north-western border of Thailand, where highly multi-drug resistant isolates of P. falciparum are found, artesunate-mefloquine combination therapy (MAS3) is the standard treatment regimen for uncomplicated falciparum malaria [9]. Early diagnosis and treatment with an artemisinin-based drug combination of very high efficacy that reduces gametocyte carriage, has led to a marked decline in the incidence of falciparum malaria and a reversal of the previous trend toward increasing mefloquine resistance [18]. Artemisinin derivatives will ensure rapid clinical and parasitological responses and are remarkably effective, hence clinical deterioration is extremely unusual. To optimize therapy, combination with a slower-acting antimalarial drug is required. Systematic use of ACT would help to delay the emergence of resistance if the drug was used widely. However, the continued use of mefloquine monotherapy or with only 2 days of artesunate in this region, provides persistent selective pressure to continue the evolution of mefloquine resistance, which could diminish the efficacy of the artesunate-mefloquine combination and that of artemether-lumefantrine. Artemether-lumefantrine has been introduced recently for oral treatment of uncomplicated falciparum malaria. In Thailand, several trials have been conducted with this combination. The six-dose schedule provides sustained blood lumefantrine levels and thus improved cure rates [11]. Lumefantrine is highly lipophilic and the oral bioavailability varies considerably between individuals and increases greatly if the drug is administered after a meal rich in fat [19]. The present trial reconfirmed the efficacy of the six-dose regimen of artemether-lumefantrine given over three days [20]. Both treatments in this study cleared fever and parasitaemia promptly and reliably. Both treatments were well tolerated and highly effective. Importantly 2/3 less P. vivax infections and 12 days longer median time to appearance of P. vivax parasitaemia in the MAS3 group were most probably due to the longer terminal half-life of mefloquine compared to lumefantrine [21,22]. More data on the safety and efficacy of artemether-lumefantrine in very small children and pregnant women are needed.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nRH carried out the study and analyzed the data. RH, EAA, RMG, PS, TJ, NJW, FN conceived the study, participated in its design and co-ordination and contributed to draft the manuscript. LP, KLT assisted in collection of data. AB performed the PCR experiments. All authors read and approved the final manuscript. Table 2 Treatment response. Treatment group ALN* (n = 245) MAS3** (n = 245) Compliance, no. (%) Completed day 7 241 (99.6%) 240 (99.2%) Completed day 28 232 (95.9%) 233 (96.3%) Completed day 42 225 (93.0%) 227 (93.4%) Cumulative proportion of patients with clinical failure, no (%) Day 7 0 (0) 0 (0) Day 28 13 (5.6) 14 (6.0) Day 42 27 (12.0) 24 (10.6) PCR, no. Novel 23 14 Recrudescent 2 8 Novel + recrudescent 1 1 Indeterminate/missing 1 1 PCR-adjusted cure rates, no. 
(%) Day 7 0 (100) 0 (100) Day 28 2 (99.1) 9(96.1) Day 42 3 (98.8) 9 (96.3) * ALN = artemether-lumefantrine, ** MAS3 = artesunate-mefloquine\n\n**Question:** Compared to Mefloquine-artesunate what was the result of Artemether-lumefantrine on Fever clearance times?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference |
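Each `completion` wraps the verdict in an `**Answer:** ...` prefix, while `Label` stores the bare class string. The sketch below shows one way a generated completion could be normalized back to the three label classes for scoring; the helper name and regex are illustrative, and only the class strings and answer prefix come from the rows shown here.

```python
import re

LABELS = (
    "significantly increase",
    "no significant difference",
    "significantly decrease",
)

def extract_label(completion_text):
    """Map a generated '**Answer:** ...' string onto one of the three label classes."""
    # Drop the markdown answer prefix if present, then compare case-insensitively.
    text = re.sub(r"^\s*\*\*Answer:\*\*\s*", "", completion_text.strip())
    text = text.strip().lower()
    for label in LABELS:
        if text.startswith(label):
            return label
    return None  # output did not match any known class

# Checked against the completion shown in row 518 above.
assert extract_label("**Answer:** no significant difference\n") == "no significant difference"
```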
520 |
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: A randomized trial of artemether-lumefantrine \n\n ABSTRACT.BACKGROUND:\nThe use of antimalarial drug combinations with artemisinin derivatives is recommended to overcome drug resistance in Plasmodium falciparum. The fixed combination of oral artemether-lumefantrine, an artemisinin combination therapy (ACT) is highly effective and well tolerated. It is the only registered fixed combination containing an artemisinin. The trial presented here was conducted to monitor the efficacy of the six-dose regimen of artemether-lumefantrine (ALN) in an area of multi-drug resistance, along the Thai-Myanmar border.\n\nABSTRACT.METHODS:\nThe trial was an open-label, two-arm, randomized study comparing artemether-lumefantrine and mefloquine-artesunate for the treatment of uncomplicated falciparum malaria with 42 days of follow up. Parasite genotyping by polymerase chain reaction (PCR) was used to distinguish recrudescent from newly acquired P. falciparum infections. The PCR adjusted cure rates were evaluated by survival analysis.\n\nABSTRACT.RESULTS:\nIn 2001–2002 a total of 490 patients with slide confirmed uncomplicated P. falciparum malaria were randomly assigned to receive artemether-lumefantrine (n = 245) or artesunate and mefloquine (n = 245) and were followed for 42 days. All patients had rapid initial clinical and parasitological responses. In both groups, the PCR adjusted cure rates by day 42 were high: 98.8% (95% CI 96.4, 99.6%) for artemether-lumefantrine and 96.3% (95% CI 93.1, 98.0%) for artesunate-mefloquine. Both regimens were very well tolerated with no serious adverse events observed attributable to either combination.\n\nABSTRACT.CONCLUSION:\nOverall, this study confirms that these two artemisinin-based combinations remain highly effective and result in equivalent therapeutic responses in the treatment of highly drug-resistant falciparum malaria.\n\nBODY.INTRODUCTION:\nMulti-drug resistance of Plasmodium falciparum is a major health problem in many countries and the number of drugs available, effective and affordable is very limited [1]. Along the Thai-Myanmar border, P. falciparum has developed resistance to almost all available antimalarials [2]. As in tuberculosis and HIV where resistance to drugs is a serious issue, combination therapy has been applied to malaria treatment [3]. The use of antimalarial drug combinations with artemisinin derivatives has been advocated and is now implemented in many countries [4]. An extensive amount of information on efficacy and safety of mefloquine has been reported and reviewed [5]. Artemisinin or Qinghaosu is an extract of the medical plant Qinghao (Artemisia annua), which together with its derivatives, artesunate and artemether are the most active antimalarial compounds to date [6]. 
The artemisinin derivatives have a rapid onset of therapeutic effect, where a single dose can reduce the parasite biomass by a factor of approximately 104 every 48 hours. In addition, they have a very short terminal elimination half-life of less than 2 hours [7]. Previous studies showed that once-daily administration with artemisinin derivatives produced equivalent cure rates to more frequent administration [8]. A three-day course of artesunate combined with high dose mefloquine has become the standard treatment combination for P. falciparum infections in Thailand [9]. Oral artesunate-mefloquine is the most widely used combination. More recently, a fixed combination of oral artemether-lumefantrine (formerly known as benflumetol) has become available. Artemether is a methyl-ether derivative of artemisinin. Lumefantrine is a racemic fluorine derivative with high blood schizontocidal activity [10]. Both artemisinin combination therapies (ACT) are highly effective and well tolerated [11]. However, resistance to mefloquine and/or to lumefantrine, would compromise both combinations. Therefore it is important to monitor the therapeutic efficacy and thus provide advance warning in case of change. The trial presented here was conducted to monitor the efficacy of the six-dose regimen of artemether-lumefantrine combination given over three days for the treatment of uncomplicated P. falciparum infections in adults and children on the western border of Thailand.\n\nBODY.PATIENTS AND METHODS:\nThis study was conducted in the Maela and Mawker Tai malaria clinics of the Shoklo Malaria Research Unit (Mae Sot, Thailand)between July 2001 and June 2002. Patients were recruited from two populations: displaced people of the Karen ethnic minority and migrant workers living on the western border of Thailand. This is an area of low and unstable transmission of Plasmodium vivax and multi-drug-resistant P. falciparum [12]. The trial was an open-label, two-arm, randomized study comparing artemether-lumefantrine andmefloquine-artesunate. This study was approved by the Ethical and Scientific Committees of the Faculty of Tropical Medicine, Mahidol University.\n\nBODY.PATIENTS AND METHODS.PROCEDURES:\nPatients >10 kg in weight who had slide-confirmed acute P. falciparum malaria were included in the study, provided that they or their guardians gave fully informed written consent intheir own language, they were not pregnant, they had not received mefloquine in the previous 63 days and there were no other clinical or laboratory signs of severe illness and/or severe and complicated malaria [13]. If they gave written informed consent, they were allocated randomly to receive either the six-dose regimen of artemether-lumefantrine (Coartem® 20/120, Novartis Pharma AG, Basel, Switzerland) or mefloquine (Lariam®, Hoffman-La Roche, Basel, Switzerland) plus artesunate (Guilin Pharmaceutical Factory No.1, Guilin, China). At enrolment (Day 0), a medical history was obtained, a full physical examination was performed and blood was taken for quantitative parasite counts and routine haematology (finger prick blood sample for malaria smear and haematocrit). All information was recorded on a standard case record form. All patients were examined and blood smears were taken daily until they became aparasitaemic, and then weekly for 6 weeks. At each visit a questionnaire on adverse events was completed. A blood smear was also taken from any patient complaining of fever or symptoms compatible with malaria during the follow-up period. 
Parasite counts were determined on Giemsa-stained thick and thin blood films. The person-gametocyte-weeks were calculated per 1,000 person-weeks after excluding the episodes on admission and during treatment.\n\nBODY.PATIENTS AND METHODS.DRUG REGIMENS:\nComputerized randomization was in blocks of ten. Patients allocated to artemether-lumefantrine group (ALN) received the tablets at 0 and 8 hours and twice daily for the following 2 days. Artemether-lumefantrine was dispensed as a fixed dose combination tablet. Each tablet contained 20 mg of artemether and 120 mg of lumefantrine. The number of tablets was given according to the body weight. The minimum dosage for patients weighing less than 15 kg was one tablet per dose; patients between 15 and 24 kg received two tablets, those between 25 and 34 kg received three tablets and patients 35 kg and above were treated with four tablets per dose. Patients allocated to artesunate-mefloquine group (MAS3) received artesunate, 4 mg/kg oncedaily for 3 days (day 0 was the first day of treatment), plus mefloquine, 15 mg/kg on day 1 and 10 mg/kg on day 2. Each patient was given antipyretics and cooled by tepid sponging if the tympanic temperature was equal or above 37.5°C before drug administration. Drug administration was observed in all patients and if vomiting occurred in less than 30 min, administration of the full dose was repeated, if vomiting occurred between 30 and 60 min, half the dose was repeated. Patients treated with artemether-lumefantrine were given a glass of chocolate milk (200 ml) with each dose to increase absorption [14].\n\nBODY.PATIENTS AND METHODS.OUTCOME MEASURES:\nThe primary therapeutic outcome measure in this study was the incidence of microscopically and genotypically confirmedrecrudescent infections in both treatment groups by day 42. Parasite genotyping by the polymerase chain reaction (PCR) was used to distinguish recrudescent from newly acquired P. falciparum infections. P. falciparum infections were genotyped for allelic variation in three polymorphic antigen loci, merozoite surface proteins 1 and 2 (MSP-1 and MSP-2) and glutamate rich protein (GLURP), on admission and in case of parasite reappearance [15,16]. Secondary measures were the immediate treatment responses: parasite clearance, fever clearance, incidence of adverse events, and degree of anaemia. The sample size was calculated to detect a difference in failure rates of 7 % between the two regimens with 90% CI and 80 % power assuming a 20% drop out.\n\nBODY.PATIENTS AND METHODS.ADVERSE EVENTS:\nAdverse events were symptoms or signs that were not presenton admission and that developed after the start of treatment. All adverse events, including those probably related to malaria, were recording and compared among treatment groups. The rates of early vomiting (<1 h) after each dose and for each drug wererecorded and compared among the groups in the analysis.\n\nBODY.PATIENTS AND METHODS.MANAGEMENT OF RECRUDESCENT INFECTIONS:\nPatients with uncomplicated recrudescent infections were re-treated with artesunate, 2 mg/kg/day for 7 days; patients >8 years oldalso received doxycycline, 4 mg/kg/day for 7 days.\n\nBODY.PATIENTS AND METHODS.STATISTICAL ANALYSIS:\nData were analysed using SPSS for Windows, version 11. Categorical data were compared using the Chi-square test with Yates' correction or by Fisher's exact test, as appropriate. Continuous variables conforming to a normal distribution were compared using Student's t test. 
Data not normally distributed were log-transformed or compared using the Mann-Whitney U test. The relative risks were calculated using cross-tabulations. The rates of adverse events at three different periods (days 1–2, days 3–7 and days 14–42) were compared among treatment groups. For each of the three periods, the events were counted only once (e.g., if a patient vomited on day one and day two, this was counted as one adverse event). The PCR-adjusted cure rates were evaluated by survival analysis and compared using the log-rank test. Patients for whom PCR genotyping was either inconclusive or missing were censored in the survival analysis on the day of parasite reappearance. For all statistical tests the significance level (p) was set at 0.05.\n\nBODY.RESULTS:\nFour hundred and ninety patients with uncomplicated P. falciparum infections were enrolled between July 2001 and May 2002. The artesunate-mefloquine and artemether-lumefantrine groups included 245 patients each, the age range for all patients was 2–72 years. Baseline characteristics were similar in both groups (Table 1). In total, 484 patients (242 each group) were included in the final evaluation. Six patients (three in each group) were excluded for the following reasons; withdrew consent (1), non-compliance e.g. failure to complete trial treatment course (4), failure to meet protocol criteria (1). Overall compliance was good; around 99% of the patients in the study (481 of 484) were seen at the day seven scheduled visit, 96.1% (465 patients) were seen at day 28 and 93.4% (452 patients) were seen at day 42. Table 1 Demographic and baseline characteristics ALN* (n = 245) MAS3** (n = 245) Maela 100 100 Mawker Tai 145 145 Males, no. (%) 172 (70) 164 (67) Age, years Mean (SD) 23.2 (14.6) 23.6 (15.1) Range 3–70 2–72 Age group, no. (%) <5 10 (4.1) 8 (3.3) 5–14 81 (33.1) 75 (30.6) >14 154 (62.9) 162 (66.1) Weight, kg Mean (SD) 42.4 (14.5) 42.3 (15.0) Range 10–77 10–78 Temperature, °C Mean (SD) 37.7 (1.0) 37.8 (1.1) Range 35.6–40.5 35.9–41.0 Fever a , no (%) 136 (55.5) 142 (58.0) Haematocrit, % Mean (SD) 36.7 (5.9) 36.4 (5.8) Range 21–52 20–51 Geometric mean (range) 8,047 7,570 parasite count (μl -1 ) (32–198,789) (16–198,789) Hepatomegaly, no (%) 48 (19.6) 50 (20.4) Splenomegaly, no (%) 70 (28.6) 57 (23.3) * ALN = artemether-lumefantrine, ** MAS3 = artesunate-mefloquine a Tympanic temperature ≥ 37.5°C \n\nBODY.RESULTS.CLINICAL AND PARASITOLOGICAL FINDINGS:\nThe initial responses to the two treatment groups were similar. None of the patients developed severe malaria. On admission, 55.0% (133/242) of the patients on ALN and 57.9% (140/242) of the patients in the MAS3 group had a tympanic temperature ≥ 37.5°C. All except three patients had a normal temperature on day 3 (2 in ALN and 1 in MAS3). There was no difference in fever clearance times between the two treatment groups (Figure 1). Parasite clearance times were short and most patients cleared their parasitaemia by day two. Figure 2 shows the percentage of patients with positive slide for asexual P. falciparum in both groups. By day three, four (1.8%) of 227 patients in the artemether-lumefantrine recipients and three (1.3%) of 238 artesunate-mefloquine recipients still had a positive blood film (P > 0.05). Overall, 12.3 % of patients were anaemic (haematocrit <30%) on admission, 10.8% in ALN group and 13.8% in MAS3 group (P = 0.33). 
The mean (SD) decrease in haematocrit value at day seven from baseline was greater in the group receiving MAS3 than in the ALN group: 9.3% (SD,11.5%; 95% CI, 7.7% to 10.9%) compared with 6.7% (SD, 11.4%; 95% CI, 5.1 to 8.3%) respectively (P = 0.023). Figure 1Percentage of patients with fever (temperature > 37.5°C). Figure 2Percentage of patients with positive slide for asexual P. falciparum forms. During the 42-day follow-up period, 27 new P. falciparum infections occurred among artemether-lumefantrine and 24 among artesunate-mefloquine recipients (P > 0.05) (Table 2). The PCR-adjusted cure rates by day 42 were 98.8% (95% CI, 96.4% to 99.6%) in the ALN group and 96.3% (95% CI, 93.1% to 98.0%) in the MAS3 group (P = 0.08). PCR confirmed treatment failures were more likely in children aged below 15 years than in adults (RR, 5.1; 95% CI, 1.4–18.7; P = 0.006). The mean age was 13.6 years (n = 12; SD = 8.5) in patients with treatment failure and 23.7 years (n = 438; SD = 15.2) in successfully treated patients. In this trial, only age group was independently associated with treatment failure, but not other factors e.g. higher parasitaemia (>10,000/μL); anaemia (haematocrit <30%); fever (tympanic temp ≥ 37.5°C) on admission; sites; treatment groups; early vomiting (within one hour following drug administration). The mediantime to recrudescence was comparable for MAS3 group (21 days; n = 9; range, 14–28 days) and ALN group (28 days; n = 3; range, 21–42 days; P > 0.05).\n\nBODY.RESULTS.OTHER PARASITOLOGICAL FINDINGS:\nOf 452 patients, 119 (26.3%) had P. vivax parasitaemia detected during follow up. There were significantly fewer cases of vivax malaria in the MAS3 group (29 of 227) than in the ALN group (90 of 225) (P < 0.001). The median time to appearance of P. vivax parasitaemia was longer in the MAS3 group (40 days; range, 13–43 days) than in the ALN group (28 days; range, 14–43 days; P < 0.001). Twenty patients (8.3%) in the artemether-lumefantrine group and 19 (7.9%) in the artesunate-mefloquine group had gametocytes detected during the first 3 days. All except one patient in ALN group cleared gametocytes within first week after start of treatment. After excluding these, gametocytes developed (between day 7 and 42) in 1.2% (3/241) of ALN group and 1.3 (3/240) of MAS3 group. The person-gametocyte weeks were low and similar: 2.7 (95% CI, 0.6–7.8) per 1000 person-weeks for both groups.\n\nBODY.RESULTS.ADVERSE EVENTS:\nBoth treatment regimens were well tolerated. No serious adverse events were reported. Overall, 5/242(2.1%) of the patients vomited one or more doses of medication in the ALN group and 2/242(0.8%) of the MAS3 treated patients (RR, 2.5; 95% CI, 0.5–12.7; P = 0.45). The rates of early vomiting (within one hour) of the drugs were very low (around 2%) and did not differ among groups (one in each group on day 2). The most commonly reported and possibly drug-related adverse events to both combination therapies were effects on the gastrointestinal (abdominal pain, anorexia, nausea, diarrhoea and late vomiting e.g. >1 h after administration of treatment) and central nervous system (headache, dizziness). Figure 3 shows the proportions of possibly drug-related adverse events of those who did not have those symptoms at admission during follow-up in both groups. Overall, there were less adverse events in ALN group compared to MAS3, though the differences were not statistically significant. 
Figure 3Possibly drug-related adverse events (day 1 – day 42).\n\nBODY.DISCUSSION:\nThe loss of affordable effective antimalarial drugs to resistance represents a major threat to the people of malaria endemic countries [1]. Using ineffective drugs with high failure rates kills many and is unacceptable. A clear treatment policy and readiness to use the new, more effective artemisinin-based combination therapies (ACT) are crucial [17]. Along the north-western border of Thailand, where highly multi-drug resistant isolates of P. falciparum are found, artesunate-mefloquine combination therapy (MAS3) is the standard treatment regimen for uncomplicated falciparum malaria [9]. Early diagnosis and treatment with an artemisinin-based drug combination of very high efficacy that reduces gametocyte carriage, has led to a marked decline in the incidence of falciparum malaria and a reversal of the previous trend toward increasing mefloquine resistance [18]. Artemisinin derivatives will ensure rapid clinical and parasitological responses and are remarkably effective, hence clinical deterioration is extremely unusual. To optimize therapy, combination with a slower-acting antimalarial drug is required. Systematic use of ACT would help to delay the emergence of resistance if the drug was used widely. However, the continued use of mefloquine monotherapy or with only 2 days of artesunate in this region, provides persistent selective pressure to continue the evolution of mefloquine resistance, which could diminish the efficacy of the artesunate-mefloquine combination and that of artemether-lumefantrine. Artemether-lumefantrine has been introduced recently for oral treatment of uncomplicated falciparum malaria. In Thailand, several trials have been conducted with this combination. The six-dose schedule provides sustained blood lumefantrine levels and thus improved cure rates [11]. Lumefantrine is highly lipophilic and the oral bioavailability varies considerably between individuals and increases greatly if the drug is administered after a meal rich in fat [19]. The present trial reconfirmed the efficacy of the six-dose regimen of artemether-lumefantrine given over three days [20]. Both treatments in this study cleared fever and parasitaemia promptly and reliably. Both treatments were well tolerated and highly effective. Importantly 2/3 less P. vivax infections and 12 days longer median time to appearance of P. vivax parasitaemia in the MAS3 group were most probably due to the longer terminal half-life of mefloquine compared to lumefantrine [21,22]. More data on the safety and efficacy of artemether-lumefantrine in very small children and pregnant women are needed.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nRH carried out the study and analyzed the data. RH, EAA, RMG, PS, TJ, NJW, FN conceived the study, participated in its design and co-ordination and contributed to draft the manuscript. LP, KLT assisted in collection of data. AB performed the PCR experiments. All authors read and approved the final manuscript. Table 2 Treatment response. Treatment group ALN* (n = 245) MAS3** (n = 245) Compliance, no. (%) Completed day 7 241 (99.6%) 240 (99.2%) Completed day 28 232 (95.9%) 233 (96.3%) Completed day 42 225 (93.0%) 227 (93.4%) Cumulative proportion of patients with clinical failure, no (%) Day 7 0 (0) 0 (0) Day 28 13 (5.6) 14 (6.0) Day 42 27 (12.0) 24 (10.6) PCR, no. Novel 23 14 Recrudescent 2 8 Novel + recrudescent 1 1 Indeterminate/missing 1 1 PCR-adjusted cure rates, no. 
(%) Day 7 0 (100) 0 (100) Day 28 2 (99.1) 9(96.1) Day 42 3 (98.8) 9 (96.3) * ALN = artemether-lumefantrine, ** MAS3 = artesunate-mefloquine\n\n**Question:** Compared to Mefloquine-artesunate what was the result of Artemether-lumefantrine on New P. falciparum infections during the 42-day follow-up?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference |
521 |
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: A randomized trial of artemether-lumefantrine \n\n ABSTRACT.BACKGROUND:\nThe use of antimalarial drug combinations with artemisinin derivatives is recommended to overcome drug resistance in Plasmodium falciparum. The fixed combination of oral artemether-lumefantrine, an artemisinin combination therapy (ACT) is highly effective and well tolerated. It is the only registered fixed combination containing an artemisinin. The trial presented here was conducted to monitor the efficacy of the six-dose regimen of artemether-lumefantrine (ALN) in an area of multi-drug resistance, along the Thai-Myanmar border.\n\nABSTRACT.METHODS:\nThe trial was an open-label, two-arm, randomized study comparing artemether-lumefantrine and mefloquine-artesunate for the treatment of uncomplicated falciparum malaria with 42 days of follow up. Parasite genotyping by polymerase chain reaction (PCR) was used to distinguish recrudescent from newly acquired P. falciparum infections. The PCR adjusted cure rates were evaluated by survival analysis.\n\nABSTRACT.RESULTS:\nIn 2001–2002 a total of 490 patients with slide confirmed uncomplicated P. falciparum malaria were randomly assigned to receive artemether-lumefantrine (n = 245) or artesunate and mefloquine (n = 245) and were followed for 42 days. All patients had rapid initial clinical and parasitological responses. In both groups, the PCR adjusted cure rates by day 42 were high: 98.8% (95% CI 96.4, 99.6%) for artemether-lumefantrine and 96.3% (95% CI 93.1, 98.0%) for artesunate-mefloquine. Both regimens were very well tolerated with no serious adverse events observed attributable to either combination.\n\nABSTRACT.CONCLUSION:\nOverall, this study confirms that these two artemisinin-based combinations remain highly effective and result in equivalent therapeutic responses in the treatment of highly drug-resistant falciparum malaria.\n\nBODY.INTRODUCTION:\nMulti-drug resistance of Plasmodium falciparum is a major health problem in many countries and the number of drugs available, effective and affordable is very limited [1]. Along the Thai-Myanmar border, P. falciparum has developed resistance to almost all available antimalarials [2]. As in tuberculosis and HIV where resistance to drugs is a serious issue, combination therapy has been applied to malaria treatment [3]. The use of antimalarial drug combinations with artemisinin derivatives has been advocated and is now implemented in many countries [4]. An extensive amount of information on efficacy and safety of mefloquine has been reported and reviewed [5]. Artemisinin or Qinghaosu is an extract of the medical plant Qinghao (Artemisia annua), which together with its derivatives, artesunate and artemether are the most active antimalarial compounds to date [6]. 
The artemisinin derivatives have a rapid onset of therapeutic effect, where a single dose can reduce the parasite biomass by a factor of approximately 104 every 48 hours. In addition, they have a very short terminal elimination half-life of less than 2 hours [7]. Previous studies showed that once-daily administration with artemisinin derivatives produced equivalent cure rates to more frequent administration [8]. A three-day course of artesunate combined with high dose mefloquine has become the standard treatment combination for P. falciparum infections in Thailand [9]. Oral artesunate-mefloquine is the most widely used combination. More recently, a fixed combination of oral artemether-lumefantrine (formerly known as benflumetol) has become available. Artemether is a methyl-ether derivative of artemisinin. Lumefantrine is a racemic fluorine derivative with high blood schizontocidal activity [10]. Both artemisinin combination therapies (ACT) are highly effective and well tolerated [11]. However, resistance to mefloquine and/or to lumefantrine, would compromise both combinations. Therefore it is important to monitor the therapeutic efficacy and thus provide advance warning in case of change. The trial presented here was conducted to monitor the efficacy of the six-dose regimen of artemether-lumefantrine combination given over three days for the treatment of uncomplicated P. falciparum infections in adults and children on the western border of Thailand.\n\nBODY.PATIENTS AND METHODS:\nThis study was conducted in the Maela and Mawker Tai malaria clinics of the Shoklo Malaria Research Unit (Mae Sot, Thailand)between July 2001 and June 2002. Patients were recruited from two populations: displaced people of the Karen ethnic minority and migrant workers living on the western border of Thailand. This is an area of low and unstable transmission of Plasmodium vivax and multi-drug-resistant P. falciparum [12]. The trial was an open-label, two-arm, randomized study comparing artemether-lumefantrine andmefloquine-artesunate. This study was approved by the Ethical and Scientific Committees of the Faculty of Tropical Medicine, Mahidol University.\n\nBODY.PATIENTS AND METHODS.PROCEDURES:\nPatients >10 kg in weight who had slide-confirmed acute P. falciparum malaria were included in the study, provided that they or their guardians gave fully informed written consent intheir own language, they were not pregnant, they had not received mefloquine in the previous 63 days and there were no other clinical or laboratory signs of severe illness and/or severe and complicated malaria [13]. If they gave written informed consent, they were allocated randomly to receive either the six-dose regimen of artemether-lumefantrine (Coartem® 20/120, Novartis Pharma AG, Basel, Switzerland) or mefloquine (Lariam®, Hoffman-La Roche, Basel, Switzerland) plus artesunate (Guilin Pharmaceutical Factory No.1, Guilin, China). At enrolment (Day 0), a medical history was obtained, a full physical examination was performed and blood was taken for quantitative parasite counts and routine haematology (finger prick blood sample for malaria smear and haematocrit). All information was recorded on a standard case record form. All patients were examined and blood smears were taken daily until they became aparasitaemic, and then weekly for 6 weeks. At each visit a questionnaire on adverse events was completed. A blood smear was also taken from any patient complaining of fever or symptoms compatible with malaria during the follow-up period. 
Parasite counts were determined on Giemsa-stained thick and thin blood films. The person-gametocyte-weeks were calculated per 1,000 person-weeks after excluding the episodes on admission and during treatment.\n\nBODY.PATIENTS AND METHODS.DRUG REGIMENS:\nComputerized randomization was in blocks of ten. Patients allocated to artemether-lumefantrine group (ALN) received the tablets at 0 and 8 hours and twice daily for the following 2 days. Artemether-lumefantrine was dispensed as a fixed dose combination tablet. Each tablet contained 20 mg of artemether and 120 mg of lumefantrine. The number of tablets was given according to the body weight. The minimum dosage for patients weighing less than 15 kg was one tablet per dose; patients between 15 and 24 kg received two tablets, those between 25 and 34 kg received three tablets and patients 35 kg and above were treated with four tablets per dose. Patients allocated to artesunate-mefloquine group (MAS3) received artesunate, 4 mg/kg oncedaily for 3 days (day 0 was the first day of treatment), plus mefloquine, 15 mg/kg on day 1 and 10 mg/kg on day 2. Each patient was given antipyretics and cooled by tepid sponging if the tympanic temperature was equal or above 37.5°C before drug administration. Drug administration was observed in all patients and if vomiting occurred in less than 30 min, administration of the full dose was repeated, if vomiting occurred between 30 and 60 min, half the dose was repeated. Patients treated with artemether-lumefantrine were given a glass of chocolate milk (200 ml) with each dose to increase absorption [14].\n\nBODY.PATIENTS AND METHODS.OUTCOME MEASURES:\nThe primary therapeutic outcome measure in this study was the incidence of microscopically and genotypically confirmedrecrudescent infections in both treatment groups by day 42. Parasite genotyping by the polymerase chain reaction (PCR) was used to distinguish recrudescent from newly acquired P. falciparum infections. P. falciparum infections were genotyped for allelic variation in three polymorphic antigen loci, merozoite surface proteins 1 and 2 (MSP-1 and MSP-2) and glutamate rich protein (GLURP), on admission and in case of parasite reappearance [15,16]. Secondary measures were the immediate treatment responses: parasite clearance, fever clearance, incidence of adverse events, and degree of anaemia. The sample size was calculated to detect a difference in failure rates of 7 % between the two regimens with 90% CI and 80 % power assuming a 20% drop out.\n\nBODY.PATIENTS AND METHODS.ADVERSE EVENTS:\nAdverse events were symptoms or signs that were not presenton admission and that developed after the start of treatment. All adverse events, including those probably related to malaria, were recording and compared among treatment groups. The rates of early vomiting (<1 h) after each dose and for each drug wererecorded and compared among the groups in the analysis.\n\nBODY.PATIENTS AND METHODS.MANAGEMENT OF RECRUDESCENT INFECTIONS:\nPatients with uncomplicated recrudescent infections were re-treated with artesunate, 2 mg/kg/day for 7 days; patients >8 years oldalso received doxycycline, 4 mg/kg/day for 7 days.\n\nBODY.PATIENTS AND METHODS.STATISTICAL ANALYSIS:\nData were analysed using SPSS for Windows, version 11. Categorical data were compared using the Chi-square test with Yates' correction or by Fisher's exact test, as appropriate. Continuous variables conforming to a normal distribution were compared using Student's t test. 
Data not normally distributed were log-transformed or compared using the Mann-Whitney U test. The relative risks were calculated using cross-tabulations. The rates of adverse events at three different periods (days 1–2, days 3–7 and days 14–42) were compared among treatment groups. For each of the three periods, the events were counted only once (e.g., if a patient vomited on day one and day two, this was counted as one adverse event). The PCR-adjusted cure rates were evaluated by survival analysis and compared using the log-rank test. Patients for whom PCR genotyping was either inconclusive or missing were censored in the survival analysis on the day of parasite reappearance. For all statistical tests the significance level (p) was set at 0.05.\n\nBODY.RESULTS:\nFour hundred and ninety patients with uncomplicated P. falciparum infections were enrolled between July 2001 and May 2002. The artesunate-mefloquine and artemether-lumefantrine groups included 245 patients each, the age range for all patients was 2–72 years. Baseline characteristics were similar in both groups (Table 1). In total, 484 patients (242 each group) were included in the final evaluation. Six patients (three in each group) were excluded for the following reasons; withdrew consent (1), non-compliance e.g. failure to complete trial treatment course (4), failure to meet protocol criteria (1). Overall compliance was good; around 99% of the patients in the study (481 of 484) were seen at the day seven scheduled visit, 96.1% (465 patients) were seen at day 28 and 93.4% (452 patients) were seen at day 42. Table 1 Demographic and baseline characteristics ALN* (n = 245) MAS3** (n = 245) Maela 100 100 Mawker Tai 145 145 Males, no. (%) 172 (70) 164 (67) Age, years Mean (SD) 23.2 (14.6) 23.6 (15.1) Range 3–70 2–72 Age group, no. (%) <5 10 (4.1) 8 (3.3) 5–14 81 (33.1) 75 (30.6) >14 154 (62.9) 162 (66.1) Weight, kg Mean (SD) 42.4 (14.5) 42.3 (15.0) Range 10–77 10–78 Temperature, °C Mean (SD) 37.7 (1.0) 37.8 (1.1) Range 35.6–40.5 35.9–41.0 Fever a , no (%) 136 (55.5) 142 (58.0) Haematocrit, % Mean (SD) 36.7 (5.9) 36.4 (5.8) Range 21–52 20–51 Geometric mean (range) 8,047 7,570 parasite count (μl -1 ) (32–198,789) (16–198,789) Hepatomegaly, no (%) 48 (19.6) 50 (20.4) Splenomegaly, no (%) 70 (28.6) 57 (23.3) * ALN = artemether-lumefantrine, ** MAS3 = artesunate-mefloquine a Tympanic temperature ≥ 37.5°C \n\nBODY.RESULTS.CLINICAL AND PARASITOLOGICAL FINDINGS:\nThe initial responses to the two treatment groups were similar. None of the patients developed severe malaria. On admission, 55.0% (133/242) of the patients on ALN and 57.9% (140/242) of the patients in the MAS3 group had a tympanic temperature ≥ 37.5°C. All except three patients had a normal temperature on day 3 (2 in ALN and 1 in MAS3). There was no difference in fever clearance times between the two treatment groups (Figure 1). Parasite clearance times were short and most patients cleared their parasitaemia by day two. Figure 2 shows the percentage of patients with positive slide for asexual P. falciparum in both groups. By day three, four (1.8%) of 227 patients in the artemether-lumefantrine recipients and three (1.3%) of 238 artesunate-mefloquine recipients still had a positive blood film (P > 0.05). Overall, 12.3 % of patients were anaemic (haematocrit <30%) on admission, 10.8% in ALN group and 13.8% in MAS3 group (P = 0.33). 
The mean (SD) decrease in haematocrit value at day seven from baseline was greater in the group receiving MAS3 than in the ALN group: 9.3% (SD,11.5%; 95% CI, 7.7% to 10.9%) compared with 6.7% (SD, 11.4%; 95% CI, 5.1 to 8.3%) respectively (P = 0.023). Figure 1Percentage of patients with fever (temperature > 37.5°C). Figure 2Percentage of patients with positive slide for asexual P. falciparum forms. During the 42-day follow-up period, 27 new P. falciparum infections occurred among artemether-lumefantrine and 24 among artesunate-mefloquine recipients (P > 0.05) (Table 2). The PCR-adjusted cure rates by day 42 were 98.8% (95% CI, 96.4% to 99.6%) in the ALN group and 96.3% (95% CI, 93.1% to 98.0%) in the MAS3 group (P = 0.08). PCR confirmed treatment failures were more likely in children aged below 15 years than in adults (RR, 5.1; 95% CI, 1.4–18.7; P = 0.006). The mean age was 13.6 years (n = 12; SD = 8.5) in patients with treatment failure and 23.7 years (n = 438; SD = 15.2) in successfully treated patients. In this trial, only age group was independently associated with treatment failure, but not other factors e.g. higher parasitaemia (>10,000/μL); anaemia (haematocrit <30%); fever (tympanic temp ≥ 37.5°C) on admission; sites; treatment groups; early vomiting (within one hour following drug administration). The mediantime to recrudescence was comparable for MAS3 group (21 days; n = 9; range, 14–28 days) and ALN group (28 days; n = 3; range, 21–42 days; P > 0.05).\n\nBODY.RESULTS.OTHER PARASITOLOGICAL FINDINGS:\nOf 452 patients, 119 (26.3%) had P. vivax parasitaemia detected during follow up. There were significantly fewer cases of vivax malaria in the MAS3 group (29 of 227) than in the ALN group (90 of 225) (P < 0.001). The median time to appearance of P. vivax parasitaemia was longer in the MAS3 group (40 days; range, 13–43 days) than in the ALN group (28 days; range, 14–43 days; P < 0.001). Twenty patients (8.3%) in the artemether-lumefantrine group and 19 (7.9%) in the artesunate-mefloquine group had gametocytes detected during the first 3 days. All except one patient in ALN group cleared gametocytes within first week after start of treatment. After excluding these, gametocytes developed (between day 7 and 42) in 1.2% (3/241) of ALN group and 1.3 (3/240) of MAS3 group. The person-gametocyte weeks were low and similar: 2.7 (95% CI, 0.6–7.8) per 1000 person-weeks for both groups.\n\nBODY.RESULTS.ADVERSE EVENTS:\nBoth treatment regimens were well tolerated. No serious adverse events were reported. Overall, 5/242(2.1%) of the patients vomited one or more doses of medication in the ALN group and 2/242(0.8%) of the MAS3 treated patients (RR, 2.5; 95% CI, 0.5–12.7; P = 0.45). The rates of early vomiting (within one hour) of the drugs were very low (around 2%) and did not differ among groups (one in each group on day 2). The most commonly reported and possibly drug-related adverse events to both combination therapies were effects on the gastrointestinal (abdominal pain, anorexia, nausea, diarrhoea and late vomiting e.g. >1 h after administration of treatment) and central nervous system (headache, dizziness). Figure 3 shows the proportions of possibly drug-related adverse events of those who did not have those symptoms at admission during follow-up in both groups. Overall, there were less adverse events in ALN group compared to MAS3, though the differences were not statistically significant. 
Figure 3Possibly drug-related adverse events (day 1 – day 42).\n\nBODY.DISCUSSION:\nThe loss of affordable effective antimalarial drugs to resistance represents a major threat to the people of malaria endemic countries [1]. Using ineffective drugs with high failure rates kills many and is unacceptable. A clear treatment policy and readiness to use the new, more effective artemisinin-based combination therapies (ACT) are crucial [17]. Along the north-western border of Thailand, where highly multi-drug resistant isolates of P. falciparum are found, artesunate-mefloquine combination therapy (MAS3) is the standard treatment regimen for uncomplicated falciparum malaria [9]. Early diagnosis and treatment with an artemisinin-based drug combination of very high efficacy that reduces gametocyte carriage, has led to a marked decline in the incidence of falciparum malaria and a reversal of the previous trend toward increasing mefloquine resistance [18]. Artemisinin derivatives will ensure rapid clinical and parasitological responses and are remarkably effective, hence clinical deterioration is extremely unusual. To optimize therapy, combination with a slower-acting antimalarial drug is required. Systematic use of ACT would help to delay the emergence of resistance if the drug was used widely. However, the continued use of mefloquine monotherapy or with only 2 days of artesunate in this region, provides persistent selective pressure to continue the evolution of mefloquine resistance, which could diminish the efficacy of the artesunate-mefloquine combination and that of artemether-lumefantrine. Artemether-lumefantrine has been introduced recently for oral treatment of uncomplicated falciparum malaria. In Thailand, several trials have been conducted with this combination. The six-dose schedule provides sustained blood lumefantrine levels and thus improved cure rates [11]. Lumefantrine is highly lipophilic and the oral bioavailability varies considerably between individuals and increases greatly if the drug is administered after a meal rich in fat [19]. The present trial reconfirmed the efficacy of the six-dose regimen of artemether-lumefantrine given over three days [20]. Both treatments in this study cleared fever and parasitaemia promptly and reliably. Both treatments were well tolerated and highly effective. Importantly 2/3 less P. vivax infections and 12 days longer median time to appearance of P. vivax parasitaemia in the MAS3 group were most probably due to the longer terminal half-life of mefloquine compared to lumefantrine [21,22]. More data on the safety and efficacy of artemether-lumefantrine in very small children and pregnant women are needed.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nRH carried out the study and analyzed the data. RH, EAA, RMG, PS, TJ, NJW, FN conceived the study, participated in its design and co-ordination and contributed to draft the manuscript. LP, KLT assisted in collection of data. AB performed the PCR experiments. All authors read and approved the final manuscript. Table 2 Treatment response. Treatment group ALN* (n = 245) MAS3** (n = 245) Compliance, no. (%) Completed day 7 241 (99.6%) 240 (99.2%) Completed day 28 232 (95.9%) 233 (96.3%) Completed day 42 225 (93.0%) 227 (93.4%) Cumulative proportion of patients with clinical failure, no (%) Day 7 0 (0) 0 (0) Day 28 13 (5.6) 14 (6.0) Day 42 27 (12.0) 24 (10.6) PCR, no. Novel 23 14 Recrudescent 2 8 Novel + recrudescent 1 1 Indeterminate/missing 1 1 PCR-adjusted cure rates, no. 
(%) Day 7 0 (100) 0 (100) Day 28 2 (99.1) 9(96.1) Day 42 3 (98.8) 9 (96.3) * ALN = artemether-lumefantrine, ** MAS3 = artesunate-mefloquine\n\n**Question:** Compared to Mefloquine-artesunate what was the result of Artemether-lumefantrine on PCR-adjusted cure rates after 42 days?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
523
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Q-TWiST analysis to estimate overall benefit for patients with metastatic renal cell carcinoma treated in a phase III trial of sunitinib \n\n ABSTRACT.BACKGROUND:\nIn a randomised phase III trial of treatment-naive patients with metastatic renal cell carcinoma, sunitinib showed significant improvement in progression-free survival (PFS) compared with interferon (IFN)-α. We assessed between-treatment differences in overall benefit using a quality-adjusted Time Without Symptoms of disease progression or Toxicity of treatment (TWiST; Gelber and Goldhirsch) analysis.\n\nABSTRACT.METHODS:\nIn this analysis, in which only grade 3/4 treatment-related toxicities were included, overall survival was partitioned into three health states: toxicity (time with toxicity after randomisation and before progression), time without symptoms of disease progression or toxicity, and time from progression until death. Between-treatment differences in the mean duration of each state were calculated. A threshold utility analysis was used to assess quality-adjusted TWiST (Q-TWiST) outcomes.\n\nABSTRACT.RESULTS:\nQ-TWiST scores showed that quality-adjusted survival time was greater with sunitinib than with IFN-α, even though certain grade 3/4 toxicities occurred more frequently with sunitinib. For both treatments, the mean number of days with toxicity was small compared with PFS. This effect was more pronounced with sunitinib in which time spent without progression or toxicity was 151 days greater than with IFN-α.\n\nABSTRACT.CONCLUSION:\nPatients randomised to sunitinib had longer clinical benefit, defined as Q-TWiST scores, than patients randomised to IFN-α.\n\nBODY:\nTargeted treatment with antiangiogenic therapy has become standard of care for the treatment of most patients with metastatic clear cell renal cell carcinoma (mRCC) based upon improvements in progression-free survival (PFS) in several randomised phase III clinical trials (Escudier et al, 2007a, 2007b; Motzer et al, 2007; Sternberg et al, 2010). With more than one choice of therapy available for patients with mRCC, clinical decision making is more complex, and data on other outcomes in addition to efficacy, such as quality of life and safety, have become increasingly important. Most often, treatments are compared by a head-to-head analysis of each trial endpoint separately. For purposes of a primary analysis, this approach is reasonable. However, this approach may not be ideal if there are important tradeoffs between endpoints such as an increased time with side effects of treatment but longer time to progression in one arm compared with the other. One method that allows integration of both the quality and quantity of survival time is the Time Without Symptoms of disease progression or Toxicity of treatment or TWiST analysis, and its extension the quality-adjusted TWiST or Q-TWiST (Gelber and Goldhirsch, 1986; Goldhirsch et al, 1989). 
The primary hypothesis of these methods is that patients with no disease symptoms or treatment toxicity have better health-related quality of life than those who have disease symptoms and toxicity. Q-TWiST was first used to evaluate adjuvant therapy for breast cancer (Gelber et al, 1991), and has since been widely applied to other settings and cancers (Gelber et al, 1996; Rosendahl et al, 1999; Sherrill et al, 2008; Marcus et al, 2010; Zbrozek et al, 2010). In this paper, we report the results of a Q-TWiST analysis from a phase III randomised clinical trial comparing the oral antiangiogenic compound sunitinib (SUTENT; Pfizer, Inc; New York, NY, USA), with interferon-α (IFN-α) as first-line treatment for patients with mRCC (Motzer et al, 2007; Motzer et al, 2009). In this trial, sunitinib showed superior PFS compared with IFN-α (median PFS 11 vs 5 months, P<0.001); in addition, median overall survival with sunitinib was more than 2 years (26.4 months). In general, more adverse events of all grades were reported in the sunitinib arm than in the IFN-α arm, although the proportion of patients experiencing grade 3 or 4 toxicities was relatively low for both treatment groups. The Q-TWiST analysis was used to simultaneously compare the two treatments in terms of PFS, overall survival, and grade 3 or 4 toxicities.\n\nBODY.MATERIALS AND METHODS.PATIENTS AND STUDY DESIGN:\nThe design and main results of the randomised phase III clinical trial have been reported previously (Motzer et al, 2007; Motzer et al, 2009). In this trial, 750 patients with mRCC were randomised in a 1 : 1 ratio to receive either sunitinib or IFN-α. Key patient eligibility criteria included no previous systemic therapy for RCC, measurable disease, an Eastern Cooperative Oncology Group performance status of 0 or 1, as well as adequate hepatic, renal, and cardiac function. All patients provided signed informed consent. The primary endpoint was PFS. Sunitinib was administered orally at an initial dose of 50 mg per day for 4 weeks, followed by 2 weeks off treatment (Schedule 4/2). IFN-α was administered as a subcutaneous injection on three nonconsecutive days per week, starting at 3 million units (MU) for the first week, 6 MU for the second week, and 9 MU thereafter.\n\nBODY.MATERIALS AND METHODS.STATISTICAL METHODS AND ANALYSIS:\nThe Q-TWiST analysis considered three health states, TOX, TWiST, and REL, and the duration of each state was calculated for every patient. The TOX state comprised the total number of days after randomisation and before progression spent with toxicity, regardless of when the toxicity started or whether there were gaps between toxicities. All grade 3 or 4 toxicities attributable to the study drugs were included in the analysis, apart from those starting after progression. The model included only the more severe toxicities because they were the events considered most likely to have more effect on a patient's quality of life. The type, date of onset, and date of resolution of each toxicity were recorded prospectively as a part of the standard procedure in conducting a randomised phase III trial. Time spent with toxicities unresolved by progression was capped at the date of progression. There were 44 sunitinib and 34 IFN-α patients with unresolved toxicities at the time of progression. It is possible for a patient to have more than one type of toxicity during a period of time. Care was taken so that these overlapping toxicity intervals were not double counted. 
That is, if a patient had a toxicity that lasted from day 1 to day 5 and another toxicity that lasted from day 1 to day 10, the number of days spent with toxicity was 10 days. The TWiST state was defined as PFS time minus time with toxicities. Progression-free survival time was defined from randomisation to the last date of follow-up or progression. Patients without progression were censored at their last date of follow-up (median follow-up was similar in each treatment arm, being 853 days for sunitinib and 863 days for IFN-α). The duration of the relapse or REL state was defined as overall survival time minus PFS time, or the period of time from progression to death. Patients alive at the end of the study were censored for the overall survival endpoint. The mean time spent in each of the three health states was calculated for each treatment arm separately, and a 95% confidence interval for the difference by treatment was calculated using the nonparametric bootstrap method. Progression-free and overall survival curves were generated using Kaplan–Meier methods, which account for differential follow-up. These curves, along with a curve for time on toxicity, were overlaid on a single graph generated separately for each treatment. We used a threshold utility analysis to assess quality-adjusted or Q-TWiST outcomes, in which the TOX and REL health states are each weighted by utility scores or weights. These weights are represented by μTOX and μREL in the equation below: The utility weights reflect the relative value for the TOX and REL states and range from zero to one, with values closer to one discounting fewer days than those closer to zero for each state. A value of one represents a time period that is denoted by patients as a time of perfect health. Conversely, a value of zero represents a time period that is akin to death. Q-TWiST scores were calculated for a combination of utility weights increasing from zero to one by increments of 0.25.\n\nBODY.RESULTS:\nThere were more reported occurrences of most general adverse events of all grades in the sunitinib arm than in the IFN-α arm (Motzer et al, 2007). Figures 1A and B show survival times partitioned into the three health states over the follow-up period separately for sunitinib and IFN-α. In each graph, the overall survival curve (blue) is partitioned by the Kaplan–Meier curves for PFS (red) and time with treatment toxicity (green). The area between the Kaplan–Meier curves gives the average time spent in each of the three health states. The mean number of days spent with grade 3 or 4 toxicity (i.e., TOX) was 27 days higher among patients in the sunitinib arm than in patients from the IFN-α arm (95% CI: 18, 37; Table 1). However, the mean time spent without symptoms of disease progression or toxicity of treatment (i.e., TWiST) was 151 days higher in the sunitinib than in the IFN-α arm (95% CI: 118, 180; Table 1), and the mean time spent in relapse (i.e., REL) was 96 days lower among patients randomised to sunitinib (95% CI: −126, −56; Table 1). Results from a threshold utility analysis where the TOX and REL health states were weighted from 0 to 1 are provided in Table 2. The difference in Q-TWiST ranged from a maximum of 177 ((95% CI: 146, 212; Table 2) weight for TOX=1 and weight for REL=0) to a minimum of 56 ((95% CI: 16, 102; Table 2) weight for TOX=0 and weight for REL=1). 
The first scenario reflects the case when a patient's quality of life before progression is unaffected by toxicity, but quality of life after progression is severely affected. The second scenario is a reverse of this, when a patient's quality of life before progression is severely affected, but quality of life after progression is unaffected.\n\nBODY.DISCUSSION AND CONCLUSIONS:\nResults from this phase III trial of sunitinib vs IFN-α showed that sunitinib was superior to IFN-α, based on a longer duration of median PFS; overall survival was also longer with sunitinib than with IFN-α, although the difference did not reach statistical significance (Motzer et al, 2009). The results of exploratory analyses were consistent with the hypothesis that the overall survival endpoint was confounded by crossover treatment and use of alternate anticancer drugs after discontinuation. A total of 25 patients from the IFN-α group crossed over to receive sunitinib on study and one-third (117/359=33%) of the patients from the IFN-α group received post-study treatment with sunitinib. The rate of adverse events was low in both treatment groups, but occurred in more patients treated with sunitinib than with IFN-α. This is likely to be attributable to the much longer average duration of therapy with sunitinib than with IFN-α. Quality-of-life scores as measured by the FACT-G and FKSI questionnaires were higher among patients randomised to sunitinib than among those randomised to IFN-α, with minimal regional variation (Cella et al, 2008; Cella et al, 2010). In this Q-TWiST analysis, we integrated efficacy and safety endpoints, and found that patients on the sunitinib arm spent on average 27 more days with grade 3 or 4 treatment-related toxicity than patients on the IFN-α arm. For both treatment arms, the number of days during which a patient experienced toxicity was low compared with the time during which the average patient remained progression-free. This effect was more pronounced for the sunitinib arm; time spent without progression or toxicity was 151 days greater in the sunitinib than in the IFN-α arm. The Q-TWiST analysis provides a way by which we can compare PFS, overall survival, and time spent without toxicity in the two treatment arms in one metric. Patients can value time spent in relapse and time spent undergoing active therapy differently. For the Q-TWiST analysis, we assigned a range of utility weights to reflect 25 such scenarios. A utility weight of one discounts zero days from a health state, while a utility weight of zero discounts all days from a health state. In our analysis, all utility combinations resulted in positive Q-TWiST treatment differences (sunitinib Q-TWiST–IFN-α Q-TWiST). The difference in scores ranged from 177 to 56 days. Interpreted another way, sunitinib had higher quality-adjusted survival times than IFN-α across the entire range of utility combinations. Revicki et al (2006) recommended that differences in Q-TWiST equal to at least 10% of overall survival should be considered clinically meaningful. Based on the median overall survival of 26.4 months or ∼803 days in the sunitinib arm, a 10% or greater difference corresponds to 80 days or greater. In Table 2, Q-TWiST differences in all scenarios apart from five can be deemed as clinically important. Four of the five smallest differences occurred when time from progression to death or end of study was not discounted (REL weight=1), but time before progression was discounted (TOX weight <1). 
In this Q-TWiST analysis, we used overall utility weights to reflect the average value patients place on time spent in relapse and time spent experiencing toxicity. However, it is possible that there is heterogeneity in how a specific patient would value time spent with toxicity and time spent from progression to death or end of study. Another limitation of our analysis is that patients progressing on IFN-α were given the choice to crossover to the sunitinib arm. This resulted in 25 patients switching to sunitinib during the study and could affect estimation of the REL health state for the IFN-α arm. Q-TWiST methodology can provide useful advice on treatment choice when PFS differences are significant but overall survival differences are not, as is the case with the sunitinib vs IFN-α trial analysed herein. Applying Q-TWiST methodology to examine progression, overall survival, and toxicity as a single metric showed that sunitinib has a greater quality-adjusted survival time than IFN-α. For sunitinib patients, the greater amount of time spent with a toxicity is offset by far longer PFS. These results support the conclusion that sunitinib offers improved clinical and quality-of-life outcomes compared with IFN-α for mRCC patients.\n\n**Question:** Compared to Interferon (IFN)-α what was the result of Sunitinib on Adverse events?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
524
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Q-TWiST analysis to estimate overall benefit for patients with metastatic renal cell carcinoma treated in a phase III trial of sunitinib \n\n ABSTRACT.BACKGROUND:\nIn a randomised phase III trial of treatment-naive patients with metastatic renal cell carcinoma, sunitinib showed significant improvement in progression-free survival (PFS) compared with interferon (IFN)-α. We assessed between-treatment differences in overall benefit using a quality-adjusted Time Without Symptoms of disease progression or Toxicity of treatment (TWiST; Gelber and Goldhirsch) analysis.\n\nABSTRACT.METHODS:\nIn this analysis, in which only grade 3/4 treatment-related toxicities were included, overall survival was partitioned into three health states: toxicity (time with toxicity after randomisation and before progression), time without symptoms of disease progression or toxicity, and time from progression until death. Between-treatment differences in the mean duration of each state were calculated. A threshold utility analysis was used to assess quality-adjusted TWiST (Q-TWiST) outcomes.\n\nABSTRACT.RESULTS:\nQ-TWiST scores showed that quality-adjusted survival time was greater with sunitinib than with IFN-α, even though certain grade 3/4 toxicities occurred more frequently with sunitinib. For both treatments, the mean number of days with toxicity was small compared with PFS. This effect was more pronounced with sunitinib in which time spent without progression or toxicity was 151 days greater than with IFN-α.\n\nABSTRACT.CONCLUSION:\nPatients randomised to sunitinib had longer clinical benefit, defined as Q-TWiST scores, than patients randomised to IFN-α.\n\nBODY:\nTargeted treatment with antiangiogenic therapy has become standard of care for the treatment of most patients with metastatic clear cell renal cell carcinoma (mRCC) based upon improvements in progression-free survival (PFS) in several randomised phase III clinical trials (Escudier et al, 2007a, 2007b; Motzer et al, 2007; Sternberg et al, 2010). With more than one choice of therapy available for patients with mRCC, clinical decision making is more complex, and data on other outcomes in addition to efficacy, such as quality of life and safety, have become increasingly important. Most often, treatments are compared by a head-to-head analysis of each trial endpoint separately. For purposes of a primary analysis, this approach is reasonable. However, this approach may not be ideal if there are important tradeoffs between endpoints such as an increased time with side effects of treatment but longer time to progression in one arm compared with the other. One method that allows integration of both the quality and quantity of survival time is the Time Without Symptoms of disease progression or Toxicity of treatment or TWiST analysis, and its extension the quality-adjusted TWiST or Q-TWiST (Gelber and Goldhirsch, 1986; Goldhirsch et al, 1989). 
The primary hypothesis of these methods is that patients with no disease symptoms or treatment toxicity have better health-related quality of life than those who have disease symptoms and toxicity. Q-TWiST was first used to evaluate adjuvant therapy for breast cancer (Gelber et al, 1991), and has since been widely applied to other settings and cancers (Gelber et al, 1996; Rosendahl et al, 1999; Sherrill et al, 2008; Marcus et al, 2010; Zbrozek et al, 2010). In this paper, we report the results of a Q-TWiST analysis from a phase III randomised clinical trial comparing the oral antiangiogenic compound sunitinib (SUTENT; Pfizer, Inc; New York, NY, USA), with interferon-α (IFN-α) as first-line treatment for patients with mRCC (Motzer et al, 2007; Motzer et al, 2009). In this trial, sunitinib showed superior PFS compared with IFN-α (median PFS 11 vs 5 months, P<0.001); in addition, median overall survival with sunitinib was more than 2 years (26.4 months). In general, more adverse events of all grades were reported in the sunitinib arm than in the IFN-α arm, although the proportion of patients experiencing grade 3 or 4 toxicities was relatively low for both treatment groups. The Q-TWiST analysis was used to simultaneously compare the two treatments in terms of PFS, overall survival, and grade 3 or 4 toxicities.\n\nBODY.MATERIALS AND METHODS.PATIENTS AND STUDY DESIGN:\nThe design and main results of the randomised phase III clinical trial have been reported previously (Motzer et al, 2007; Motzer et al, 2009). In this trial, 750 patients with mRCC were randomised in a 1 : 1 ratio to receive either sunitinib or IFN-α. Key patient eligibility criteria included no previous systemic therapy for RCC, measurable disease, an Eastern Cooperative Oncology Group performance status of 0 or 1, as well as adequate hepatic, renal, and cardiac function. All patients provided signed informed consent. The primary endpoint was PFS. Sunitinib was administered orally at an initial dose of 50 mg per day for 4 weeks, followed by 2 weeks off treatment (Schedule 4/2). IFN-α was administered as a subcutaneous injection on three nonconsecutive days per week, starting at 3 million units (MU) for the first week, 6 MU for the second week, and 9 MU thereafter.\n\nBODY.MATERIALS AND METHODS.STATISTICAL METHODS AND ANALYSIS:\nThe Q-TWiST analysis considered three health states, TOX, TWiST, and REL, and the duration of each state was calculated for every patient. The TOX state comprised the total number of days after randomisation and before progression spent with toxicity, regardless of when the toxicity started or whether there were gaps between toxicities. All grade 3 or 4 toxicities attributable to the study drugs were included in the analysis, apart from those starting after progression. The model included only the more severe toxicities because they were the events considered most likely to have more effect on a patient's quality of life. The type, date of onset, and date of resolution of each toxicity were recorded prospectively as a part of the standard procedure in conducting a randomised phase III trial. Time spent with toxicities unresolved by progression was capped at the date of progression. There were 44 sunitinib and 34 IFN-α patients with unresolved toxicities at the time of progression. It is possible for a patient to have more than one type of toxicity during a period of time. Care was taken so that these overlapping toxicity intervals were not double counted. 
That is, if a patient had a toxicity that lasted from day 1 to day 5 and another toxicity that lasted from day 1 to day 10, the number of days spent with toxicity was 10 days. The TWiST state was defined as PFS time minus time with toxicities. Progression-free survival time was defined from randomisation to the last date of follow-up or progression. Patients without progression were censored at their last date of follow-up (median follow-up was similar in each treatment arm, being 853 days for sunitinib and 863 days for IFN-α). The duration of the relapse or REL state was defined as overall survival time minus PFS time, or the period of time from progression to death. Patients alive at the end of the study were censored for the overall survival endpoint. The mean time spent in each of the three health states was calculated for each treatment arm separately, and a 95% confidence interval for the difference by treatment was calculated using the nonparametric bootstrap method. Progression-free and overall survival curves were generated using Kaplan–Meier methods, which account for differential follow-up. These curves, along with a curve for time on toxicity, were overlaid on a single graph generated separately for each treatment. We used a threshold utility analysis to assess quality-adjusted or Q-TWiST outcomes, in which the TOX and REL health states are each weighted by utility scores or weights. These weights are represented by μTOX and μREL in the equation below: The utility weights reflect the relative value for the TOX and REL states and range from zero to one, with values closer to one discounting fewer days than those closer to zero for each state. A value of one represents a time period that is denoted by patients as a time of perfect health. Conversely, a value of zero represents a time period that is akin to death. Q-TWiST scores were calculated for a combination of utility weights increasing from zero to one by increments of 0.25.\n\nBODY.RESULTS:\nThere were more reported occurrences of most general adverse events of all grades in the sunitinib arm than in the IFN-α arm (Motzer et al, 2007). Figures 1A and B show survival times partitioned into the three health states over the follow-up period separately for sunitinib and IFN-α. In each graph, the overall survival curve (blue) is partitioned by the Kaplan–Meier curves for PFS (red) and time with treatment toxicity (green). The area between the Kaplan–Meier curves gives the average time spent in each of the three health states. The mean number of days spent with grade 3 or 4 toxicity (i.e., TOX) was 27 days higher among patients in the sunitinib arm than in patients from the IFN-α arm (95% CI: 18, 37; Table 1). However, the mean time spent without symptoms of disease progression or toxicity of treatment (i.e., TWiST) was 151 days higher in the sunitinib than in the IFN-α arm (95% CI: 118, 180; Table 1), and the mean time spent in relapse (i.e., REL) was 96 days lower among patients randomised to sunitinib (95% CI: −126, −56; Table 1). Results from a threshold utility analysis where the TOX and REL health states were weighted from 0 to 1 are provided in Table 2. The difference in Q-TWiST ranged from a maximum of 177 ((95% CI: 146, 212; Table 2) weight for TOX=1 and weight for REL=0) to a minimum of 56 ((95% CI: 16, 102; Table 2) weight for TOX=0 and weight for REL=1). 
The first scenario reflects the case when a patient's quality of life before progression is unaffected by toxicity, but quality of life after progression is severely affected. The second scenario is a reverse of this, when a patient's quality of life before progression is severely affected, but quality of life after progression is unaffected.\n\nBODY.DISCUSSION AND CONCLUSIONS:\nResults from this phase III trial of sunitinib vs IFN-α showed that sunitinib was superior to IFN-α, based on a longer duration of median PFS; overall survival was also longer with sunitinib than with IFN-α, although the difference did not reach statistical significance (Motzer et al, 2009). The results of exploratory analyses were consistent with the hypothesis that the overall survival endpoint was confounded by crossover treatment and use of alternate anticancer drugs after discontinuation. A total of 25 patients from the IFN-α group crossed over to receive sunitinib on study and one-third (117/359=33%) of the patients from the IFN-α group received post-study treatment with sunitinib. The rate of adverse events was low in both treatment groups, but occurred in more patients treated with sunitinib than with IFN-α. This is likely to be attributable to the much longer average duration of therapy with sunitinib than with IFN-α. Quality-of-life scores as measured by the FACT-G and FKSI questionnaires were higher among patients randomised to sunitinib than among those randomised to IFN-α, with minimal regional variation (Cella et al, 2008; Cella et al, 2010). In this Q-TWiST analysis, we integrated efficacy and safety endpoints, and found that patients on the sunitinib arm spent on average 27 more days with grade 3 or 4 treatment-related toxicity than patients on the IFN-α arm. For both treatment arms, the number of days during which a patient experienced toxicity was low compared with the time during which the average patient remained progression-free. This effect was more pronounced for the sunitinib arm; time spent without progression or toxicity was 151 days greater in the sunitinib than in the IFN-α arm. The Q-TWiST analysis provides a way by which we can compare PFS, overall survival, and time spent without toxicity in the two treatment arms in one metric. Patients can value time spent in relapse and time spent undergoing active therapy differently. For the Q-TWiST analysis, we assigned a range of utility weights to reflect 25 such scenarios. A utility weight of one discounts zero days from a health state, while a utility weight of zero discounts all days from a health state. In our analysis, all utility combinations resulted in positive Q-TWiST treatment differences (sunitinib Q-TWiST–IFN-α Q-TWiST). The difference in scores ranged from 177 to 56 days. Interpreted another way, sunitinib had higher quality-adjusted survival times than IFN-α across the entire range of utility combinations. Revicki et al (2006) recommended that differences in Q-TWiST equal to at least 10% of overall survival should be considered clinically meaningful. Based on the median overall survival of 26.4 months or ∼803 days in the sunitinib arm, a 10% or greater difference corresponds to 80 days or greater. In Table 2, Q-TWiST differences in all scenarios apart from five can be deemed as clinically important. Four of the five smallest differences occurred when time from progression to death or end of study was not discounted (REL weight=1), but time before progression was discounted (TOX weight <1). 
In this Q-TWiST analysis, we used overall utility weights to reflect the average value patients place on time spent in relapse and time spent experiencing toxicity. However, it is possible that there is heterogeneity in how a specific patient would value time spent with toxicity and time spent from progression to death or end of study. Another limitation of our analysis is that patients progressing on IFN-α were given the choice to crossover to the sunitinib arm. This resulted in 25 patients switching to sunitinib during the study and could affect estimation of the REL health state for the IFN-α arm. Q-TWiST methodology can provide useful advice on treatment choice when PFS differences are significant but overall survival differences are not, as is the case with the sunitinib vs IFN-α trial analysed herein. Applying Q-TWiST methodology to examine progression, overall survival, and toxicity as a single metric showed that sunitinib has a greater quality-adjusted survival time than IFN-α. For sunitinib patients, the greater amount of time spent with a toxicity is offset by far longer PFS. These results support the conclusion that sunitinib offers improved clinical and quality-of-life outcomes compared with IFN-α for mRCC patients.\n\n**Question:** Compared to Interferon (IFN)-α what was the result of Sunitinib on Overall survival?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
525
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Q-TWiST analysis to estimate overall benefit for patients with metastatic renal cell carcinoma treated in a phase III trial of sunitinib \n\n ABSTRACT.BACKGROUND:\nIn a randomised phase III trial of treatment-naive patients with metastatic renal cell carcinoma, sunitinib showed significant improvement in progression-free survival (PFS) compared with interferon (IFN)-α. We assessed between-treatment differences in overall benefit using a quality-adjusted Time Without Symptoms of disease progression or Toxicity of treatment (TWiST; Gelber and Goldhirsch) analysis.\n\nABSTRACT.METHODS:\nIn this analysis, in which only grade 3/4 treatment-related toxicities were included, overall survival was partitioned into three health states: toxicity (time with toxicity after randomisation and before progression), time without symptoms of disease progression or toxicity, and time from progression until death. Between-treatment differences in the mean duration of each state were calculated. A threshold utility analysis was used to assess quality-adjusted TWiST (Q-TWiST) outcomes.\n\nABSTRACT.RESULTS:\nQ-TWiST scores showed that quality-adjusted survival time was greater with sunitinib than with IFN-α, even though certain grade 3/4 toxicities occurred more frequently with sunitinib. For both treatments, the mean number of days with toxicity was small compared with PFS. This effect was more pronounced with sunitinib in which time spent without progression or toxicity was 151 days greater than with IFN-α.\n\nABSTRACT.CONCLUSION:\nPatients randomised to sunitinib had longer clinical benefit, defined as Q-TWiST scores, than patients randomised to IFN-α.\n\nBODY:\nTargeted treatment with antiangiogenic therapy has become standard of care for the treatment of most patients with metastatic clear cell renal cell carcinoma (mRCC) based upon improvements in progression-free survival (PFS) in several randomised phase III clinical trials (Escudier et al, 2007a, 2007b; Motzer et al, 2007; Sternberg et al, 2010). With more than one choice of therapy available for patients with mRCC, clinical decision making is more complex, and data on other outcomes in addition to efficacy, such as quality of life and safety, have become increasingly important. Most often, treatments are compared by a head-to-head analysis of each trial endpoint separately. For purposes of a primary analysis, this approach is reasonable. However, this approach may not be ideal if there are important tradeoffs between endpoints such as an increased time with side effects of treatment but longer time to progression in one arm compared with the other. One method that allows integration of both the quality and quantity of survival time is the Time Without Symptoms of disease progression or Toxicity of treatment or TWiST analysis, and its extension the quality-adjusted TWiST or Q-TWiST (Gelber and Goldhirsch, 1986; Goldhirsch et al, 1989). 
The primary hypothesis of these methods is that patients with no disease symptoms or treatment toxicity have better health-related quality of life than those who have disease symptoms and toxicity. Q-TWiST was first used to evaluate adjuvant therapy for breast cancer (Gelber et al, 1991), and has since been widely applied to other settings and cancers (Gelber et al, 1996; Rosendahl et al, 1999; Sherrill et al, 2008; Marcus et al, 2010; Zbrozek et al, 2010). In this paper, we report the results of a Q-TWiST analysis from a phase III randomised clinical trial comparing the oral antiangiogenic compound sunitinib (SUTENT; Pfizer, Inc; New York, NY, USA), with interferon-α (IFN-α) as first-line treatment for patients with mRCC (Motzer et al, 2007; Motzer et al, 2009). In this trial, sunitinib showed superior PFS compared with IFN-α (median PFS 11 vs 5 months, P<0.001); in addition, median overall survival with sunitinib was more than 2 years (26.4 months). In general, more adverse events of all grades were reported in the sunitinib arm than in the IFN-α arm, although the proportion of patients experiencing grade 3 or 4 toxicities was relatively low for both treatment groups. The Q-TWiST analysis was used to simultaneously compare the two treatments in terms of PFS, overall survival, and grade 3 or 4 toxicities.\n\nBODY.MATERIALS AND METHODS.PATIENTS AND STUDY DESIGN:\nThe design and main results of the randomised phase III clinical trial have been reported previously (Motzer et al, 2007; Motzer et al, 2009). In this trial, 750 patients with mRCC were randomised in a 1 : 1 ratio to receive either sunitinib or IFN-α. Key patient eligibility criteria included no previous systemic therapy for RCC, measurable disease, an Eastern Cooperative Oncology Group performance status of 0 or 1, as well as adequate hepatic, renal, and cardiac function. All patients provided signed informed consent. The primary endpoint was PFS. Sunitinib was administered orally at an initial dose of 50 mg per day for 4 weeks, followed by 2 weeks off treatment (Schedule 4/2). IFN-α was administered as a subcutaneous injection on three nonconsecutive days per week, starting at 3 million units (MU) for the first week, 6 MU for the second week, and 9 MU thereafter.\n\nBODY.MATERIALS AND METHODS.STATISTICAL METHODS AND ANALYSIS:\nThe Q-TWiST analysis considered three health states, TOX, TWiST, and REL, and the duration of each state was calculated for every patient. The TOX state comprised the total number of days after randomisation and before progression spent with toxicity, regardless of when the toxicity started or whether there were gaps between toxicities. All grade 3 or 4 toxicities attributable to the study drugs were included in the analysis, apart from those starting after progression. The model included only the more severe toxicities because they were the events considered most likely to have more effect on a patient's quality of life. The type, date of onset, and date of resolution of each toxicity were recorded prospectively as a part of the standard procedure in conducting a randomised phase III trial. Time spent with toxicities unresolved by progression was capped at the date of progression. There were 44 sunitinib and 34 IFN-α patients with unresolved toxicities at the time of progression. It is possible for a patient to have more than one type of toxicity during a period of time. Care was taken so that these overlapping toxicity intervals were not double counted. 
That is, if a patient had a toxicity that lasted from day 1 to day 5 and another toxicity that lasted from day 1 to day 10, the number of days spent with toxicity was 10 days. The TWiST state was defined as PFS time minus time with toxicities. Progression-free survival time was defined from randomisation to the last date of follow-up or progression. Patients without progression were censored at their last date of follow-up (median follow-up was similar in each treatment arm, being 853 days for sunitinib and 863 days for IFN-α). The duration of the relapse or REL state was defined as overall survival time minus PFS time, or the period of time from progression to death. Patients alive at the end of the study were censored for the overall survival endpoint. The mean time spent in each of the three health states was calculated for each treatment arm separately, and a 95% confidence interval for the difference by treatment was calculated using the nonparametric bootstrap method. Progression-free and overall survival curves were generated using Kaplan–Meier methods, which account for differential follow-up. These curves, along with a curve for time on toxicity, were overlaid on a single graph generated separately for each treatment. We used a threshold utility analysis to assess quality-adjusted or Q-TWiST outcomes, in which the TOX and REL health states are each weighted by utility scores or weights. These weights are represented by μTOX and μREL in the equation below: The utility weights reflect the relative value for the TOX and REL states and range from zero to one, with values closer to one discounting fewer days than those closer to zero for each state. A value of one represents a time period that is denoted by patients as a time of perfect health. Conversely, a value of zero represents a time period that is akin to death. Q-TWiST scores were calculated for a combination of utility weights increasing from zero to one by increments of 0.25.\n\nBODY.RESULTS:\nThere were more reported occurrences of most general adverse events of all grades in the sunitinib arm than in the IFN-α arm (Motzer et al, 2007). Figures 1A and B show survival times partitioned into the three health states over the follow-up period separately for sunitinib and IFN-α. In each graph, the overall survival curve (blue) is partitioned by the Kaplan–Meier curves for PFS (red) and time with treatment toxicity (green). The area between the Kaplan–Meier curves gives the average time spent in each of the three health states. The mean number of days spent with grade 3 or 4 toxicity (i.e., TOX) was 27 days higher among patients in the sunitinib arm than in patients from the IFN-α arm (95% CI: 18, 37; Table 1). However, the mean time spent without symptoms of disease progression or toxicity of treatment (i.e., TWiST) was 151 days higher in the sunitinib than in the IFN-α arm (95% CI: 118, 180; Table 1), and the mean time spent in relapse (i.e., REL) was 96 days lower among patients randomised to sunitinib (95% CI: −126, −56; Table 1). Results from a threshold utility analysis where the TOX and REL health states were weighted from 0 to 1 are provided in Table 2. The difference in Q-TWiST ranged from a maximum of 177 ((95% CI: 146, 212; Table 2) weight for TOX=1 and weight for REL=0) to a minimum of 56 ((95% CI: 16, 102; Table 2) weight for TOX=0 and weight for REL=1). 
The first scenario reflects the case when a patient's quality of life before progression is unaffected by toxicity, but quality of life after progression is severely affected. The second scenario is a reverse of this, when a patient's quality of life before progression is severely affected, but quality of life after progression is unaffected.\n\nBODY.DISCUSSION AND CONCLUSIONS:\nResults from this phase III trial of sunitinib vs IFN-α showed that sunitinib was superior to IFN-α, based on a longer duration of median PFS; overall survival was also longer with sunitinib than with IFN-α, although the difference did not reach statistical significance (Motzer et al, 2009). The results of exploratory analyses were consistent with the hypothesis that the overall survival endpoint was confounded by crossover treatment and use of alternate anticancer drugs after discontinuation. A total of 25 patients from the IFN-α group crossed over to receive sunitinib on study and one-third (117/359=33%) of the patients from the IFN-α group received post-study treatment with sunitinib. The rate of adverse events was low in both treatment groups, but occurred in more patients treated with sunitinib than with IFN-α. This is likely to be attributable to the much longer average duration of therapy with sunitinib than with IFN-α. Quality-of-life scores as measured by the FACT-G and FKSI questionnaires were higher among patients randomised to sunitinib than among those randomised to IFN-α, with minimal regional variation (Cella et al, 2008; Cella et al, 2010). In this Q-TWiST analysis, we integrated efficacy and safety endpoints, and found that patients on the sunitinib arm spent on average 27 more days with grade 3 or 4 treatment-related toxicity than patients on the IFN-α arm. For both treatment arms, the number of days during which a patient experienced toxicity was low compared with the time during which the average patient remained progression-free. This effect was more pronounced for the sunitinib arm; time spent without progression or toxicity was 151 days greater in the sunitinib than in the IFN-α arm. The Q-TWiST analysis provides a way by which we can compare PFS, overall survival, and time spent without toxicity in the two treatment arms in one metric. Patients can value time spent in relapse and time spent undergoing active therapy differently. For the Q-TWiST analysis, we assigned a range of utility weights to reflect 25 such scenarios. A utility weight of one discounts zero days from a health state, while a utility weight of zero discounts all days from a health state. In our analysis, all utility combinations resulted in positive Q-TWiST treatment differences (sunitinib Q-TWiST–IFN-α Q-TWiST). The difference in scores ranged from 177 to 56 days. Interpreted another way, sunitinib had higher quality-adjusted survival times than IFN-α across the entire range of utility combinations. Revicki et al (2006) recommended that differences in Q-TWiST equal to at least 10% of overall survival should be considered clinically meaningful. Based on the median overall survival of 26.4 months or ∼803 days in the sunitinib arm, a 10% or greater difference corresponds to 80 days or greater. In Table 2, Q-TWiST differences in all scenarios apart from five can be deemed as clinically important. Four of the five smallest differences occurred when time from progression to death or end of study was not discounted (REL weight=1), but time before progression was discounted (TOX weight <1). 
In this Q-TWiST analysis, we used overall utility weights to reflect the average value patients place on time spent in relapse and time spent experiencing toxicity. However, it is possible that there is heterogeneity in how a specific patient would value time spent with toxicity and time spent from progression to death or end of study. Another limitation of our analysis is that patients progressing on IFN-α were given the choice to crossover to the sunitinib arm. This resulted in 25 patients switching to sunitinib during the study and could affect estimation of the REL health state for the IFN-α arm. Q-TWiST methodology can provide useful advice on treatment choice when PFS differences are significant but overall survival differences are not, as is the case with the sunitinib vs IFN-α trial analysed herein. Applying Q-TWiST methodology to examine progression, overall survival, and toxicity as a single metric showed that sunitinib has a greater quality-adjusted survival time than IFN-α. For sunitinib patients, the greater amount of time spent with a toxicity is offset by far longer PFS. These results support the conclusion that sunitinib offers improved clinical and quality-of-life outcomes compared with IFN-α for mRCC patients.\n\n**Question:** Compared to Interferon (IFN)-α what was the result of Sunitinib on Quality-of-life scores?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
527
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Q-TWiST analysis to estimate overall benefit for patients with metastatic renal cell carcinoma treated in a phase III trial of sunitinib \n\n ABSTRACT.BACKGROUND:\nIn a randomised phase III trial of treatment-naive patients with metastatic renal cell carcinoma, sunitinib showed significant improvement in progression-free survival (PFS) compared with interferon (IFN)-α. We assessed between-treatment differences in overall benefit using a quality-adjusted Time Without Symptoms of disease progression or Toxicity of treatment (TWiST; Gelber and Goldhirsch) analysis.\n\nABSTRACT.METHODS:\nIn this analysis, in which only grade 3/4 treatment-related toxicities were included, overall survival was partitioned into three health states: toxicity (time with toxicity after randomisation and before progression), time without symptoms of disease progression or toxicity, and time from progression until death. Between-treatment differences in the mean duration of each state were calculated. A threshold utility analysis was used to assess quality-adjusted TWiST (Q-TWiST) outcomes.\n\nABSTRACT.RESULTS:\nQ-TWiST scores showed that quality-adjusted survival time was greater with sunitinib than with IFN-α, even though certain grade 3/4 toxicities occurred more frequently with sunitinib. For both treatments, the mean number of days with toxicity was small compared with PFS. This effect was more pronounced with sunitinib in which time spent without progression or toxicity was 151 days greater than with IFN-α.\n\nABSTRACT.CONCLUSION:\nPatients randomised to sunitinib had longer clinical benefit, defined as Q-TWiST scores, than patients randomised to IFN-α.\n\nBODY:\nTargeted treatment with antiangiogenic therapy has become standard of care for the treatment of most patients with metastatic clear cell renal cell carcinoma (mRCC) based upon improvements in progression-free survival (PFS) in several randomised phase III clinical trials (Escudier et al, 2007a, 2007b; Motzer et al, 2007; Sternberg et al, 2010). With more than one choice of therapy available for patients with mRCC, clinical decision making is more complex, and data on other outcomes in addition to efficacy, such as quality of life and safety, have become increasingly important. Most often, treatments are compared by a head-to-head analysis of each trial endpoint separately. For purposes of a primary analysis, this approach is reasonable. However, this approach may not be ideal if there are important tradeoffs between endpoints such as an increased time with side effects of treatment but longer time to progression in one arm compared with the other. One method that allows integration of both the quality and quantity of survival time is the Time Without Symptoms of disease progression or Toxicity of treatment or TWiST analysis, and its extension the quality-adjusted TWiST or Q-TWiST (Gelber and Goldhirsch, 1986; Goldhirsch et al, 1989). 
The primary hypothesis of these methods is that patients with no disease symptoms or treatment toxicity have better health-related quality of life than those who have disease symptoms and toxicity. Q-TWiST was first used to evaluate adjuvant therapy for breast cancer (Gelber et al, 1991), and has since been widely applied to other settings and cancers (Gelber et al, 1996; Rosendahl et al, 1999; Sherrill et al, 2008; Marcus et al, 2010; Zbrozek et al, 2010). In this paper, we report the results of a Q-TWiST analysis from a phase III randomised clinical trial comparing the oral antiangiogenic compound sunitinib (SUTENT; Pfizer, Inc; New York, NY, USA), with interferon-α (IFN-α) as first-line treatment for patients with mRCC (Motzer et al, 2007; Motzer et al, 2009). In this trial, sunitinib showed superior PFS compared with IFN-α (median PFS 11 vs 5 months, P<0.001); in addition, median overall survival with sunitinib was more than 2 years (26.4 months). In general, more adverse events of all grades were reported in the sunitinib arm than in the IFN-α arm, although the proportion of patients experiencing grade 3 or 4 toxicities was relatively low for both treatment groups. The Q-TWiST analysis was used to simultaneously compare the two treatments in terms of PFS, overall survival, and grade 3 or 4 toxicities.\n\nBODY.MATERIALS AND METHODS.PATIENTS AND STUDY DESIGN:\nThe design and main results of the randomised phase III clinical trial have been reported previously (Motzer et al, 2007; Motzer et al, 2009). In this trial, 750 patients with mRCC were randomised in a 1 : 1 ratio to receive either sunitinib or IFN-α. Key patient eligibility criteria included no previous systemic therapy for RCC, measurable disease, an Eastern Cooperative Oncology Group performance status of 0 or 1, as well as adequate hepatic, renal, and cardiac function. All patients provided signed informed consent. The primary endpoint was PFS. Sunitinib was administered orally at an initial dose of 50 mg per day for 4 weeks, followed by 2 weeks off treatment (Schedule 4/2). IFN-α was administered as a subcutaneous injection on three nonconsecutive days per week, starting at 3 million units (MU) for the first week, 6 MU for the second week, and 9 MU thereafter.\n\nBODY.MATERIALS AND METHODS.STATISTICAL METHODS AND ANALYSIS:\nThe Q-TWiST analysis considered three health states, TOX, TWiST, and REL, and the duration of each state was calculated for every patient. The TOX state comprised the total number of days after randomisation and before progression spent with toxicity, regardless of when the toxicity started or whether there were gaps between toxicities. All grade 3 or 4 toxicities attributable to the study drugs were included in the analysis, apart from those starting after progression. The model included only the more severe toxicities because they were the events considered most likely to have more effect on a patient's quality of life. The type, date of onset, and date of resolution of each toxicity were recorded prospectively as a part of the standard procedure in conducting a randomised phase III trial. Time spent with toxicities unresolved by progression was capped at the date of progression. There were 44 sunitinib and 34 IFN-α patients with unresolved toxicities at the time of progression. It is possible for a patient to have more than one type of toxicity during a period of time. Care was taken so that these overlapping toxicity intervals were not double counted. 
That is, if a patient had a toxicity that lasted from day 1 to day 5 and another toxicity that lasted from day 1 to day 10, the number of days spent with toxicity was 10 days. The TWiST state was defined as PFS time minus time with toxicities. Progression-free survival time was defined from randomisation to the last date of follow-up or progression. Patients without progression were censored at their last date of follow-up (median follow-up was similar in each treatment arm, being 853 days for sunitinib and 863 days for IFN-α). The duration of the relapse or REL state was defined as overall survival time minus PFS time, or the period of time from progression to death. Patients alive at the end of the study were censored for the overall survival endpoint. The mean time spent in each of the three health states was calculated for each treatment arm separately, and a 95% confidence interval for the difference by treatment was calculated using the nonparametric bootstrap method. Progression-free and overall survival curves were generated using Kaplan–Meier methods, which account for differential follow-up. These curves, along with a curve for time on toxicity, were overlaid on a single graph generated separately for each treatment. We used a threshold utility analysis to assess quality-adjusted or Q-TWiST outcomes, in which the TOX and REL health states are each weighted by utility scores or weights. These weights are represented by μTOX and μREL in the equation Q-TWiST = (μTOX × TOX) + TWiST + (μREL × REL). The utility weights reflect the relative value for the TOX and REL states and range from zero to one, with values closer to one discounting fewer days than those closer to zero for each state. A value of one represents a time period that is denoted by patients as a time of perfect health. Conversely, a value of zero represents a time period that is akin to death. Q-TWiST scores were calculated for a combination of utility weights increasing from zero to one by increments of 0.25.\n\nBODY.RESULTS:\nThere were more reported occurrences of most general adverse events of all grades in the sunitinib arm than in the IFN-α arm (Motzer et al, 2007). Figures 1A and B show survival times partitioned into the three health states over the follow-up period separately for sunitinib and IFN-α. In each graph, the overall survival curve (blue) is partitioned by the Kaplan–Meier curves for PFS (red) and time with treatment toxicity (green). The area between the Kaplan–Meier curves gives the average time spent in each of the three health states. The mean number of days spent with grade 3 or 4 toxicity (i.e., TOX) was 27 days higher among patients in the sunitinib arm than in patients from the IFN-α arm (95% CI: 18, 37; Table 1). However, the mean time spent without symptoms of disease progression or toxicity of treatment (i.e., TWiST) was 151 days higher in the sunitinib than in the IFN-α arm (95% CI: 118, 180; Table 1), and the mean time spent in relapse (i.e., REL) was 96 days lower among patients randomised to sunitinib (95% CI: −126, −56; Table 1). Results from a threshold utility analysis where the TOX and REL health states were weighted from 0 to 1 are provided in Table 2. The difference in Q-TWiST ranged from a maximum of 177 ((95% CI: 146, 212; Table 2) weight for TOX=1 and weight for REL=0) to a minimum of 56 ((95% CI: 16, 102; Table 2) weight for TOX=0 and weight for REL=1). 
The first scenario reflects the case when a patient's quality of life before progression is unaffected by toxicity, but quality of life after progression is severely affected. The second scenario is a reverse of this, when a patient's quality of life before progression is severely affected, but quality of life after progression is unaffected.\n\nBODY.DISCUSSION AND CONCLUSIONS:\nResults from this phase III trial of sunitinib vs IFN-α showed that sunitinib was superior to IFN-α, based on a longer duration of median PFS; overall survival was also longer with sunitinib than with IFN-α, although the difference did not reach statistical significance (Motzer et al, 2009). The results of exploratory analyses were consistent with the hypothesis that the overall survival endpoint was confounded by crossover treatment and use of alternate anticancer drugs after discontinuation. A total of 25 patients from the IFN-α group crossed over to receive sunitinib on study and one-third (117/359=33%) of the patients from the IFN-α group received post-study treatment with sunitinib. The rate of adverse events was low in both treatment groups, but occurred in more patients treated with sunitinib than with IFN-α. This is likely to be attributable to the much longer average duration of therapy with sunitinib than with IFN-α. Quality-of-life scores as measured by the FACT-G and FKSI questionnaires were higher among patients randomised to sunitinib than among those randomised to IFN-α, with minimal regional variation (Cella et al, 2008; Cella et al, 2010). In this Q-TWiST analysis, we integrated efficacy and safety endpoints, and found that patients on the sunitinib arm spent on average 27 more days with grade 3 or 4 treatment-related toxicity than patients on the IFN-α arm. For both treatment arms, the number of days during which a patient experienced toxicity was low compared with the time during which the average patient remained progression-free. This effect was more pronounced for the sunitinib arm; time spent without progression or toxicity was 151 days greater in the sunitinib than in the IFN-α arm. The Q-TWiST analysis provides a way by which we can compare PFS, overall survival, and time spent without toxicity in the two treatment arms in one metric. Patients can value time spent in relapse and time spent undergoing active therapy differently. For the Q-TWiST analysis, we assigned a range of utility weights to reflect 25 such scenarios. A utility weight of one discounts zero days from a health state, while a utility weight of zero discounts all days from a health state. In our analysis, all utility combinations resulted in positive Q-TWiST treatment differences (sunitinib Q-TWiST–IFN-α Q-TWiST). The difference in scores ranged from 177 to 56 days. Interpreted another way, sunitinib had higher quality-adjusted survival times than IFN-α across the entire range of utility combinations. Revicki et al (2006) recommended that differences in Q-TWiST equal to at least 10% of overall survival should be considered clinically meaningful. Based on the median overall survival of 26.4 months or ∼803 days in the sunitinib arm, a 10% or greater difference corresponds to 80 days or greater. In Table 2, Q-TWiST differences in all scenarios apart from five can be deemed as clinically important. Four of the five smallest differences occurred when time from progression to death or end of study was not discounted (REL weight=1), but time before progression was discounted (TOX weight <1). 
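The health-state bookkeeping and the threshold utility grid described above can be made concrete with a short sketch. The following Python is an editor's illustration rather than anything from the trial: the function names and toy patient values are invented, inclusive day counting is an assumption, and the grid uses the rounded between-arm mean differences quoted above (ΔTOX +27, ΔTWiST +151, ΔREL −96 days), so its corner values (178 and 55) only approximate the 177 and 56 computed from patient-level data.

```python
# Editor's sketch of the TOX / TWiST / REL bookkeeping and the threshold
# utility grid; names and toy inputs are illustrative, not trial code.

def merged_toxicity_days(intervals, progression_day):
    """Days with grade 3/4 toxicity before progression, overlaps counted once.

    Intervals are (start_day, end_day), inclusive; days after progression are
    clipped off, so (1, 5) together with (1, 10) contributes 10 days.
    """
    clipped = sorted((s, min(e, progression_day))
                     for s, e in intervals if s <= progression_day)
    merged = []
    for s, e in clipped:
        if merged and s <= merged[-1][1] + 1:
            merged[-1] = (merged[-1][0], max(merged[-1][1], e))
        else:
            merged.append((s, e))
    return sum(e - s + 1 for s, e in merged)


def partition_survival(os_days, pfs_days, toxicity_intervals):
    """Split one patient's survival into the TOX, TWiST and REL durations."""
    tox = merged_toxicity_days(toxicity_intervals, pfs_days)
    return tox, pfs_days - tox, os_days - pfs_days


# Worked example from the Methods: toxicities on days 1-5 and 1-10 count as 10 days.
print(partition_survival(400, 300, [(1, 5), (1, 10)]))      # (10, 290, 100)

# Threshold utility grid applied to the reported between-arm mean differences.
d_tox, d_twist, d_rel = 27.0, 151.0, -96.0                  # days, sunitinib minus IFN-alpha
weights = [0.0, 0.25, 0.5, 0.75, 1.0]
for u_tox in weights:
    for u_rel in weights:
        d_qtwist = u_tox * d_tox + d_twist + u_rel * d_rel  # Q-TWiST difference
        print(f"uTOX={u_tox:.2f}  uREL={u_rel:.2f}  dQ-TWiST={d_qtwist:6.1f} days")
```

Every cell of the printed grid is positive, which is the numerical content of the statement that all utility combinations favoured sunitinib.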
In this Q-TWiST analysis, we used overall utility weights to reflect the average value patients place on time spent in relapse and time spent experiencing toxicity. However, it is possible that there is heterogeneity in how a specific patient would value time spent with toxicity and time spent from progression to death or end of study. Another limitation of our analysis is that patients progressing on IFN-α were given the choice to crossover to the sunitinib arm. This resulted in 25 patients switching to sunitinib during the study and could affect estimation of the REL health state for the IFN-α arm. Q-TWiST methodology can provide useful advice on treatment choice when PFS differences are significant but overall survival differences are not, as is the case with the sunitinib vs IFN-α trial analysed herein. Applying Q-TWiST methodology to examine progression, overall survival, and toxicity as a single metric showed that sunitinib has a greater quality-adjusted survival time than IFN-α. For sunitinib patients, the greater amount of time spent with a toxicity is offset by far longer PFS. These results support the conclusion that sunitinib offers improved clinical and quality-of-life outcomes compared with IFN-α for mRCC patients.\n\n**Question:** Compared to Interferon (IFN)-α what was the result of Sunitinib on Duration of median PFS?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
532
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: A Theory-Based Video Messaging Mobile Phone Intervention for Smoking Cessation: Randomized Controlled Trial\n\n ABSTRACT.BACKGROUND:\n Advances in technology allowed the development of a novel smoking cessation program delivered by video messages sent to mobile phones. This social cognitive theory-based intervention (called \"STUB IT\") used observational learning via short video diary messages from role models going through the quitting process to teach behavioral change techniques. \n\nABSTRACT.OBJECTIVE:\n The objective of our study was to assess the effectiveness of a multimedia mobile phone intervention for smoking cessation.\n\nABSTRACT.METHODS:\n A randomized controlled trial was conducted with 6-month follow-up. Participants had to be 16 years of age or over, be current daily smokers, be ready to quit, and have a video message-capable phone. Recruitment targeted younger adults predominantly through radio and online advertising. Registration and data collection were completed online, prompted by text messages. The intervention group received an automated package of video and text messages over 6 months that was tailored to self-selected quit date, role model, and timing of messages. Extra messages were available on demand to beat cravings and address lapses. The control group also set a quit date and received a general health video message sent to their phone every 2 weeks.\n\nABSTRACT.RESULTS:\nThe target sample size was not achieved due to difficulty recruiting young adult quitters. Of the 226 randomized participants, 47% (107/226) were female and 24% (54/226) were Maori (indigenous population of New Zealand). Their mean age was 27 years (SD 8.7), and there was a high level of nicotine addiction. Continuous abstinence at 6 months was 26.4% (29/110) in the intervention group and 27.6% (32/116) in the control group (P = .8). Feedback from participants indicated that the support provided by the video role models was important and appreciated.\n\nABSTRACT.CONCLUSIONS:\n This study was not able to demonstrate a statistically significant effect of the complex video messaging mobile phone intervention compared with simple general health video messages via mobile phone. However, there was sufficient positive feedback about the ease of use of this novel intervention, and the support obtained by observing the role model video messages, to warrant further investigation.\n\nABSTRACT.TRIAL REGISTRATION:\n Australian New Zealand Clinical Trials Registry Number: ACTRN12606000476538; http://www.anzctr.org.au/trial_view.aspx?ID=81688 (Archived by WebCite at http://www.webcitation.org/5umMU4sZi)\n\nBODY.INTRODUCTION :\nWhile smoking prevalence has been declining in many countries [1,2] high prevalence rates are a cause for concern in developing countries [3-5], in disadvantaged or vulnerable populations [6-8], and in young people [8-10]. 
New Zealand Maori (the indigenous population of New Zealand) have particularly high smoking prevalence rates (40.4% of males and 49.7% of females aged 15-64 years [10]) and new interventions must be appropriate for this population group in New Zealand. Smoking quit rates are low even where intensive behavioral and pharmacological support is available [11], although most smokers who try to quit do so without extra assistance [12]. Providing more options for smoking cessation support is one strategy to try to encourage more quit attempts. Mobile phones have good potential as one option because they tend to be always with people, and messages can be sent directly to quitters wherever they are and at the most appropriate times (eg, for cravings or for usual cues to smoke). There is some evidence of more equitable access to mobile phones than to other communications services in developed countries [13,14] and rapid uptake in developing countries [15,16]. There is also emerging evidence that those with high health needs may use mobile phones more than those without [13,17,18]. Our successful text messaging smoking cessation program [19] was recently implemented as a national government-funded program in New Zealand [20]. In order to use new advances in mobile phone technology to continue to improve uptake and effectiveness, we proposed and developed an updated intervention (\"STUB IT\"). A randomized controlled trial was undertaken between November 2007 and August 2009 to determine whether a video-based smoking cessation intervention delivered via mobile phone was effective at increasing smoking cessation rates compared with a control group over a 6-month period. In this paper we describe this trial, and reflect on challenges faced in recruitment that undermined its capacity to adequately test the effectiveness of the intervention.\n\nBODY.METHODS:\nRecruitment was targeted at young adults (16-25 years) and particularly toward young Maori. The study was advertised extensively via radio, internet, mobile phone (to those who had signed up for such a service), paper-based and online magazines, Maori-specific media of all types, local and national newspapers, and media releases to national media outlets. Advertisements were placed in tertiary education institutions (via campus posters, student magazines, student websites, student health services, and student radio), primary health care services, smoking cessation services, large employer health promotion programs, and posters or leaflets at cafes, bars, and sports grounds. Participants were eligible if they were at least 16 years of age, smoked daily, and wanted to quit. Participants were required to have a mobile phone that was capable of receiving video messages. The video messages were sent as a text message with a universal resource locator (URL) address in the text. Participants highlighted the URL to trigger automatic downloading and playing of the video on the phone (Figure 1; see also Multimedia Appendix 1 and 2 for sample videos). Participants could return to the text message to replay the video if desired. This process does not require extremely high-end technology phones, but was available on most recent mobile phones. The video messages were made as small as possible (<300 kB) to allow the lowest-specification common phone to be able to access them. Due to a partnership with Vodafone New Zealand Ltd (one of only two mobile phone networks in New Zealand at the time), this whole process was free to participants. 
Figure 1Screenshot of the intervention Potential participants completed an online eligibility check and, if eligible, were advised to read the study information online (or it could be emailed or posted) and reply to a consent text message with the words \"I consent.\" Consenting participants were directed to complete baseline data collection on an online form. On submission of this information, computer randomization allocated participants to an intervention or control group, using stratified minimization for age (25 years and under, over 25 years), ethnicity (Maori, non-Maori), and level of nicotine dependence (time to first cigarette 30 minutes or less, more than 30 minutes). Both groups nominated a quit day (QD) on which they aimed to stop smoking that was between 1 and 3 weeks from randomization. Participants also nominated two time periods (in a 24-hour clock) during which they wished to receive the mobile phone messages. Those in the intervention group were also directed to an online brief description and photograph of the six role models (three of whom were Maori) and asked to select one person from whom they would receive messages (although they were able to change this later if desired). The steps in the development of the intervention have been described in detail elsewhere [21]. In brief, we drew on social cognitive theory [22] to inform the use of role models via short video messages providing observational learning. We hypothesized that this role modeling by \"ordinary\" young people would enhance self-efficacy to quit smoking and thereby increase the chances of a quit attempt being successful [23-25]. The video messages were filmed as video diaries during a quit attempt, with the role models discussing issues they had found difficult and the techniques and coping strategies they used to remain smoke-free. These vignettes were based on the role model's own story (all six role models were ex-smokers), plus theory and evidence-based behavior change techniques usually taught in cessation counseling (such as setting goals, being reminded of reasons for quitting, identifying triggers and cues to smoking, planning to manage or avoid triggers and cues, receiving positive reinforcement, and using social support). The intervention was arranged into a chronological schedule of mobile phone messages that included the role model videos, text messages (short message service; SMS), and other video messages (animations about reasons to stop smoking; and \"truth\" campaign mass media advertisements supplied by the American Centers for Disease Control and Prevention). Table 1 shows the number and type of messages in each phase, along with the duration of each phase. Table 1 Chronological sequence of mobile phone messages Phase Number of messages Timing and duration of phase Format of messages in each phase Countdown to QD a 1/day For 1 week prior to QD Role model videos and texts QD 3/day 1 day (QD) Role model videos and texts Intensive phase 3/day For 4 weeks post-QD Role model videos and texts Maintenance phase 1 every 2 days For 2 weeks after intensive phase SMS b messages, other mixed videos Maintenance continued 1 every 4 days For about 20 weeks until 6 months after randomization SMS messages, other mixed videos \n a Quit day. \n b Short message service, or text messages. 
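Taken at face value, the schedule in Table 1 implies an order-of-magnitude difference in contact intensity between the two arms. The tally below is an editor's approximation (the final phase is described only as about 20 weeks, and on-demand crave or relapse messages are excluded); it is not the study's delivery software.

```python
# Approximate message counts implied by Table 1 (editor's reading of the schedule).
schedule = [
    # (phase, messages per day, approximate days in phase)
    ("Countdown to QD",       1.0,   7),
    ("Quit day",              3.0,   1),
    ("Intensive phase",       3.0,  28),
    ("Maintenance phase",     0.5,  14),    # 1 every 2 days for 2 weeks
    ("Maintenance continued", 0.25, 140),   # 1 every 4 days for ~20 weeks
]
intervention_total = sum(rate * days for _, rate, days in schedule)
control_total = 26 * 7 / 14                 # 1 video every 2 weeks for ~6 months

print(f"Intervention: ~{intervention_total:.0f} scheduled messages")   # ~136
print(f"Control:      ~{control_total:.0f} messages")                  # ~13
```

On this reading the intervention delivered roughly ten times as many scheduled contacts as the control condition over the same 6 months.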
Additional features included a website for intervention group participants that allowed them to review video messages they had been sent (and rate them if desired), change their selected time periods, and change (or add to) their selected role model. Intervention group participants could also ask for extra support messages on demand by texting keywords to the study shortcode (four-digit number). Texting \"crave\" and the context (either \"stress,\" \"bored,\" or \"drinking\" – three common triggers for smoking in young adults) would result in the immediate automated sending of an appropriate video or text message on how to beat cravings within that context. Texting \"relapse\" would result in three messages over the next 90 minutes to motivate to keep going with the quit attempt and suggest ways of getting extra support. The control group participants received one video message every 2 weeks with general health messages and reminders about the study for 6 months. The primary outcome for the study was continuous abstinence as defined by the Russell standard [26], which allows up to five cigarettes over 6 months after QD. Other outcomes were 7-day point prevalence abstinence; confidence in ability to quit/stay quit (as a percentage on a scale from 0%, not confident, to 100%, fully confident); number of quit attempts and use of nicotine replacement therapies during the study period; participant satisfaction with aspects of the program (intervention group only); and any motor vehicle accidents that occurred while driving and using a mobile phone during the study period (as possible adverse events). Smoking status was verified on a random sample of 10% of eligible participants prior to randomization. Verification of quitting status was attempted in all participants reporting continuous abstinence at 6 months using salivary cotinine reading on a mailed-out and returned NicAlert (Nymox Pharmaceutical Corporation, Hasbrouck Heights, NJ, USA) test-strip pack. Salivary cotinine has a half-life of 15-40 hours and is able to distinguish smokers from nonsmokers using a cutoff of 10 ng/mL of cotinine (sensitivity 93%, specificity 95%, and a positive predictive value of 95%) [27]. Two staff members independently read the NicAlert test strips. The nature of the intervention ensured the study could only be single blinded – that is, participants were aware of which group they were allocated to. However, most data were collected via web-based forms completed by participants, and researchers involved in data collection, particularly outcome assessment, were blind to allocation. Initial calculations indicated that a target sample size of 1300 participants would detect a relative risk of 1.75 for a control group 6-month quit rate of 8.5% (intervention group quit rate of 15%), with 90% power at P = .05. This included a loss to follow-up of 20%. All statistical analyses were performed using SAS version 9.1.3 (SAS Institute Inc, Cary, NC, USA), all statistical tests were two-tailed, and a 5% significance level was maintained throughout the analyses. The main analyses were based on the intention-to-treat principle as recommended for cessation studies [26], where participants lost to follow-up were considered not to have quit at follow-up. Simple chi-square analyses compared the proportion quit at different stages of follow-up between the intervention groups. \n\nBODY.RESULTS :\nParticipants were recruited into the study between November 2007 and February 2009, and this proved much more difficult than expected. 
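The 1300-participant target mentioned above can be reproduced with the standard normal-approximation formula for comparing two independent proportions. The snippet below is an illustrative check by the editor, assuming 6-month quit rates of 8.5% versus 15%, 90% power, a two-sided alpha of 0.05 and inflation for 20% loss to follow-up; the authors' own calculation may have used a slightly different approximation.

```python
import math
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.90):
    """Normal-approximation sample size per arm for a two-proportion comparison."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return numerator / (p1 - p2) ** 2

n = n_per_group(0.085, 0.15)                  # control vs intervention quit rates
randomised = 2 * math.ceil(n)                 # both arms, before attrition
target = math.ceil(randomised / (1 - 0.20))   # allow for 20% loss to follow-up
print(math.ceil(n), randomised, target)       # 514 per arm, 1028, 1285 (quoted as 1300)
```

The 226 participants eventually randomised fall well short of this figure, which is why the authors describe the trial as substantially underpowered.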
We attempted multiple sequential \"waves\" of recruitment efforts via new and multiple sources. However, each wave did little to change the overall recruitment rate. The study catchment area was also increased sequentially from the Auckland region (population approximately 1.4 million), to the Northern Region of the North Island (population approximately 2.3 million), to the whole of New Zealand (population 4.1 million). Initial incentives of monthly prize draws of new third-generation (3G) phones were deemed insufficient to attract new participants, and reimbursements to all participants for their time and participation were later added. Due to these problems, and the costs involved in recruitment, we decided to close the study to recruitment with 226 randomized participants. Figure 2 shows the numbers of registrants, randomized participants, and those completing follow-up. Due to the nature of online data collection at follow-up points, it was possible for participants to enter some follow-up data but not complete the entire form. The follow-up numbers presented in the figure are based on those providing the primary outcome data (at 6 months) and the main smoking outcomes data (at 4 and 12 weeks). Figure 2Consort flowchart for the randomized controlled study of STUB IT \n Table 2 shows the baseline characteristics of randomized participants. Due to the targeted recruitment strategies, the mean age of participants was 27 years; although there was no upper age limit and the oldest person in the study was 63 years old. The majority of participants were of New Zealand European ethnicity, with nearly 24% (54/226) of participants self-selecting Maori ethnicity. Baseline smoking characteristics were similar in the two groups, with some indication that this was a highly addicted cohort due to Hooked on Nicotine Checklist mean scores of 8 (SD 1.9) out of 10 [28]. Table 2 Baseline characteristics of randomized participants, n (%) a \n Intervention (n = 110) Control (n = 116) Mean (SD) age, years 27.5 (9.5) 26.6 (7.8) Female 58 (52.7) 49 (42.2) Ethnicity New Zealand European 55 (50.0) 63 (54.3) Maori 24 (21.8) 30 (25.9) Pacific 12 (10.9) 5 (4.3) Asian 10 (9.1) 13 (11.2) Other 6 (5.5) 5 (4.3) Missing 3 (2.7) 0 (0) Total income in previous 12 months Less than NZ$30,000 53 (48.2) 51 (44.0) NZ$30,001-60,000 35 (31.8) 40 (34.5) Over NZ$60,000 7 (6.4) 12 (10.3) Don’t wish to answer 15 (13.6) 13 (11.2) How soon after waking do you smoke? Within 5 minutes 26 (23.6) 27 (23.3) 6-30 minutes 45 (40.9) 52 (44.8) 31-60 minutes 21 (19.1) 24 (20.7) After 60 minutes 18 (16.4) 13 (11.2) Have you ever tried to quit smoking but couldn’t? Yes 102 (92.7) 104 (89.7) Do you smoke now because it is really hard to quit? Yes 75 (68.2) 82 (70.7) Have you ever felt addicted to tobacco? Yes 98 (89.1) 107 (92.2) Mean (SD) Hooked on Nicotine Checklist (HONC) score 7.99 (2.11) 8.03 (1.68) Mean (SD) confidence in being able to quit this time % 62.4 (22.0) 66.5 (21.8) \n a Unless otherwise stated. \n Table 3 reports continuous abstinence rates (the primary outcome). Intention-to-treat continuous abstinence at 6 months was 26.4% (29/110) in the intervention group and 27.6% (32/116 in the control group (P = .8). Of the 61 participants reporting continuous abstinence at 6 months, 10 were either noncontactable or stated they had relapsed since the end of the study period (when they had claimed to have quit) and therefore could not undergo verification of quitting status. 
The remaining 51 were sent NicAlert test-strip packs and were contacted repeatedly to return the strips. Fourteen quitters in the intervention group (48% of 29) returned the strip and seven (24%) were confirmed as nonsmokers. Fifteen quitters in the control group (47% of 32) returned the strip and 11 (31%) were confirmed as nonsmokers. Table 3 Continuous abstinence from quit day to 6 months, n (%) Have you smoked tobacco at all since quit day? Intervention Control P -value a Responders-only analysis .7 Not a single puff or between 1 and 5 cigarettes 29 (38.7) 32 (35.6) More than 5 cigarettes 46 (61.3) 58 (64.4) Missing data 35 26 Intention - to - treat analysis . 8 Not a single puff or between 1 and 5 cigarettes 29 (26.4) 32 (27.6) More than 5 cigarettes or missing data 81 (73.6) 84 (72.4) \n a \n P -value for chi-square test comparing groups. No significant difference was found between the groups in the intention-to-treat point prevalence abstinence (no smoking at all in the past 7 days), which was recorded at three time points and is shown in Table 4. Table 4 Point prevalence abstinence at 4 weeks, 12 weeks, and 6 months, n (%) Have you smoked at all in the past 7 days? Intervention Control P -value a 4 weeks .8 Not a single puff 12 (10.9) 14 (12.1) Yes or missing data 98 (89.1) 102 (87.9) 12 weeks .3 Not a single puff 30 (27.3) 25 (21.6) Yes or missing data 80 (72.7) 91 (78.4) 6 months .99 Not a single puff 25 (22.7) 26 (22.4) Yes or missing data 85 (77.3) 88 (77.6) \n a \n P -value for chi-square test comparing groups. At 6 months those who reported quitting were asked to rate their confidence in being able to stay quit, and those who had relapsed were asked to rate their confidence in being able to quit again. There were no significant differences between the intervention and control group mean scores at any of these points (data not shown). At 6 months all participants were asked how many quit attempts they had made during the study period. In the intervention group 7/73 respondents (9.6%) and in the control group 4/81 (4.9%) (P = .3) stated they did not attempt to quit at all, but the majority of respondents in both groups made multiple quit attempts. In the intervention group 17 of 69 respondents (25%) and in the control group 26 of 68 respondents (38%) (P = .2) had used pharmacological quitting support (nicotine patches, nicotine gum, or nortryptiline) at any stage in the 6-month study period. Participants in the intervention group were asked for their feedback on the program. In general the majority of responders stated they liked the video messages from quitters, and appeared to appreciate the frequency and timing of messages (Table 5, 6). Table 5 Intervention group satisfaction with the program, n=67 (%) a \n Which aspects did you… like? dislike? no comment did not use That I would relate to quitters 46 (69) 3 (4) 13 (19) 5 (7) What quitter has to say 44 (66) 6 (9) 10 (15) 7 (10) Video messages from quitters 43 (64) 6 (9) 9 (13) 9 (13) The timing of messages 41 (61) 15 (22) 10 (15) 1 (1) Receiving lots of messages 39 (58) 20 (30) 6 (9) 2 (3) The website 34 (51) 3 (4) 16 (24) 14 (21) Crave messages 32 (48) 8 (12) 8 (12) 19 (28) Antitobacco industry messages 25 (37) 13 (19) 10 (15) 19 (28) Animations 23 (34) 3 (4) 8 (12) 33 (49) \n a Missing data have been excluded. Table 6 Aspects of the program that aided cessation in the intervention group Which aspects helped you to stop smoking even if you relapsed later? 
Yes Watching someone like me go through the quitting process 59 (88) Being supported to feel like I could do it 55 (86) Feeling like I belonged/like others were going through same thing 52 (81) Things the people in the video clips said 50 (76) Getting messages at the right times 47 (75) The free stuff 44 (69) It was fun 39 (61) Made me get support from my friends or family 39 (60) The website/other people videos 35 (57) Realizing I had been manipulated by tobacco industry 31 (48) Messages/games/whatever distracting me from cravings 30 (47) Crave messages 29 (45) Free text answers to what they liked most about the program could be divided into three groups: those who reported something about feeling supported (29/54, eg, from the role model, because they felt part of a group, because others were going through it too); those whose comments related to the program (11/54, eg, timing of messages, constant messages, nonintrusiveness, use of technology); and those who said all of it (5/59). When asked what they disliked most about the program, 20/49 said they disliked nothing, six complained of some sort of technical issue, and seven did not feel the content was right or did not relate to the models. Five said there were too many messages, one said the messages reminded them to smoke, and one had the (false) perception they were being charged for messages. The most common suggestions to improve the program were around having more personal (human) contact, individually or via support groups or internet social networking. A report from the intervention program confirmed that 29/110 participants (26.4%) had used the text \"crave\" function and 18/110 (16.4%) the text \"relapse\" function. Equal numbers of participants in each group (n = 4) reported having a motor vehicle accident during the study period where the participant was the driver. In the control group one such accident occurred while the participant was using their mobile phone, one within 5 minutes of receiving a message, and two while the participant was smoking, whereas none of the accidents in the intervention group were reported as being temporally related to mobile phone use or smoking.\n\nBODY.DISCUSSION :\nThis study is the first to have developed and trialed a smoking cessation intervention delivered via video messaging on mobile phones. We found no significant differences in quit rates between the intervention and control groups (with trends in different directions depending on time point and type of analysis). However, the trial was substantially underpowered due to our failure to recruit sufficient participants to reach the desired sample size and the higher than expected self-reported control group quit rate. In fact, quit rates in both groups were high compared to New Zealand's quitline quit rates of 17% (6-month continuous abstinence) and 10% in the 18- to 24-year age group, but similar to those reported in our previous study of a text messaging cessation intervention [19]. Therefore, it is possible that with adequate power, an effect may have been found. The strengths of the study include a study design in accordance with CONSORT guidelines and the strict definitions and analysis of smoking abstinence outcomes. We also used theory on which to base the intervention: this has been shown to be important in technology-based health behavior change [29] and ensures the intentions and drivers in the development of the intervention are clear and replicable. 
Indeed, participants commented positively on the use of role models as a means of support in their quitting attempts. The obvious limitation of the study is the suboptimal recruitment. There are several potential reasons for this, which present challenges to be addressed in future trials. First, our recruitment efforts were targeted at adolescents (16 years and over) and young adults. We found that, despite indicating their interest in quitting, most young people were not actually ready to commit to a cessation intervention. This has been demonstrated elsewhere in focus groups and surveys of young people [30-32]. The recent updated Cochrane review of smoking cessation interventions for young people [33] commented that many of the included studies were underpowered, with only 5032 participants from 24 studies. Only two of these studies recruited directly from the community as we did. Lipkus and colleagues randomized 402 participants despite approaching nearly 40,000 young people in shopping malls [34], while Patten et al required 42 months to randomize 139 participants [35]. Recruitment to youth smoking cessation services has also been shown to be problematic [36], as has recruitment of youth to other types of research [37]. Second, the costs of messaging and advanced technology may have proved a barrier for some. At the time of recruitment, New Zealand mobile phone data charges (or anything other than SMS and voice calling) were expensive. We spoke to two participants who were wary of being charged (despite being advised the program was free) and there may have been more who did not register for this reason. Also many people were unaware whether their mobile phone could receive video messages. These factors may have dissuaded people even registering their interest and therefore we have no information on their relative importance in our recruitment. However, if poor recruitment was related to a wariness of new multimedia messaging, we feel that this will have been short-lived: in our current trial of a multimedia mobile phone program to prevent adolescent depression we have recruited 1200 participants over 30 school weeks. Thirdly, plans to incentivize participation were hampered by several factors. Monetary incentives are considered to be effective in encouraging participation of adolescents in research [38], so we planned to offer free data or top-ups to participants' mobile phone accounts. After commencing the study this was deemed not technically possible, so instead we instituted monthly prize draws of new 3G mobile phones. However, the ethics committee did not approve promotional material that advertised the prize draws. Nevertheless, when recruitment was found to be falling behind target, we obtained ethics approval to provide participants with vouchers (for a mobile phone, the supermarket, or gasoline) as reimbursements for their time, and recruitment rose in response but was not sufficient to make a large difference. Finally, and somewhat ironically, the text messaging cessation program trialed in our own earlier study [19] may have provided competition with our trial: the tx2quit program went live in New Zealand in June 2008 with national promotion by Quitline, and recruited approximately 4000 participants in the following 12 months [20]. 
Mobile phones are increasingly being used globally in health services as a means of more frequent and convenient contacts with health providers [39,40], remote monitoring of progress [41,42], and to reduce wastage of scarce health resources [40,43]. There are several aspects of mobile phones that also make them a valuable component of healthy behavior change support, such as being with people in times of need, providing two-way communications for help on demand, allowing proactive reminders of motivations to change behavior, providing social support from people's own networks, and providing a long-term means of support [44]. A Cochrane systematic review of the use of mobile phones in smoking cessation support programs [45] demonstrated short-term effectiveness of mobile phone-only programs and long-term effectiveness of a mobile phone and internet program. This study adds to this body of knowledge by demonstrating the feasibility and participant appreciation of video messages via mobile phones to provide observational learning and support for healthy behavior change. It is also of note that participants were happy to complete research procedures such as consent and data collection by mobile phone. Indeed we achieved higher response rates to text message questions (217/226 or 96% response rate to a question about confidence at QD and 170/226 or 75% response rate to a smoking status question at 12 weeks post-QD) than to our online data collection forms (despite text message reminders to complete them). In conclusion, this trial struggled to recruit participants, in particular young adults who wanted to quit smoking. This may explain the failure to show an effect of the intervention, or it may be that the complex theory-based intervention is no more effective than simple less-frequent video messages from researchers. However, there was sufficient positive feedback about the support obtained by observing the role models in the program to warrant further investigation in this area. Further research should explore the effect of this role model-based mobile phone smoking cessation intervention for older adults – a group that are perhaps more serious about stopping smoking and are becoming higher users of newer mobile phone technology.\n\n**Question:** Compared to set a quit date and received a general health video message sent to their phone every 2 weeks. what was the result of an automated package of video and text messages over 6 months that was tailored to self-selected quit date, role model, and timing of messages. Extra messages were available on demand to beat cravings and address lapses. on Continuous abstinence at 6 months?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
537
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Antibiotic Prophylaxis Using Third Generation Cephalosporins Can Reduce the Risk of Early Rebleeding in the First Acute Gastroesophageal Variceal Hemorrhage: A Prospective Randomized Study\n\n ABSTRACT:\nBacterial infection may be a critical trigger for variceal bleeding. Antibiotic prophylaxis can prevent rebleeding in patients with acute gastroesophageal variceal bleeding (GEVB). The aim of the study was to compare prophylactic third generation cephalosporins with on-demand antibiotics for the prevention of gastroesophageal variceal rebleeding. In a prospective trial, patients with the first acute GEVB were randomly assigned to receive prophylactic antibiotics (intravenous cefotaxime 2 g q 8 hr for 7 days, prophylactic antibiotics group) or to receive the same antibiotics only when infection became evident (on-demand group). Sixty-two patients in the prophylactic group and 58 patients in the on-demand group were included for analysis. Antibiotic prophylaxis decreased infection (3.2% vs. 15.5%, p=0.026). The actuarial rebleeding rate in the prophylactic group was significantly lower than that in the ondemand group (33.9% vs. 62.1%, p=0.004). The difference of rebleeding rate was mostly due to early rebleeding within 6 weeks (4.8% vs. 20.7%, p=0.012). On multivariate analysis, antibiotic prophylaxis (relative hazard: 0.248, 95% confidence interval (CI): 0.067-0.919, p=0.037) and bacterial infection (relative hazard: 3.901, 95% CI: 1.053-14.448, p=0.042) were two independent determinants of early rebleeding. In conclusion, antibiotic prophylaxis using third generation cephalosporins can prevent bacterial infection and early rebleeding in patients with the first acute GEVB.\n\nBODY.INTRODUCTION:\nGastroesophageal variceal bleeding (GEVB) is the most serious complication of portal hypertension and represents the leading cause of death in patients with liver cirrhosis. The patients who survive the initial episode of GEVB have a risk of recurrent bleeding approaching 80% at 2 yr (1). Failure to control bleeding and early rebleeding are the most important prognostic factors influencing the 6-week outcome of these patients (2). Rebleeding is associated with an increased risk of exsanguinations, development of liver failure, encephalopathy, and sepsis which contribute to mortality (3). Over the past two decades, many new treatment modalities have been introduced to improve the management of variceal bleeding, including endoscopic injection sclerotherapy (EIS) and variceal ligation (EVL), and new vasoactive agents such as terlipressin and somatostatin (4). Among them, EIS has been replaced almost universally by EVL, because EVL eradicates varices and provides a lower variceal rebleeding rate with fewer secondary effects than EIS does (5). However, the rebleeding rate following endoscopic treatment is still high; at around 25-50% (6). 
It is therefore important to define how to further reduce the rebleeding rate. Bacterial infections are frequently associated with upper gastrointestinal bleeding in cirrhotic patients (7). Bacterial infections are more common in cirrhotic patients with acute GEVB than those admitted to hospital with other forms of decompensation, such as encephalopathy (8). Infection may favour variceal bleeding by increasing sinusoidal pressure and altering hemostasis (9). In fact, endotoxemia secondary to bacterial infection may be the critical trigger for variceal bleeding as it produces a wide series of effects that may predispose the cirrhotic patient to bleeding (10). A recent randomized controlled clinical trial has documented the value of quinolone use in preventing rebleeding (6). Prophylactic quinolone can thus further reduce the rebleeding rate in cirrhotic patients with GEVB. However, the use of prophylactic antibiotics can lead to antibiotic resistance with potentially disastrous consequences. It is necessary to prove the benefit of other antibiotics including third generation cephalosporins in preventing rebleeding in cirrhotic patients with GEVB. Therefore, the aim of this study was to compare prophylactic third generation cephalosporins with on-demand antibiotics for the prevention of gastroesophageal variceal rebleeding.\n\nBODY.MATERIALS AND METHODS.PATIENTS:\nFrom June 2000 to December 2004, all patients with cirrhosis admitted with upper gastrointestinal bleeding via our hospital emergency room underwent endoscopy within 12 hr of admission. Male or female patients aged over 18 yr were eligible for inclusion in the study after fulfilling the following criteria: 1) diagnosis of cirrhosis on the basis of previous liver biopsy or clinical, biochemical, and radiologic findings of hepatic failure and portal hypertension; 2) bleeding from esophageal varices or gastric varices; and 3) no signs of infection at admission. The severity of cirrhosis was classified according to Child-Pugh's score (11). GEVB was diagnosed when the emergency endoscopy showed any of the following signs: 1) active bleeding from esophageal varices or gastric varices; 2) stigmata of recent hemorrhage over varices (adherent blood clots); or 3) when there was no other cause of upper gastrointestinal bleeding but fresh blood was found in the stomach. Possible complications of endoscopic treatment were discussed with the patients and their relatives, and written informed consent was obtained before entry into the trial. Patients were excluded from the study if they met the following criteria. First, the patient had a past history of GEVB, or surgical or endoscopic treatment of gastroesophageal varices. Second, the patient received antibiotics within the last 2 weeks. Third, the patient had a terminal illness of any major organ system, or non hepatic malignancy. Forth, the patient had any other causes of upper gastrointestinal bleeding. The diagnosis of hepatocellular carcinoma (HCC) was based on liver biopsy or two coincidental imaging studies as well as one imaging study associated with alpha fetoprotein (AFP) more than 400 ng/mL (12). The Ethics Committee of Chonnam National University Hospital approved the treatment protocol.\n\nBODY.MATERIALS AND METHODS.RANDOMIZATION:\nRandomization was performed at the time of the therapeutic endoscopy by an investigator after patients met clinical and laboratory entry criteria, lacked exclusions, and gave written informed consent for entry into this study. 
The allocation of patients to treatment was done by drawing sequentially numbered envelopes, each containing a previously determined, randomly selected assignment based on a table of random numbers. Patients in the prophylactic group received antibiotics treatment after randomization with intravenous cefotaxime 2 gram q 8 hr for 7 days. Patients in the on-demand group received antibiotics only when infection was suspected or established. Antibiotics were changed according to the antibiotic sensitivity profile of cultured microorganisms.\n\nBODY.MATERIALS AND METHODS.INFECTION ASSESSMENT:\nA physical examination, complete blood cell count, chest radiography, urine analysis and culture, blood culture, and ascitic fluid neutrophil count with culture (in patients with ascites) were routinely carried out before randomization. Patients were excluded when the initial bacteriologic examination turned out positive finding. If a new infection was suspected, the same procedures were carried out to assess infection. New infections were suspected when there was fever (>38°C), hypothermia (<36°C), unexplained hemodynamic instability, tachypnea, new onset of chest symptoms, dysuria, deterioration of renal function, bowel habit changes, abdominal pain, abdominal distention, as well as alteration of mental state (6). Respiratory infections were diagnosed by clinical symptoms and signs and positive chest radiography findings. Urinary tract infections were diagnosed by the positive urine culture of ≥105 colonies/mL and associated clinical pictures. The diagnosis of bacteremia was based on positive blood culture and clinical signs or symptoms of infection without other recognized causes. The diagnosis of spontaneous bacterial peritonitis was based on ≥250 neutrophils/μL in ascitic fluid (13). Patients without any identified infection source but with fever >38°C and leukocytosis >11,000/μL with neutrophilia were considered as having possible infections and received on demand antibiotics. In analyzing the incidence of infection and determining the effect of antibiotic prophylaxis, only infectious episodes occurring during the first hospitalization were considered. Therefore, the infection rate was compared by number of events in this period.\n\nBODY.MATERIALS AND METHODS.ENDOSCOPIC TREATMENT PROCEDURES:\nBefore endoscopic treatment, octreotide was used. If active bleeding was found during endoscopy, endoscopic treatment was performed immediately. EVL was performed for esophageal varices, and EIS was performed for gastric varices. Two experienced therapeutic endoscopists performed the diagnostic and therapeutic endoscopic procedures. They had 4 yr' experience of standard endoscopy. They did not participate in the postprocedure care of the patients, which was conducted by other physicians. Endoscopy was performed with a standard upper endoscope (Olympus GIF-XQ240, Olympus Optical Co., Ltd., Tokyo, Japan). After endoscopic treatment, octreotide was used for 5 days. EVL was performed by using a varioligator kit with a single-shot device (Top Corp., Tokyo, Japan) and a flexible overtube or multiband ligators (Wilson-Cook Medical, Winston-Salem, NC, U.S.A.). Size of esophageal varices was graded according to Conn's classification (14). Grade I-visible only during one phase of respiration/performance of Valsalva maneuver. Grade II-visible during both phases of respiration. Grade III-3-6 mm. Grade IV-varices of >6 mm. 
EVL was performed biweekly for the first 6 weeks until the varices were obliterated or reduced to Grade I size and could not be banded any further. Follow-up endoscopy was performed every 3 months and, if unremarkable, was moved to every 6 months. EIS was performed by using intravariceal injection with the 1:1 mixture of 0.5 mL N-butyl-2-cyanoacrylate (Histoacryl blue, Braun-Melsungen, Germany) and 0.5 mL Lipiodol (Guerbet Laboratory, Aulnay-Sous-Bris, France) in each shot. Rebleeding was defined as one or more of the ongoing bleeding signs including fresh hematemesis, hematochezia, fresh blood aspirated via a nasogastric tube, instability of vital signs, or a reduction of hemoglobin by more than 2 g/dL within 24 hr after initial hemostasis. When rebleeding was suspected, immediate endoscopy was performed. If active bleeding or a fresh blood clot was found at the varices, and if fresh blood was found in the stomach without any other causes of upper gastrointestinal bleeding, rebleeding was confirmed. Bleeding esophageal varices were ligated and bleeding gastric varices were injected with the previously mentioned tissue glue again. Rebleeding within 6 weeks of enrollment after initial control of active bleeding was defined as early rebleeding. Treatment failure was defined as a failure to control active bleeding after two attempts of endoscopic treatment, rebleeding more than twice, or bleeding-related death. Rebleeding index for each patient was calculated by dividing the months of follow-up by the number of rebleeding episodes plus one (6).\n\nBODY.MATERIALS AND METHODS.STATISTICAL ANALYSIS:\nRebleeding rate as a primary outcome was compared between two groups, and secondary outcomes such as rebleeding index, treatment failure, bleeding related death, infection rate, transfusion requirements, and admission duration, were also compared between two groups. Quantitative data were summarized as mean±standard deviation. The Student t test was utilized to compare the mean values of continuous variables, and the chi-square test with Yate's correction or Fisher exact test was utilized for the comparison of discrete variables. Kaplan-Meier analysis with the log-rank test was used to compare differences of actuarial probability of rebleeding and survival between two groups. Univariate analysis and stepwise multivariate analysis were performed to assess the potential risk factors of early rebleeding using the Cox proportion hazards regression. A p value of less than 0.05 was accepted as statistically significant. The analysis was performed with statistical software package (SPSS 13.0 version for Windows, SPSS, Chicago, IL, U.S.A.). This study hypothesized a reduction of rebleeding rate from 45% to 20% by using prophylactic antibiotics (6). According to the sample size calculation, the study would require 54 patients in each group. The type I error and type II error were set to 0.05 and 0.2, respectively.\n\nBODY.RESULTS:\nDuring the study period, 152 patients with the first acute GEVB were recruited and randomized. Eight patients in the prophylactic group and 7 patients in the on-demand group were excluded from analysis due to occult infections. Six patients in the prophylactic group and 11 patients in the on-demand group were excluded due to their refusal to continue in the study. Therefore, 62 patients in the prophylactic group and 58 patients in the on-demand group were included for analysis. Data regarding the clinical characteristics of the patients at entry are outlined in Table 1. 
There were no significant differences between two groups with respect to age, gender, etiology, association of HCC, Child-Pugh's score, severity of bleeding, endoscopic characteristics, and period of follow-up (Table 1).\n\nBODY.RESULTS.INFECTION OUTCOMES AND BACTERIOLOGY:\nSummary of the infection sources and bacteriology is outlined in Table 2. The incidence of bacterial infection was significantly lower in patient receiving antibiotic prophylaxis (2/62, 3.2% vs. 9/58, 15.5%, p=0.026). Bacteremia and spontaneous bacterial peritonitis were the most common sources of infection. Enteric bacteria were more frequently identified in patients without antibiotic prophylaxis (0/62, 0% vs. 5/58, 8.6%, p=0.018). There were no significant side effects in antibiotic prophylactic group.\n\nBODY.RESULTS.HEMOSTATIC OUTCOMES:\nSummary of hemostatic outcome data is outlined in Table 3. The rebleeding rate in the prophylactic group was significantly lower than that in the on-demand group (21/62, 33.9% vs. 36/58, 62.1%, p=0.004). The difference of rebleeding was mostly due to early rebleeding within 6 weeks (3/62, 4.8% vs. 12/58, 20.7%, p=0.012). The cumulative total rebleeding rate and early rebleeding rate were also higher in the on-demand group (Fig. 1, 2). However, there was no significant difference in cumulative late rebleeding rate between the two groups (Fig. 3). The rebleeding sources were not different between the two groups (Table 2). The early rebleeding rate in the infected patients was significantly higher than that in the noninfected patients (4/11, 36.4% vs. 11/109, 10.1%, p=0.031). However, there was no difference in total rebleeding rate between the infected and the noninfected (6/11, 54.5% vs. 52/109, 47.7%, p=0.760). The transfusion requirement was significantly larger in on-demand group than that in prophylactic group (p=0.002). There were no differences in rebleeding index, treatment failure, and duration of hospital stay between the two groups. Univariate analysis showed the early rebleeding risk significantly linked to antibiotic prophylaxis and bacterial infection (Table 4). On multivariate analysis, antibiotic prophylaxis (relative hazard: 0.248, 95% confidence interval (CI): 0.067-0.919, p=0.037) and bacterial infection (relative hazard: 3.901, 95% CI: 1.053-14.448, p=0.042) were two independent determinants of early rebleeding (Table 5). Univariate analysis showed the late rebleeding risk significantly linked to alcoholics and presence of HCC (Table 6). On multivariate analysis, alcoholics (relative hazard: 1.968, 95% CI: 1.133-3.502, p=0.016) and association with hepatocellular carcinoma (relative hazard: 1.904, 95% CI: 1.035-3.502, p=0.039) were two independent factors predictive of late rebleeding.\n\nBODY.RESULTS.MORTALITY AND SURVIVAL:\nTwenty patients and 24 patients died in the prophylactic and on-demand group, respectively. Total mortality and 30-day mortality were not different between the two groups (Table 7). The overall rate of survival was similar between the two groups (Fig. 4). Univariate analysis showed that the survival was significantly related to the presence of HCC, Child-Pugh's score, and bacterial infection (Table 8). 
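Two of the categorical comparisons reported above can be checked directly from the 2 × 2 counts. The snippet below is the editor's illustration, not the authors' SPSS analysis; the Methods allow either Fisher's exact test or Yates-corrected chi-square for discrete variables, and the pairing used here is the one that reproduces the reported P values.

```python
from scipy.stats import fisher_exact, chi2_contingency

# Bacterial infection during first admission: 2/62 prophylaxis vs 9/58 on-demand.
_, p_infection = fisher_exact([[2, 60], [9, 49]])            # two-sided
print(f"infection:  p = {p_infection:.3f}")                  # ~0.026

# Total rebleeding over follow-up: 21/62 prophylaxis vs 36/58 on-demand.
chi2, p_rebleeding, dof, _ = chi2_contingency([[21, 41], [36, 22]])  # Yates-corrected
print(f"rebleeding: chi2 = {chi2:.2f}, p = {p_rebleeding:.3f}")      # ~0.004
```

The early rebleeding, Cox regression and survival findings cannot be reproduced this way because they depend on the underlying time-to-event data.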
On multivariate analysis, presence of HCC (relative hazard: 4.134, 95% CI: 2.261-7.560, p<0.001) and Child-Pugh's score (relative hazard: 1.372, 95% CI: 1.173-1.603, p<0.001) were two independent risk factors determining survival.\n\nBODY.DISCUSSION:\nIn cirrhotic patients, there is a predisposition to intestinal bacterial overgrowth, intestinal dysmotility, and increased intestinal permeability, all leading to an increase in bacterial translocation. Bacterial translocation is the probable source of bacterial byproducts such as endotoxin which can cause an increase in portal pressure, impairment of liver function, and worsening of hemostasis (10). Endotoxemia secondary to bacterial infection may indeed be the critical trigger for variceal bleeding (15). Norfloxacin, ciprofloxacin, and ofloxacin have all been used with these indications (6, 16-21). According to our knowledge, previous studies have never used single third generation cephalosporin for antibiotic prophylaxis for gastrointestinal bleeding. For this reason, third generation cephalosporin can also be used to prevent bacterial infection and rebleeding in cirrhotic patients with an antibacterial resistance after a long term quinolone prophylaxis. In our study, third generation cephalosporin was used in order to prevent bacterial infection and rebleeding not only because it is safe and efficacious for enteric Gram-negative bacteria, which are the most common causative organisms in cirrhotic patients with acute gastrointestinal bleeding (16-20), but it also has a benefit of covering Gram positive bacteria. The benefit of third generation cephalosporins for preventing early rebleeding in cirrhotic patients with GEVB by decreasing bacterial infection is proved in our study. The prophylactic effect may not sustain over six weeks. However, the period of the greatest risk of early rebleeding is within the first 48 hr after admission (22). The use of early short term antibiotics is very effective in preventing early rebleeding in cirrhotic patients. In the present study, univariate analysis showed the early rebleeding risk significantly linked to antibiotic prophylaxis and bacterial infection. And, antibiotic prophylaxis (relative hazard: 0.248, 95% CI: 0.067-0.919, p=0.037) and bacterial infection (relative hazard: 3.901, 95% CI: 1.053-14.448, p=0.042) were two independent determinants of early rebleeding by the multivariate analysis. The results suggest that bacterial infection produces a wide series of effects that may predispose the cirrhotic patient to bleeding (10). Therefore, the effective antibiotic prophylaxis should be considered as an essential treatment to prevent early rebleeding. In our study, total rebleeding rates were higher than those of previous prospective randomized study (47% vs. 32%) (6). There are several possible explanations for these differences in the rebleeding rates. First, this may be because of a difference in the clinical characteristics of the study population, including habitual alcohol drinking. In our study, alcoholism was the most common etiology of liver cirrhosis compared with other study. Second, the difference in the rebleeding rates may stem from the type of antibiotics (quinolone vs. cephalosporin). In a recent survey, 26% of spontaneous bacterial peritonitis episodes were caused by quinolone resistant Gram negative bacilli over a two year period, related to long term treatment with quinolone (23). Fortunately, quinolone resistant E coli are still sensitive to third generation cephalosporins (9). 
In addition, there is a substantially increased likelihood of infections from Gram positive bacteria in patients who received quinolone prophylaxis (24). Finally, there was a difference in total follow-up periods between the studies. The total follow-up periods in our study (mean, 22 months) were longer than those in the other study (mean, 9 months) (6). Patients who survived after an initial episode have a risk of rebleeding rate approaching 80% in 2 yr (1). The risk of late rebleeding (more than 6 weeks after the initial episode) is related to such factors as continued alcohol consumption, variceal size, renal failure, degree of liver failure, and presence of HCC (2). Alcohol consumption continues to influence prognosis even after cirrhosis has developed. Patients with clinically compensated cirrhosis who become abstinent have a 90% chance of surviving for 5 yr. In contrast, if these patients continue to drink, their chance of survival falls to about 70% (25). In our study, continued alcohol drinking and the presence of HCC were the most important determinants of the late rebleeding. All alcoholic patients with variceal rebleeding continued their habitual alcohol consumption. Accordingly, there was a trend of more episodes of rebleeding in cirrhotic patients after longer follow-up period without correction of this risk factor. In order to lower the risk of late rebleeding, abstinence of alcohol and effective treatment of HCC should be encouraged. Although the effect of short-term prophylactic antibiotics in patients with GEVB is proved by the reduction of bacterial infection and early rebleeding rate, these beneficial effects are not reflected in terms of mortality and survival in this study. The lack of influence of antibiotic prophylaxis on mortality is likely because of infection is not an independent predictive factor for survival (6). The small impact of rebleeding on survival is possibly due to the fact that most rebleeding episodes can be further controlled by repeated endoscopic treatments (6). Furthermore, on multivariate analysis, presence of HCC (relative hazard: 4.134, 95% CI: 2.261-7.560, p<0.001) and Child-Pugh's score (relative hazard: 1.372, 95% CI: 1.173-1.603, p<0.001) were the only two independent risk factors determining survival in the present study. Actually, most patients died of hepatic failure or multiorgan failure associated with decreased residual liver function and HCC. However, a recent study reported that in-hospital mortality of patients with cirrhosis and acute variceal bleeding has greatly decreased over the past two decades, in concurrence with an early and combined use of pharmacological and endoscopic therapies and short-term antibiotic prophylaxis (26). The use of prophylactic antibiotics decreased the rate of bacterial infections in randomized controlled trials, and a meta-analysis showed that it was associated with improved survival (27). It warrants larger studies to confirm this benefit of antibiotic prophylaxis on mortality. In conclusion, antibiotic prophylaxis with third generation cephalosporins can prevent bacterial infection and early rebleeding in patients with the first acute GEVB. 
Although the results are hopeful, larger studies should be performed to confirm this benefit of antibiotics on mortality.\n\n**Question:** Compared with receiving antibiotics only when infection was suspected or established, what was the effect of receiving antibiotic treatment after randomization with intravenous cefotaxime 2 gram q 8 hr for 7 days on the incidence of bacterial infection?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decreased
|
538
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Antibiotic Prophylaxis Using Third Generation Cephalosporins Can Reduce the Risk of Early Rebleeding in the First Acute Gastroesophageal Variceal Hemorrhage: A Prospective Randomized Study\n\n ABSTRACT:\nBacterial infection may be a critical trigger for variceal bleeding. Antibiotic prophylaxis can prevent rebleeding in patients with acute gastroesophageal variceal bleeding (GEVB). The aim of the study was to compare prophylactic third generation cephalosporins with on-demand antibiotics for the prevention of gastroesophageal variceal rebleeding. In a prospective trial, patients with the first acute GEVB were randomly assigned to receive prophylactic antibiotics (intravenous cefotaxime 2 g q 8 hr for 7 days, prophylactic antibiotics group) or to receive the same antibiotics only when infection became evident (on-demand group). Sixty-two patients in the prophylactic group and 58 patients in the on-demand group were included for analysis. Antibiotic prophylaxis decreased infection (3.2% vs. 15.5%, p=0.026). The actuarial rebleeding rate in the prophylactic group was significantly lower than that in the ondemand group (33.9% vs. 62.1%, p=0.004). The difference of rebleeding rate was mostly due to early rebleeding within 6 weeks (4.8% vs. 20.7%, p=0.012). On multivariate analysis, antibiotic prophylaxis (relative hazard: 0.248, 95% confidence interval (CI): 0.067-0.919, p=0.037) and bacterial infection (relative hazard: 3.901, 95% CI: 1.053-14.448, p=0.042) were two independent determinants of early rebleeding. In conclusion, antibiotic prophylaxis using third generation cephalosporins can prevent bacterial infection and early rebleeding in patients with the first acute GEVB.\n\nBODY.INTRODUCTION:\nGastroesophageal variceal bleeding (GEVB) is the most serious complication of portal hypertension and represents the leading cause of death in patients with liver cirrhosis. The patients who survive the initial episode of GEVB have a risk of recurrent bleeding approaching 80% at 2 yr (1). Failure to control bleeding and early rebleeding are the most important prognostic factors influencing the 6-week outcome of these patients (2). Rebleeding is associated with an increased risk of exsanguinations, development of liver failure, encephalopathy, and sepsis which contribute to mortality (3). Over the past two decades, many new treatment modalities have been introduced to improve the management of variceal bleeding, including endoscopic injection sclerotherapy (EIS) and variceal ligation (EVL), and new vasoactive agents such as terlipressin and somatostatin (4). Among them, EIS has been replaced almost universally by EVL, because EVL eradicates varices and provides a lower variceal rebleeding rate with fewer secondary effects than EIS does (5). However, the rebleeding rate following endoscopic treatment is still high; at around 25-50% (6). 
It is therefore important to define how to further reduce the rebleeding rate. Bacterial infections are frequently associated with upper gastrointestinal bleeding in cirrhotic patients (7). Bacterial infections are more common in cirrhotic patients with acute GEVB than those admitted to hospital with other forms of decompensation, such as encephalopathy (8). Infection may favour variceal bleeding by increasing sinusoidal pressure and altering hemostasis (9). In fact, endotoxemia secondary to bacterial infection may be the critical trigger for variceal bleeding as it produces a wide series of effects that may predispose the cirrhotic patient to bleeding (10). A recent randomized controlled clinical trial has documented the value of quinolone use in preventing rebleeding (6). Prophylactic quinolone can thus further reduce the rebleeding rate in cirrhotic patients with GEVB. However, the use of prophylactic antibiotics can lead to antibiotic resistance with potentially disastrous consequences. It is necessary to prove the benefit of other antibiotics including third generation cephalosporins in preventing rebleeding in cirrhotic patients with GEVB. Therefore, the aim of this study was to compare prophylactic third generation cephalosporins with on-demand antibiotics for the prevention of gastroesophageal variceal rebleeding.\n\nBODY.MATERIALS AND METHODS.PATIENTS:\nFrom June 2000 to December 2004, all patients with cirrhosis admitted with upper gastrointestinal bleeding via our hospital emergency room underwent endoscopy within 12 hr of admission. Male or female patients aged over 18 yr were eligible for inclusion in the study after fulfilling the following criteria: 1) diagnosis of cirrhosis on the basis of previous liver biopsy or clinical, biochemical, and radiologic findings of hepatic failure and portal hypertension; 2) bleeding from esophageal varices or gastric varices; and 3) no signs of infection at admission. The severity of cirrhosis was classified according to Child-Pugh's score (11). GEVB was diagnosed when the emergency endoscopy showed any of the following signs: 1) active bleeding from esophageal varices or gastric varices; 2) stigmata of recent hemorrhage over varices (adherent blood clots); or 3) when there was no other cause of upper gastrointestinal bleeding but fresh blood was found in the stomach. Possible complications of endoscopic treatment were discussed with the patients and their relatives, and written informed consent was obtained before entry into the trial. Patients were excluded from the study if they met the following criteria. First, the patient had a past history of GEVB, or surgical or endoscopic treatment of gastroesophageal varices. Second, the patient received antibiotics within the last 2 weeks. Third, the patient had a terminal illness of any major organ system, or non hepatic malignancy. Forth, the patient had any other causes of upper gastrointestinal bleeding. The diagnosis of hepatocellular carcinoma (HCC) was based on liver biopsy or two coincidental imaging studies as well as one imaging study associated with alpha fetoprotein (AFP) more than 400 ng/mL (12). The Ethics Committee of Chonnam National University Hospital approved the treatment protocol.\n\nBODY.MATERIALS AND METHODS.RANDOMIZATION:\nRandomization was performed at the time of the therapeutic endoscopy by an investigator after patients met clinical and laboratory entry criteria, lacked exclusions, and gave written informed consent for entry into this study. 
The allocation of patients to treatment was done by drawing sequentially numbered envelopes, each containing a previously determined, randomly selected assignment based on a table of random numbers. Patients in the prophylactic group received antibiotics treatment after randomization with intravenous cefotaxime 2 gram q 8 hr for 7 days. Patients in the on-demand group received antibiotics only when infection was suspected or established. Antibiotics were changed according to the antibiotic sensitivity profile of cultured microorganisms.\n\nBODY.MATERIALS AND METHODS.INFECTION ASSESSMENT:\nA physical examination, complete blood cell count, chest radiography, urine analysis and culture, blood culture, and ascitic fluid neutrophil count with culture (in patients with ascites) were routinely carried out before randomization. Patients were excluded when the initial bacteriologic examination turned out positive finding. If a new infection was suspected, the same procedures were carried out to assess infection. New infections were suspected when there was fever (>38°C), hypothermia (<36°C), unexplained hemodynamic instability, tachypnea, new onset of chest symptoms, dysuria, deterioration of renal function, bowel habit changes, abdominal pain, abdominal distention, as well as alteration of mental state (6). Respiratory infections were diagnosed by clinical symptoms and signs and positive chest radiography findings. Urinary tract infections were diagnosed by the positive urine culture of ≥105 colonies/mL and associated clinical pictures. The diagnosis of bacteremia was based on positive blood culture and clinical signs or symptoms of infection without other recognized causes. The diagnosis of spontaneous bacterial peritonitis was based on ≥250 neutrophils/μL in ascitic fluid (13). Patients without any identified infection source but with fever >38°C and leukocytosis >11,000/μL with neutrophilia were considered as having possible infections and received on demand antibiotics. In analyzing the incidence of infection and determining the effect of antibiotic prophylaxis, only infectious episodes occurring during the first hospitalization were considered. Therefore, the infection rate was compared by number of events in this period.\n\nBODY.MATERIALS AND METHODS.ENDOSCOPIC TREATMENT PROCEDURES:\nBefore endoscopic treatment, octreotide was used. If active bleeding was found during endoscopy, endoscopic treatment was performed immediately. EVL was performed for esophageal varices, and EIS was performed for gastric varices. Two experienced therapeutic endoscopists performed the diagnostic and therapeutic endoscopic procedures. They had 4 yr' experience of standard endoscopy. They did not participate in the postprocedure care of the patients, which was conducted by other physicians. Endoscopy was performed with a standard upper endoscope (Olympus GIF-XQ240, Olympus Optical Co., Ltd., Tokyo, Japan). After endoscopic treatment, octreotide was used for 5 days. EVL was performed by using a varioligator kit with a single-shot device (Top Corp., Tokyo, Japan) and a flexible overtube or multiband ligators (Wilson-Cook Medical, Winston-Salem, NC, U.S.A.). Size of esophageal varices was graded according to Conn's classification (14). Grade I-visible only during one phase of respiration/performance of Valsalva maneuver. Grade II-visible during both phases of respiration. Grade III-3-6 mm. Grade IV-varices of >6 mm. 
EVL was performed biweekly for the first 6 weeks until the varices were obliterated or reduced to Grade I size and could not be banded any further. Follow-up endoscopy was performed every 3 months and, if unremarkable, was moved to every 6 months. EIS was performed by using intravariceal injection with the 1:1 mixture of 0.5 mL N-butyl-2-cyanoacrylate (Histoacryl blue, Braun-Melsungen, Germany) and 0.5 mL Lipiodol (Guerbet Laboratory, Aulnay-Sous-Bris, France) in each shot. Rebleeding was defined as one or more of the ongoing bleeding signs including fresh hematemesis, hematochezia, fresh blood aspirated via a nasogastric tube, instability of vital signs, or a reduction of hemoglobin by more than 2 g/dL within 24 hr after initial hemostasis. When rebleeding was suspected, immediate endoscopy was performed. If active bleeding or a fresh blood clot was found at the varices, and if fresh blood was found in the stomach without any other causes of upper gastrointestinal bleeding, rebleeding was confirmed. Bleeding esophageal varices were ligated and bleeding gastric varices were injected with the previously mentioned tissue glue again. Rebleeding within 6 weeks of enrollment after initial control of active bleeding was defined as early rebleeding. Treatment failure was defined as a failure to control active bleeding after two attempts of endoscopic treatment, rebleeding more than twice, or bleeding-related death. Rebleeding index for each patient was calculated by dividing the months of follow-up by the number of rebleeding episodes plus one (6).\n\nBODY.MATERIALS AND METHODS.STATISTICAL ANALYSIS:\nRebleeding rate as a primary outcome was compared between two groups, and secondary outcomes such as rebleeding index, treatment failure, bleeding related death, infection rate, transfusion requirements, and admission duration, were also compared between two groups. Quantitative data were summarized as mean±standard deviation. The Student t test was utilized to compare the mean values of continuous variables, and the chi-square test with Yate's correction or Fisher exact test was utilized for the comparison of discrete variables. Kaplan-Meier analysis with the log-rank test was used to compare differences of actuarial probability of rebleeding and survival between two groups. Univariate analysis and stepwise multivariate analysis were performed to assess the potential risk factors of early rebleeding using the Cox proportion hazards regression. A p value of less than 0.05 was accepted as statistically significant. The analysis was performed with statistical software package (SPSS 13.0 version for Windows, SPSS, Chicago, IL, U.S.A.). This study hypothesized a reduction of rebleeding rate from 45% to 20% by using prophylactic antibiotics (6). According to the sample size calculation, the study would require 54 patients in each group. The type I error and type II error were set to 0.05 and 0.2, respectively.\n\nBODY.RESULTS:\nDuring the study period, 152 patients with the first acute GEVB were recruited and randomized. Eight patients in the prophylactic group and 7 patients in the on-demand group were excluded from analysis due to occult infections. Six patients in the prophylactic group and 11 patients in the on-demand group were excluded due to their refusal to continue in the study. Therefore, 62 patients in the prophylactic group and 58 patients in the on-demand group were included for analysis. Data regarding the clinical characteristics of the patients at entry are outlined in Table 1. 
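As an aside on the sample-size statement in the Statistical Analysis paragraph above (rebleeding assumed to fall from 45% to 20%, type I error 0.05, type II error 0.2, 54 patients per group), the quoted figure can be reproduced with the standard normal-approximation formula for comparing two proportions. The short Python sketch below is an illustrative check added here, not code from the trial; the function name and the rounding are ours.

```python
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per arm for comparing two proportions
    (normal-approximation formula with a pooled-variance term)."""
    z_a = norm.ppf(1 - alpha / 2)   # two-sided type I error
    z_b = norm.ppf(power)           # power = 1 - type II error
    p_bar = (p1 + p2) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return num / (p1 - p2) ** 2

# Rebleeding: 45% expected with on-demand antibiotics vs. 20% with prophylaxis
print(round(n_per_group(0.45, 0.20)))   # ~54, matching the 54 per group quoted above
```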
There were no significant differences between two groups with respect to age, gender, etiology, association of HCC, Child-Pugh's score, severity of bleeding, endoscopic characteristics, and period of follow-up (Table 1).\n\nBODY.RESULTS.INFECTION OUTCOMES AND BACTERIOLOGY:\nSummary of the infection sources and bacteriology is outlined in Table 2. The incidence of bacterial infection was significantly lower in patient receiving antibiotic prophylaxis (2/62, 3.2% vs. 9/58, 15.5%, p=0.026). Bacteremia and spontaneous bacterial peritonitis were the most common sources of infection. Enteric bacteria were more frequently identified in patients without antibiotic prophylaxis (0/62, 0% vs. 5/58, 8.6%, p=0.018). There were no significant side effects in antibiotic prophylactic group.\n\nBODY.RESULTS.HEMOSTATIC OUTCOMES:\nSummary of hemostatic outcome data is outlined in Table 3. The rebleeding rate in the prophylactic group was significantly lower than that in the on-demand group (21/62, 33.9% vs. 36/58, 62.1%, p=0.004). The difference of rebleeding was mostly due to early rebleeding within 6 weeks (3/62, 4.8% vs. 12/58, 20.7%, p=0.012). The cumulative total rebleeding rate and early rebleeding rate were also higher in the on-demand group (Fig. 1, 2). However, there was no significant difference in cumulative late rebleeding rate between the two groups (Fig. 3). The rebleeding sources were not different between the two groups (Table 2). The early rebleeding rate in the infected patients was significantly higher than that in the noninfected patients (4/11, 36.4% vs. 11/109, 10.1%, p=0.031). However, there was no difference in total rebleeding rate between the infected and the noninfected (6/11, 54.5% vs. 52/109, 47.7%, p=0.760). The transfusion requirement was significantly larger in on-demand group than that in prophylactic group (p=0.002). There were no differences in rebleeding index, treatment failure, and duration of hospital stay between the two groups. Univariate analysis showed the early rebleeding risk significantly linked to antibiotic prophylaxis and bacterial infection (Table 4). On multivariate analysis, antibiotic prophylaxis (relative hazard: 0.248, 95% confidence interval (CI): 0.067-0.919, p=0.037) and bacterial infection (relative hazard: 3.901, 95% CI: 1.053-14.448, p=0.042) were two independent determinants of early rebleeding (Table 5). Univariate analysis showed the late rebleeding risk significantly linked to alcoholics and presence of HCC (Table 6). On multivariate analysis, alcoholics (relative hazard: 1.968, 95% CI: 1.133-3.502, p=0.016) and association with hepatocellular carcinoma (relative hazard: 1.904, 95% CI: 1.035-3.502, p=0.039) were two independent factors predictive of late rebleeding.\n\nBODY.RESULTS.MORTALITY AND SURVIVAL:\nTwenty patients and 24 patients died in the prophylactic and on-demand group, respectively. Total mortality and 30-day mortality were not different between the two groups (Table 7). The overall rate of survival was similar between the two groups (Fig. 4). Univariate analysis showed that the survival was significantly related to the presence of HCC, Child-Pugh's score, and bacterial infection (Table 8). 
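The infection comparison reported above (2/62, 3.2% with prophylaxis vs. 9/58, 15.5% on demand, p=0.026) is a simple 2×2 contingency comparison. The Methods state that either the Yates-corrected chi-square test or Fisher's exact test was used depending on sample size, but not which one produced this particular p-value, so the sketch below is only a reader's consistency check on the published counts, not the authors' analysis code.

```python
from scipy.stats import chi2_contingency, fisher_exact

# Bacterial infection during the first hospitalization: [infected, not infected]
table = [[2, 60],   # prophylactic cefotaxime group (n = 62)
         [9, 49]]   # on-demand antibiotics group  (n = 58)

odds_ratio, p_fisher = fisher_exact(table)        # exact test
chi2, p_yates, dof, _ = chi2_contingency(table)   # Yates correction applied by default for 2x2 tables
print(f"Fisher exact p = {p_fisher:.3f}, Yates-corrected chi-square p = {p_yates:.3f}")
```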
On multivariate analysis, presence of HCC (relative hazard: 4.134, 95% CI: 2.261-7.560, p<0.001) and Child-Pugh's score (relative hazard: 1.372, 95% CI: 1.173-1.603, p<0.001) were two independent risk factors determining survival.\n\nBODY.DISCUSSION:\nIn cirrhotic patients, there is a predisposition to intestinal bacterial overgrowth, intestinal dysmotility, and increased intestinal permeability, all leading to an increase in bacterial translocation. Bacterial translocation is the probable source of bacterial byproducts such as endotoxin which can cause an increase in portal pressure, impairment of liver function, and worsening of hemostasis (10). Endotoxemia secondary to bacterial infection may indeed be the critical trigger for variceal bleeding (15). Norfloxacin, ciprofloxacin, and ofloxacin have all been used with these indications (6, 16-21). According to our knowledge, previous studies have never used single third generation cephalosporin for antibiotic prophylaxis for gastrointestinal bleeding. For this reason, third generation cephalosporin can also be used to prevent bacterial infection and rebleeding in cirrhotic patients with an antibacterial resistance after a long term quinolone prophylaxis. In our study, third generation cephalosporin was used in order to prevent bacterial infection and rebleeding not only because it is safe and efficacious for enteric Gram-negative bacteria, which are the most common causative organisms in cirrhotic patients with acute gastrointestinal bleeding (16-20), but it also has a benefit of covering Gram positive bacteria. The benefit of third generation cephalosporins for preventing early rebleeding in cirrhotic patients with GEVB by decreasing bacterial infection is proved in our study. The prophylactic effect may not sustain over six weeks. However, the period of the greatest risk of early rebleeding is within the first 48 hr after admission (22). The use of early short term antibiotics is very effective in preventing early rebleeding in cirrhotic patients. In the present study, univariate analysis showed the early rebleeding risk significantly linked to antibiotic prophylaxis and bacterial infection. And, antibiotic prophylaxis (relative hazard: 0.248, 95% CI: 0.067-0.919, p=0.037) and bacterial infection (relative hazard: 3.901, 95% CI: 1.053-14.448, p=0.042) were two independent determinants of early rebleeding by the multivariate analysis. The results suggest that bacterial infection produces a wide series of effects that may predispose the cirrhotic patient to bleeding (10). Therefore, the effective antibiotic prophylaxis should be considered as an essential treatment to prevent early rebleeding. In our study, total rebleeding rates were higher than those of previous prospective randomized study (47% vs. 32%) (6). There are several possible explanations for these differences in the rebleeding rates. First, this may be because of a difference in the clinical characteristics of the study population, including habitual alcohol drinking. In our study, alcoholism was the most common etiology of liver cirrhosis compared with other study. Second, the difference in the rebleeding rates may stem from the type of antibiotics (quinolone vs. cephalosporin). In a recent survey, 26% of spontaneous bacterial peritonitis episodes were caused by quinolone resistant Gram negative bacilli over a two year period, related to long term treatment with quinolone (23). Fortunately, quinolone resistant E coli are still sensitive to third generation cephalosporins (9). 
In addition, there is a substantially increased likelihood of infections from Gram positive bacteria in patients who received quinolone prophylaxis (24). Finally, there was a difference in total follow-up periods between the studies. The total follow-up periods in our study (mean, 22 months) were longer than those in the other study (mean, 9 months) (6). Patients who survived after an initial episode have a risk of rebleeding rate approaching 80% in 2 yr (1). The risk of late rebleeding (more than 6 weeks after the initial episode) is related to such factors as continued alcohol consumption, variceal size, renal failure, degree of liver failure, and presence of HCC (2). Alcohol consumption continues to influence prognosis even after cirrhosis has developed. Patients with clinically compensated cirrhosis who become abstinent have a 90% chance of surviving for 5 yr. In contrast, if these patients continue to drink, their chance of survival falls to about 70% (25). In our study, continued alcohol drinking and the presence of HCC were the most important determinants of the late rebleeding. All alcoholic patients with variceal rebleeding continued their habitual alcohol consumption. Accordingly, there was a trend of more episodes of rebleeding in cirrhotic patients after longer follow-up period without correction of this risk factor. In order to lower the risk of late rebleeding, abstinence of alcohol and effective treatment of HCC should be encouraged. Although the effect of short-term prophylactic antibiotics in patients with GEVB is proved by the reduction of bacterial infection and early rebleeding rate, these beneficial effects are not reflected in terms of mortality and survival in this study. The lack of influence of antibiotic prophylaxis on mortality is likely because of infection is not an independent predictive factor for survival (6). The small impact of rebleeding on survival is possibly due to the fact that most rebleeding episodes can be further controlled by repeated endoscopic treatments (6). Furthermore, on multivariate analysis, presence of HCC (relative hazard: 4.134, 95% CI: 2.261-7.560, p<0.001) and Child-Pugh's score (relative hazard: 1.372, 95% CI: 1.173-1.603, p<0.001) were the only two independent risk factors determining survival in the present study. Actually, most patients died of hepatic failure or multiorgan failure associated with decreased residual liver function and HCC. However, a recent study reported that in-hospital mortality of patients with cirrhosis and acute variceal bleeding has greatly decreased over the past two decades, in concurrence with an early and combined use of pharmacological and endoscopic therapies and short-term antibiotic prophylaxis (26). The use of prophylactic antibiotics decreased the rate of bacterial infections in randomized controlled trials, and a meta-analysis showed that it was associated with improved survival (27). It warrants larger studies to confirm this benefit of antibiotic prophylaxis on mortality. In conclusion, antibiotic prophylaxis with third generation cephalosporins can prevent bacterial infection and early rebleeding in patients with the first acute GEVB. 
Although the results are hopeful, larger studies should be performed to confirm this benefit of antibiotics on mortality.\n\n**Question:** Compared with receiving antibiotics only when infection was suspected or established, what was the effect of receiving antibiotic treatment after randomization with intravenous cefotaxime 2 gram q 8 hr for 7 days on the identification of enteric bacteria?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decreased
|
542
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Fibrin Glue Reduces the Duration of Lymphatic Drainage after Lumpectomy and Level II or III Axillary Lymph Node Dissection for Breast Cancer: A Prospective Randomized Trial\n\n ABSTRACT:\nThis randomized prospective study investigated the effect of fibrin glue use on drainage duration and overall drain output after lumpectomy and axillary dissection in breast cancer patients. A total of 100 patients undergoing breast lumpectomy and axillary dissection were randomized to a fibrin glue group (N=50; glue sprayed onto the axillary dissection site) or a control group (N=50). Outcome measures were drainage duration, overall drain output, and incidence of seroma. Overall, the fibrin glue and control groups were similar in terms of drainage duration, overall drain output, and incidence of seroma. However, subgroup analysis showed that fibrin glue use resulted in a shorter drainage duration (3.5 vs. 4.7 days; p=0.0006) and overall drain output (196 vs. 278 mL; p=0.0255) in patients undergoing level II or III axillary dissection. Fibrin glue use reduced drainage duration and overall drain output in breast cancer patients undergoing a lumpectomy and level II or III axillary dissection.\n\nBODY.INTRODUCTION:\nProlonged lymphatic drainage following axillary dissection in breast cancer remains a significant clinical problem Shortening the drainage duration is associated with shorter in-patient times and hence lower health care costs. Methods applied to reduce lymphatic drainage include compression dressings (1), closed suction drains (2), postoperative shoulder immobilization (3) and fibrin glue application to the axillary dissection site (4-6). The use of fibrin glue may reduce lymphatic drainage by reducing the number of transections of small vessels and lymphatics during axillary lymph node removal (7). However, conflicting clinical trial results exist regarding fibrin glue use in breast cancer surgery. Previous trials involved a wide variety of breast cancer surgery types or degrees of axillary dissection (8). The present randomized study investigated the effect of fibrin glue use on the duration of drainage and overall drain output after breast lumpectomy and axillary dissection for breast cancer.\n\nBODY.MATERIALS AND METHODS.PARTICIPANTS:\nA prospective, randomized study was performed between September 2006 and July 2007 at the Department of Surgery, Seoul National University Hospital. The inclusion criteria (9) were 1) newly diagnosed breast cancer, 2) female gender, 3) elective breast lumpectomy and axillary dissection, 4) informed consent, and 5) being outside of any exclusion criteria. 
The exclusion criteria (9) were: 1) male gender, 2) a body mass index (kg/m2) <18.5 or >30, 3) current steroid use, 4) systemic anticoagulation or significant coagulation disorder, 5) diabetes, 6) heart disease, 7) history of chest radiation, 8) receiving preoperative chemotherapy, 9) planned immediate breast reconstruction, 10) pregnant or lactating, 11) limited to sentinel node biopsy, or 12) no consent. The study was approved by the Institutional Review Board of the Seoul National University Hospital.\n\nBODY.MATERIALS AND METHODS.TREATMENT PROTOCOL:\nSurgery was performed by two surgeons using principally identical methods. All patients underwent a lumpectomy and level I or greater axillary lymph node dissection, and an effort was made to ligate major vessels. For the patients in the fibrin glue group, fibrin glue (Greenplast kit®, Green Cross, Seoul, Korea) was mixed intraoperatively and diluted twice according to standard instructions, and 2 mL was sprayed onto the axillary dissection site using an aerosol spray applicator (Green Jet V2®, Green Cross, Seoul, Korea), after which manual compression was applied for 2 min according to manufacturer's manual. A Jackson-Pratt closed suction 100 mL/3.2 mm drain was placed into the axillary dissection site for all patients. Compression dressings with surgical bras and pads were applied to all patients for the first 5 days after surgery. Arm movement was restricted to below 90 degrees for 5 days (3). Participants were discharged on postoperative day 5±1 regardless of drain removal, and were routinely followed-up on postoperative days 12±1 and 30±2.\n\nBODY.MATERIALS AND METHODS.OUTCOMES:\nThe primary outcome measure was the duration of drainage. Drainage was assessed daily from the day of surgery, and drains were removed when the drainage volume fell to <30 mL over a 24-hr period. Secondary outcome measures were overall drain output and the incidence of symptomatic seroma formation. A symptomatic seroma was defined as a palpable fluid collection under the wound with symptoms, and was treated using aspiration or open drain insertion. In addition, all wound-related complications were recorded. The wound was examined daily until discharge and during each clinic visit. Wound infections were defined as erythema and/or purulent discharge from the incision site, and were treated using an oral antibiotic for at least 3 days.\n\nBODY.MATERIALS AND METHODS.SAMPLE SIZE:\nThe primary endpoint was the duration of drainage. In a pilot study with 21 patients, the mean drainage duration was 3.0 days for the case group and 3.2 days for the control group, and the standard deviation of the mean difference between the case and control groups was 0.255. The current study had a power of 90 percent to show whether the drainage duration differed between the fibrin glue and non-fibrin glue groups. This approach assumed that the drainage duration for the two groups was equal, and that a difference of 10 percent or less was clinically irrelevant. This approach required each group to have a sample size of 49 considering a 30% dropout, and α (two-sided) was set at 0.05. The enrollment procedure engaged 50 patients per group.\n\nBODY.MATERIALS AND METHODS.RANDOMIZATION:\nPatients were randomized using a web-based program hosted at the Medical Research Collaborating Center at Seoul National University Hospital. Participant randomization was done after bleeding control and confirmation of a negative margin status, immediately before wound closure. Fig. 
1 shows the Consolidated Standards of Reporting Trials (CONSORT) flow chart outlining the progress of participants through the study (10). One hundred participants were randomized, 50 to the fibrin glue group and 50 to the control group (non-fibrin glue). Three fibrin glue patients and two control patients were excluded after randomization due to postoperative bleeding.\n\nBODY.MATERIALS AND METHODS.BLINDING:\nThe surgical team was 'blinded' to treatment assignment until the completion of the surgical dissection and bleeding control just prior to randomization. The personnel involved in recording drainage volumes and complications were also blinded to the treatment assignment.\n\nBODY.MATERIALS AND METHODS.STATISTICAL METHODS:\nData were stored using the Excel (Microsoft Corporation, Redmond, WA, U.S.A.) spread sheet program, and statistical analyses were performed using SPSS software version 12.0 (SPSS, Chicago, IL, U.S.A.) and SAS (SAS Inc., Cray, NC, U.S.A.). Outcome criteria were analyzed on an intension-to-treat basis and all tests were two-tailed. Qualitative data were compared using the Pearson's chi-square-test for more than 30 samples and Fisher's exact test for less than 30 samples. Quantitative data were expressed as means (standard deviations), and were compared using Student t-tests for all group samples and using Mann-Whitney U tests for subgroup samples. Using the general linear model (GLM) procedure, two-way analysis of variance (ANOVA) was used to identify subgroups showing the greatest benefit from use of fibrin glue. A p value <0.05 was considered to indicate a significant difference.\n\nBODY.RESULTS:\nOf the 100 patients enrolled (50 fibrin glue, 50 control), 95 completed the study to the endpoint. All five of those that did not complete the study developed postoperative bleeding on the day of surgery, and four of them were conservatively treated (compression dressing with elastic bandage and open drain insertion) and the other underwent re-exploration for bleeding control and hematoma evacuation. It was not possible to measure the drainage duration or overall drain output in those five patients because the Jackson-Pratt closed suction drain was removed and an open drain inserted. There were no significant differences between the fibrin glue and control groups with respect to age, body mass index (BMI), tumor size, the number of lymph nodes removed, and the number of tumor-involved lymph nodes (Table 1). Level I axillary dissection and node-negative status were found to be associated with a shorter drainage duration (p=0.0001 and p=0.004, respectively) and lower overall drain output (p=0.0001 and p=0.003, respectively), while other factors such as age <60 yr and BMI <25 kg/m2 were not (Table 2).\n\nBODY.RESULTS.DURATION OF DRAINAGE, OVERALL DRAIN OUTPUT AND COMPLICATIONS:\nThe mean drainage duration was 3.3 days for the fibrin glue group and 3.8 days for the control group (p=0.067). The overall drain output was 174 mL for the fibrin glue group and 197 mL for the control group (p=0.309). The differences between groups for these two factors were not found to be statistically significant (Table 2). In addition, the fibrin glue and control groups were found to be similar in terms of the incidence of seroma, postoperative bleeding and wound infection (Table 4).\n\nBODY.RESULTS.SUBGROUP ANALYSIS:\nParticipants were classified according to two categorical factors and the data underwent an analysis of variance. 
One factor was the use of fibrin glue and the other factor was the axillary dissection level or node status. The variance between groups classified according to the use of fibrin glue and axillary dissection level was significant in terms of drainage duration (p=0.0005) and overall drain output (p=0.0098) (Table 3). The drainage duration was shorter (3.5 vs. 4.7 days; p=0.0006) and the overall drain output was lower (196 vs. 278 mL; p=0.0255) in the fibrin glue group compared with the control group among those undergoing level II or III axillary dissection. These differences were not apparent in those undergoing level I axillary dissection. While the data appeared to suggest there may have been a greater benefit from the use of fibrin glue in patients with a positive node status compared with a negative node status, the differences were not statistically significant (p=0.0818 for drainage duration and p=0.1123 for overall drain output).\n\nBODY.DISCUSSION:\nWhile prolonged axillary lymphatic drainage is not a serious complication after axillary dissection in breast cancer, it remains the main cause of prolonged hospital stays resulting in increased health care costs. Therefore, several approaches have been used to reduce axillary lymphatic drainage, such as the use of fibrin glue and immobilization of the affected arm (11). Although Lindsey et al. (12) reported that fibrin glue reduced lymphatic drainage in rats, conflicting results have been reported in clinical trials. Five studies (4-6, 9, 13) have examined the association between fibrin glue use and drainage duration or overall drain output. Three of those studies (4-6) showed that fibrin glue reduced both axillary lymphatic drainage and drainage duration, while two (9, 13) reported that fibrin glue had no effect on either axillary lymphatic drainage and drainage duration. Moreover, a meta-analysis of several published trials (8) reported that while the data appeared to suggest decreased overall drain output and a shorter drainage duration in patients receiving fibrin sealant, the findings were not statistically significant. Those studies enrolled almost exclusively mastectomy cases or various cases rarely involving lumpectomy, therefore making it difficult to accurately evaluate axillary lymphatic drainage due to the influence of fluid from the mastectomy site or the heterogeneity of surgery. The present study enrolled patients undergoing only breast lumpectomy and axillary dissection to minimize the influence of fluid from mastectomy or lumpectomy. The analysis of all patients showed that fibrin glue use did not reduce the drainage duration or the overall drain output. However, subgroup analysis showed that the use of fibrin glue reduced both drainage duration and overall drain output in patients undergoing level II or III axillary dissection. Fibrin glue had no such effect in patients undergoing level I axillary dissection. The current study found that in both fibrin glue and control groups, drainage duration and overall drain output increased as the level of axillary dissection or nodal stage increased, and that the benefit from the use of fibrin glue was greater in those undergoing higher level dissections. Moreover, the data suggest that a study using a larger population may find that there is greater benefit from fibrin glue use in node-positive patients compared to node-negative patients. 
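The subgroup analysis just described, a general linear model with fibrin glue use and axillary dissection level as two categorical factors, is a two-way ANOVA with an interaction term. The sketch below shows how such a model can be specified with statsmodels; the data frame, column names, and values are invented placeholders rather than the study data, so the printed ANOVA table is purely illustrative.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical per-patient records: drainage duration (days), glue use, dissection level
df = pd.DataFrame({
    "duration": [3.1, 3.6, 3.4, 4.9, 4.5, 3.3, 3.8, 5.0],
    "glue":     ["yes", "yes", "no", "no", "no", "yes", "no", "no"],
    "level":    ["I", "II_III", "I", "II_III", "II_III", "II_III", "I", "II_III"],
})

# Two-way ANOVA with interaction: does the benefit of glue depend on dissection level?
model = smf.ols("duration ~ C(glue) * C(level)", data=df).fit()
print(anova_lm(model, typ=2))
```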
The present results suggest that fibrin glue use will be beneficial under conditions of high drainage, such as axillary dissection extended to two or three levels, higher nodal stage (14), following preoperative chemotherapy, old age (2) and obesity (11). In contrast to previous studies (2, 11), the present study did not find an association between lymphatic drainage and age or obesity. This may have been because in this study the body mass index of participants was restricted to ≤30 kg/m2 and the number of participants aged ≥60 yr was only 10. The current study found that the use of fibrin glue did not prevent seroma formation, which is consistent with other reports (4, 6, 8, 9, 13). One explanation is that perhaps fibrin glue had only a short and transient effect on drainage (4). Despite subgroup analysis showing a shorter drainage duration in the fibrin glue group compared to the control group, the groups did not differ in terms of seroma formation or the number of outpatient clinic visits due to wound issues. Thus, it appears that the use of fibrin glue can reduce the length of hospital stay and has no detrimental effect on the complication rate. In conclusion, the present results indicate that fibrin glue use can decrease the drainage duration and overall drain output after breast lumpectomy and axillary dissection in patients undergoing level II or III axillary dissection. Thus, fibrin glue use might be recommended in patients at risk of high lymphatic drainage in order to reduce the length of hospital stays.\n\n**Question:** Compared with the control group, what was the result in the fibrin glue group (glue sprayed onto the axillary dissection site) for overall drain output?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
549
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Physiological-dose steroid therapy in sepsis [ISRCTN36253388]\n\n ABSTRACT.INTRODUCTION:\nThe aim of the study was to assess the prognostic importance of basal cortisol concentrations and cortisol response to corticotropin, and to determine the effects of physiological dose steroid therapy on mortality in patients with sepsis.\n\nABSTRACT.METHODS:\nBasal cortisol level and corticotropin stimulation test were performed within 24 hours in all patients. One group (20 patients) received standard therapy for sepsis and physiological-dose steroid therapy for 10 days; the other group (20 patients) received only standard therapy for sepsis. Basal cortisol level was measured on the 14th day in patients who recovered. The outcome of sepsis was compared.\n\nABSTRACT.RESULTS:\nOnly Sequential Organ Failure Assessment (SOFA) score was found related to mortality, independent from other factors in multivariate analysis. No significant difference was found between the changes in the percentage of SOFA scores of the steroid therapy group and the standard therapy group in survivors, nor between the groups in basal and peak cortisol levels, cortisol response to corticotropin test and mortality. The mortality rates among patients with occult adrenal insufficiencies were 40% in the steroid therapy group and 55.6% in the standard therapy group.\n\nABSTRACT.DISCUSSION:\nThere was a trend towards a decrease in the mortality rates of the patients with sepsis who received physiological-dose steroid therapy. In the advancing process from sepsis to septic shock, adrenal insufficiency was not frequent as supposed. There was a trend (that did not reach significance) towards a decrease in the mortality rates of the patients with sepsis who received physiological-dose steroid therapy.\n\nBODY.INTRODUCTION:\nThis paper was presented at the 10th European Congress of Clinical Microbiology and Infectious Diseases (28–31 May 2000, Stockholm, Sweden). Sepsis can be defined as a systemic response to infection [1]. The incidence of sepsis worldwide is on the increase. Sepsis and its sequels are the leading causes of death in intensive care units. Mortality rates are higher for patients with pre-existing disease, medical conditions, care in the intensive care unit, and multiple organ failure [2,3]. Despite steady improvements in antibiotic therapy and intensive care management during the past decade, mortality has remained close to 50%. This high mortality rate has continued to stimulate interest in pharmacological agents that might reduce morbidity and mortality [4,5,6,7]. Steroid therapy in patients with sepsis is still controversial. In the 1960s, stress doses of hydrocortisone for the treatment of sepsis were investigated, but no advantages could be shown in a double-blind multicenter study [8]. This led to the discontinuation of steroid replacement therapy for sepsis. 
In the 1970s, therapy with pharmacological doses of glucocorticoids was widely used in patients with sepsis and septic shock. The most compelling evidence in favor of corticosteroid treatment was reported by Schumer [9] in his prospective randomized study of steroid administration to patients with septic shock. These data indicate that methylprednisolone (30 mg/kg) or dexamethasone (3 mg/kg) reduced the mortality rate from 38.4% to 10.5%. However, later in the mid-1980s, pharmacological doses of glucocorticoids for the treatment of sepsis were investigated extensively until several clinical trials gave negative results [10,11,12]. Moreover, there is some evidence that the use of high-dose glucocorticoids in sepsis might be harmful [11]. Many studies have demonstrated that elevated cortisol levels in sepsis and the degree of elevation are related to the severity of illness [13]. Basal and corticotropin (ACTH)-stimulated cortisol levels correlate with the severity of illness, and very high cortisol levels often signify a poor prognosis [14]. In sepsis, the hypothalamic–pituitary–adrenal axis is activated through systemic and neural pathways. Circulating cytokines such as tumor necrosis factor α, interleukin-1 and interleukin-6 activate the hypothalamic–pituitary–adrenal axis independently and, when combined, have synergistic effects [15]. Sepsis can also cause adrenal insufficiency (AI), which is associated with increased mortality [16]. In recent years, several authors have proposed a syndrome of occult AI in septic shock in the presence of normal or even elevated serum cortisol concentrations. This hypothesis is based on many studies investigating the adrenocortical response of patients with septic shock to 0.25 mg of ACTH. Up to 28% of seriously ill patients have been suggested to have occult or unrecognized AI [14]. The prevalence of occult AI (a cortisol increment after a short ACTH test of less than 9 mg/dl) in severe sepsis was estimated at about 50% and the 28-day mortality rate at about 75% [17]. A few studies have indicated that stress doses of hydrocortisone improve hemody-namics in patients with hyperdynamic septic shock, which is unresponsive to conventional therapy [18,19]. However, the use of a physiological dose of steroid in patients with sepsis, severe sepsis, and septic shock has not yet been completely evaluated. We therefore performed a placebo-controlled, randomized, double-blind, single-center study. The aim of this study was to assess basal cortisol concentrations and the cortisol response to ACTH stimulation as well as their prognostic importance, and also to determine the effects of the physiological-dose steroid therapy on mortality in patients with sepsis.\n\nBODY.METHODS.STUDY DESIGN:\nThe study protocol was approved by the Institutional Review Board of Erciyes University and informed consent was obtained from the patients' relatives. This placebo-controlled, randomized, double-blind, single-center study was performed at the Department of Medical Intensive Care Unit and the Department of Infectious Diseases of Erciyes University Medical School during a 2-year period (from May 1997 to April 1999).\n\nBODY.METHODS.PATIENT SELECTION:\nPatients over 17 years old and diagnosed with sepsis were included consecutively in the study. The diagnosis of sepsis was based on the definition of the American College of Chest Physicians/Society of Critical Care Medicine Consensus Conference Report [1]. The severity of illness was classified according to this definition (Table 1). 
Table 1 Consensus conference group definitions of the stages of sepsis [ 1 ] I. Systemic inflammatory response syndrome (SIRS) Two or more of the following: Temperature of more than 38°C or less than 36°C Heart rate of more than 90/min Respiratory rate of more than 20/min White blood cell count of more than 12,000/mm 3 or less than 4000/mm 3 or more than 10% immature forms (bands) II. Sepsis Systemic inflammatory response syndrome plus a culture-documented infection III. Severe sepsis Sepsis plus organ dysfunction, hypotension, or hypoperfusion (including but not limited to lactic acidosis, oliguria, or acute alteration in mental status) IV. Septic shock Hypotension (despite fluid resuscitation) plus hypoperfusion abnormalities Criteria for exclusion from the study were as follows: already known pre-existing adrenal disease or adrenalectomy, known malignancies, tuberculosis that might have involved the adrenal gland, and administration of steroids within the 3 months before the admission. In addition, patients with burns, hemorrhagic shock or those who had suffered myocar-dial infarction were not included.\n\nBODY.METHODS.TREATMENT PROTOCOL:\nPatients enrolled in the study were treated with standard therapy used in the treatment of sepsis and septic shock. This therapy could include the following: administration of antibiotics, fluid replacement, vasoactive drugs, mechanical ventila-tory support, and any other form of supportive therapy deemed necessary by the primary physicians. Soon after the presumptive diagnosis of severe sepsis, initial laboratory specimens were obtained and within 2 hours the patients were randomized to treatment with prednisolone or placebo groups. The treatment groups were determined by a computer-generated randomization procedure. The steroid therapy group received prednisolone at a physiological dose. Prednisolone was given intravenously at 06.00 (5 mg) and 18.00 (2.5 mg) for 10 days. The standard therapy group received a placebo infusion containing physiological saline solution in an identical manner. Patients and their primary physicians were blinded as to which therapy was administered.\n\nBODY.METHODS.DATA COLLECTION:\nAn ACTH stimulation test was performed with 250 g of tetracosactrin (synacthene; Ciba Geigy, Germany) given intravenously. Blood samples were taken immediately before the test and at 30 and 60 minutes afterwards. After centrifugation, plasma samples were stored at -20°C until assayed. ACTH stimulation test was repeated on the 14th day in the patients who survived. Plasma cortisol concentrations were determined by radioimmunoassay with a commercially available kit (ICN Biomedicals, Inc, Costa Mesa, California, USA). Intra-assay coefficients of variation were for control A 7.0%, control B 5.8%, and control C 5.1%. Inter-assay coefficients of variation were for control A 7.9%, control B 6.5%, control C 6.0%. The cortisol response was defined as the difference between the basal and peak cortisol concentrations. Normal adrenal function was defined as a plasma cortisol level (before or at 30 or 60 minutes after the injection of ACTH) above 20 μg/dl. The cases with peak cortisol levels lower than 20 μg/dl were considered to be AI [13,20,21,22]. Occult AI was defined as an increase in cortisol after a ACTH test of less than 9 g/dl (a cortisol response of no more than 9 μg/dl) [17,23,24]. 
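The adrenal-function definitions in the paragraph above (normal function when any cortisol value before or after ACTH exceeds 20 μg/dl, adrenal insufficiency when the peak stays below 20 μg/dl, occult adrenal insufficiency when the post-ACTH increment is below 9 μg/dl) amount to a simple classification rule. The function below is a minimal sketch of that rule as written; the function name, argument names, and example values are ours and are not part of the study protocol.

```python
def classify_adrenal_response(basal, cortisol_30min, cortisol_60min):
    """Classify a short ACTH stimulation test (values in micrograms/dl) using the
    thresholds stated in the Methods: peak < 20 -> adrenal insufficiency;
    increment (peak - basal) < 9 -> occult adrenal insufficiency."""
    peak = max(basal, cortisol_30min, cortisol_60min)
    increment = peak - basal
    if peak < 20:
        return "adrenal insufficiency"
    if increment < 9:
        return "occult adrenal insufficiency"
    return "adequate adrenal response"

# Example: high basal cortisol but a blunted response to ACTH
print(classify_adrenal_response(basal=35.0, cortisol_30min=38.0, cortisol_60min=40.0))
# -> occult adrenal insufficiency
```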
Community-acquired sepsis had its onset within 72 hours of the patients' admission to the hospital, whereas hospital-acquired sepsis began 72 hours or later after admission. The estimated prognosis of any pre-existing underlying diseases had been classified according to the classification of McCabe and Jackson [25]. Observed initial findings that related to disseminated intravas-cular coagulation, respiratory insufficiency, altered mental status, and renal, cardiac, and liver failure were noted [26,27]. The severity of the illness was assessed with the Acute Physiology and Chronic Health Evaluation II (APACHEII) scoring system [28]. The Sequential Organ Failure Assessment (SOFA) score [29] was added to the study protocol by amendment and retrospectively from the raw data. Infections were diagnosed according to clinical and microbiological criteria. Blood samples for cultures were obtained by the same investigator (O Yildiz) and inoculated into a standard culture medium (BACTEC 9240). Patients were evaluated at enrollment and at the 24th hour, and on the 3rd, 7th, 10th, 14th, 21st, and 28th days, and followed for 1 month after discharge from hospital. Body temperature, respiratory rate, heart rate, blood pressure, the use of vasopressor drugs, urine output, complete blood counts, urinalysis with microscopic examination, erythrocytes sedimentation rate, blood chemistry, prothrombin time, partial thromboplastin time, fibrinogen, fibrin debranch-ing product, blood gases, electrocardiogram, and chest roentgenogram were recorded for each patient individually.\n\nBODY.METHODS.STUDY ENDPOINTS:\nThe primary endpoint of the study was 28-day mortality from all causes. The secondary endpoint consisted of adverse occurrences including possible complications of drug therapy and morbid events such as the progression of initial infection and the development of secondary infection. Secondary infection was defined as the identification of a new site of infection or the emergence of a different organism at the same site, generally requiring a change in antibiotic management. All medications given to the patients, any complications, the duration of hospitalization, the mortality rate, and the causes of death were recorded and compared between the groups. We also compared average values of basal cortisol, peak cortisol and cortisol responses to ACTH between survivors and non-survivors on the first day. Moreover, basal cortisol levels on the 1st and 14th days were compared in patients who recovered in each group.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nStudent's t-test and the Mann-Whitney U multivariate analysis test were used for continuous variables, χ2 and Fisher's χ2 tests were used for proportions, and logistic regression was used for effects of factors on mortality. Values are expressed as means ± SD, odds ratio (OR) and 95% confidence interval (CI) or proportion.\n\nBODY.RESULTS.DESCRIPTION OF STUDY POPULATION:\nForty patients with sepsis, severe sepsis and septic shock were included in this study. The mean age of patients was 56.5 ± 16.4 years in the standard therapy group and 57.8 ± 17.7 years in the steroid therapy group. There was no significant difference in demographic characteristics, the severity of underlying diseases, the APACHE II and SOFA scores, the median time to hospital and median time to death, the acquisition of infection, and the sepsis categories between the groups (Table 2). 
Table 2 Characteristics of patients Variable Steroid therapy group Standard therapy group Total χ 2 P All patients 20 (50%) 20 (50%) 40 0.417 0.748 Female 7 (43.8%) 9 (56.3%) 16 Male 13 (54.2%) 11 (45.8%) 24 Mean age (years) (mean ± SD) 57.8 ± 17.7 56.5 ± 16.4 57.1 ± 16.9 0.818 Underlying diseases 11 (40.7%) 16 (59.3%) 27 2.849 0.176 Rapidly fatal - - - Ultimately fatal 2 (50%) 2 (50%) 4 Nonfatal 9 (39.1%) 14 (60.9%) 23 No underlying diseases 9 (69.2%) 4 (30.8%) 13 APACHE II (mean ± SD) 15.4 ± 5.5 17.9 ± 8.0 16.6 ± 6.9 0.249 Maximum SOFA (mean ± SD) 7.8 ± 3.9 9.3 ± 4.0 8.5 ± 4.0 0.257 Acquisition of infection 0.143 1.000 Community 16 (51.6%) 15 (48.4%) 31 Hospital 4 (44.4%) 5 (55.6%) 9 Sepsis categories 0.456 0.531 Sepsis 8 (57.1%) 6 (42.9%) 14 Severe sepsis 8 (47.1%) 9 (52.9%) 17 Septic shock 4 (44.4%) 5 (55.6%) 9 Positive cultures 2.345 >0.05 Gram-negative infection 4 (36.4%) 7 (63.6%) 11 Gram-positive infection 10 (66.7%) 5 (33.3%) 15 Candida infection 1 (33.3%) 2 (66.7%) 3 Polymicrobial infection 3 (60.0%) 2 (40.0%) 5 Negative cultures 6 (46.2%) 7 (53.8%) 13 Organ failure 16 (47.1%) 18 (52.9%) 34 DIC 5 (45.5%) 6 (54.5%) 11 Laboratory measurements (mean ± SD) Leukocytes (/mm 3 ) 16,057 ± 12,568 14,781 ± 7,317 0.697 Platelets (/mm 3 ) 183,050 ± 137,983 192,800 ± 127,157 0.324 ESR (mm/h) 69 ± 36 49 ± 33 0.081 CRP (mg/l) 67 ± 16 67 ± 24 0.981 Fibrinogen (mg/dl) 814 ± 394 434 ± 198 0.002 Median stay in hospital (days) 14 (95% CI 11.09–20.08) 13 (95% CI 10.13–16.37) 0.406 Median time to death (days) 5 (95% CI, 2.55–8.20) 5.5 (95% CI, 2.19–13) 0.496 Non-survivors 8 (40%) 12 (60%) 20 1.600 * 0.343 Survivors 12 (60%) 8 (40%) 20 * Fisher's χ 2 test. CI, confidence interval; DIC, disseminated intravascular coagulation; ESR, erythrocytes sedimentation rate. \n\nBODY.RESULTS.THE 28-DAY MORTALITY:\nThere were eight deaths (40%) in the steroid therapy group and 12 (60%) deaths in the standard therapy group (P = 0.343). The mortality rate in the patients with hospital-acquired sepsis was higher than that in the patients with community-acquired sepsis in both groups, but this difference was not statistically significant. The relationship between mortality and age, the presence of an underlying disease, vasopressor and steroid therapy, and APACHE II and SOFA scores was assessed. Age, the presence of an underlying disease, steroid therapy, basal plasma cortisol levels, and cortisol response to ACTH below 9 g/dl were not associated with mortality. Although a univariate analysis found vasopressor therapy (OR 4.64, 95% CI 1.02–21.00), APACHE II (OR 1.18, 95% CI 1.04–1.34) and maximum SOFA (OR 1.63, 95% CI 1.23–2.16) to be effective on mortality, only the SOFA score was found related to mortality, independently of other factors (OR 2.09, 95% CI 1.01–4.30) in a multivariate analysis (Fig. 1 and Table 3). Figure 1Comparison of mortality rates for (a) basal cortisol (μg/dl), (b) cortisol response to ACTH (μg/dl), (c) APACHE II and (d) maximum SOFA scores in both groups. 
Table 3 Average values (means ± SD) of basal cortisol, peak cortisol, cortisol responses to corticotropin, APACHE II and maximum SOFA scores according to survivors and non-survivors in both groups on the first day Steroid therapy group Standard therapy group Variable Non-survivors ( n = 8) Survivors ( n = 12) P Non-survivors ( n = 12) Survivors ( n = 8) P Basal cortisol level (μg/dl) 44.4 ± 24.8 52.5 ± 30.5 0.536 60.5 ± 34.7 40.3 ± 19.9 0.154 Peak cortisol level (μg/dl) 73.1 ± 35.1 79.2 ± 33.5 0.699 80.7 ± 45.9 60.1 ± 25.7 0.265 Cortisol response (μg/dl) 28.7 ± 22.1 21.2 ± 32.7 0.579 16.3 ± 24.6 18.2 ± 16.3 0.835 APACHE II score 17.6 ± 4.5 13.8 ± 5.8 0.130 21 ± 7.9 13.3 ± 6 0.024 Maximum SOFA score 10.6 ± 3.2 5.9 ± 3.3 0.009 11.4 ± 2.9 6 ± 3.3 0.003 APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, Sequential Organ Failure Assessment. \n\nBODY.RESULTS.VARIATIONS OF TREATMENT EFFECTS ON THE 28-DAY MORTALITY AMONG SUBGROUPS:\nThe mortality rates among patients with a cortisol level over 60 μg/dl were 29% (2 of 7) in the steroid therapy group and 78% (7 of 9) in the standard therapy group (χ2 = 3.874, P = 0.126). Although there was an increase in the level of basal cortisol in patients with sepsis, 14 of 40 patients (35%) had occult AI. The mortality rates in patients with occult AI were 40% (2 of 5) in the steroid therapy group and 55.6% (5 of 9) in the standard therapy group, respectively (χ2 = 0.311, P = 1). Only one patient had both basal and peak cortisol levels lower than 20 g/dl; this patient was in the standard therapy group and died on the 7th day of treatment. This case was accepted as adrenal failure. A comparison of mortality rates for basal cortisol and cortisol responses to ACTH in both groups is shown in Figure 1. The median time to death was 5 days (95% CI 2.55–8.20) in the steroid therapy group and 5.5 days (95% CI 2.19–13) in the standard therapy group (P = 0.496). The reason for death in all patients was attributed to sepsis. The median stay in hospital was 14 days (95% CI 11.09–20.08) in the steroid therapy group and 13 days (95% CI 10.13–16.37) in the standard therapy group (P = 0.406).\n\nBODY.RESULTS.SEPSIS-RELATED ORGAN DYSFUNCTION:\nOrgan dysfunction and failure rates of the patients on admission were 40% and 45% in the steroid therapy and the standard therapy groups, respectively (Table 4). No statistically significant difference was found between the changes in the percentage of SOFA scores of the steroid therapy group (43.1 ± 26.5%) and the standard therapy group (45.4 ± 12.7%) in survivors (P = 0.624). Table 4 Organ failure in groups at baseline Condition Steroid therapy group Standard therapy group Total Renal failure 6 (50%) 6 (50%) 12 Liver failure 9 (52.9%) 8 (47.1%) 17 Altered mental status 10 (38.5%) 16 (61.5%) 26 DIC 5 (45.5%) 6 (54.5%) 11 Cardiac failure 0 (0%) 2 (100%) 2 Respiratory failure 2 (28.6%) 5 (71.4%) 7 DIC, disseminated intravascular coagulation. \n\nBODY.RESULTS.CORTISOL LEVELS AND CORTISOL RESPONSES TO ACTH:\nBasal and peak cortisol levels and cortisol responses to ACTH in the steroid therapy group were not significantly different from those in the standard therapy group (Table 5). The average values of basal cortisol, peak cortisol and cortisol responses to ACTH in survivors and non-survivors in both groups for the first day are shown in Table 3. There was no significant difference between the values for survivors and non-survivors. 
The mean basal cortisol level was 47.6 ± 26.9 μg/dl on the first day in all survivors and 17.2 ± 8.6 μg/dl on the 14th day in patients who recovered (P = 0.0003). In the steroid therapy group, the mean basal cortisol level was 52.5 ± 30.5 μg/dl on the first day and 17.6 ± 10.3 μg/dl on the 14th day (P = 0.003). In the standard therapy group, the mean basal cortisol level was 40.3 ± 19.9 μg/dl on the first day and 16.4 ± 4.6 μg/dl on the 14th day (P = 0.028) (Table 6). Table 5 Average values (means ± SD) of basal cortisol, peak cortisol and cortisol responses of all patients on the first day Variable Steroid therapy group ( n = 20) Standard therapy group ( n = 20) P Basal cortisol level (μg/dl) 49.3 ± 28 52.4 ± 30.8 0.737 Peak cortisol level (μg/dl) 76.8 ± 33.3 72.5 ± 39.6 0.712 Cortisol response (μg/dl) 24.2 ± 28.5 17.1 ± 21.2 0.376 Table 6 Comparison of basal cortisol levels (μg/dl; means ± SD) on the 1st and 14th days in recovering patients Steroid therapy group Standard therapy group Total 1st day 14th day P 1st day 14th day P 1st day 14th day P Cortisol 52.5 ± 30.5 17.6 ± 10.3 0.003 40.3 ± 19.9 16.4 ± 4.6 0.028 47.6 ± 26.9 17.2 ± 8.6 0.0003 \n\nBODY.RESULTS.ADVERSE EVENTS:\nThere were no adverse effects due to steroid therapy. Only one patient, in the standard therapy group, had a secondary infection.\n\nBODY.DISCUSSION:\nSepsis is a severe and life-threatening disease. Septic shock is associated with a mortality rate of more than 50%. Despite improvements in the overall management of such patients, including intensive fluid resuscitation, broad-spectrum antibiotic therapy, and life-support devices, mortality rates have not improved during the past decade [4]. In this randomized, double-blind study of the physiological dose of intravenous prednisolone or placebo in the treatment of sepsis, we observed an important difference in mortality rates between both groups. The mortality rates were 40% in the steroid therapy group and 60% in the standard treatment group. There was a trend towards a decrease in the mortality rate of the patients with sepsis who received physiological-dose steroid therapy. However, the differences were not statistically significant (P = 0.343). Several factors were suspected to be associated with mortality in sepsis [30,31]. Annane et al. [17] reported that SOFA score, high plasma cortisol levels and weak response of cortisol to ACTH were also associated with mortality. However, we found that only SOFA score was related to mortality, independently of other factors (OR 2.07, 95% CI 1.02–4.22) in the multivariate analysis. Briegel et al. [19] showed that stress doses of hydrocortisone reduce the time for the reversal of shock, the number of organ system failures, and the length of mechanical ventilation in patients with septic shock. This finding underlines the fact that an impaired adrenocortical function contributes to vascular hyporesponsiveness in septic shock. In contrast to other studies, our study was performed in patients with sepsis, severe sepsis and septic shock with the use of physiological-dose prednisolone with or without vasopressor support. Because the administration of prednisolone does not affect the circadian adrenocortical patterns and the results of an ACTH stimulation test in adults, we preferred to use this drug [32]. In this study we did not completely evaluate the effect of the steroid on patients with vasopressor-dependent septic shock. 
However, in a few patients, a physiological dose of prednisolone reduces the time for the reversal of shock as defined by the cessation of vasopressor therapy. It is known that the plasma cortisol level increases in critical illnesses and that basal cortisol levels have a positive correlation with severity of illness and prognosis. However, some investigators [16,23,33,34,35] showed that patients with sepsis and high baseline cortisol levels had a lower cortisol response to the ACTH stimulation test. Our entire study group had higher mean random basal and stimulated cortisol levels than those seen in outpatients. The difference between the basal cortisol levels on the 1st and 14th days in patients who recovered was statistically significant (P < 0.0001) (Table 5). We have also shown that basal cortisol levels correlate with the severity of the illness, and that very high cortisol levels signify a poor prognosis in the standard therapy group (Fig. 1). Clinically significant AI is unusual in outpatients. The studies of adrenal function in critically ill patients report conflicting incidences of AI ranging from 0% to 28% [35,36,37]. Our results show a low incidence of AI in septic patients. One of our 40 patients had abnormal basal and peak cortisol levels (less than 20 μg/dl) and died. This condition was evaluated as AI. In the advancing process from sepsis to septic shock, we concluded that AI was not so frequent as supposed. In recent years, several authors have proposed a syndrome of occult AI in septic shock in the presence of normal or even elevated serum cortisol concentrations. Annane et al. [17] reported that 50% of patients with severe sepsis had occult AI. In the present study, 14 (35%) of the 40 patients had a subnormal cortisol response (occult AI). Five patients in the steroid treatment group had an inadequate cortisol response; two of them died. However, five of nine patients in the standard treatment group who showed inadequate cortisol response died. No statistical differences were observed in mortality rates between the patients who had occult AI (χ2 = 0.311, P = 1) (Fig. 1). In conclusion, a physiological dose of intravenous prednisolone had a tendency towards a decrease in mortality in the patients with sepsis, but the difference between the two groups was not significant. Because our study had a small sample size and showed heterogeneity in population in terms of clinical severity, treatment with physiological-dose steroid in sepsis should be evaluated in a larger group of patients.\n\nBODY.KEY MESSAGES:\n· There was an increase in the level of basal cortisol in patients with sepsis. · In the advancing process from sepsis to septic shock, adrenal insufficiency was not so frequent as supposed. · Sepsis can cause occult adrenal insufficiency in the presence of normal or even elevated serum cortisol concentrations. 
· Physiological-dose prednisolone therapy had a tendency towards a decrease in mortality in the patients with sepsis.\n\nBODY.COMPETING INTERESTS:\nNone declared.\n\nBODY.ABBREVIATIONS:\nAI = adrenal insufficiency; APACHE = Acute Physiology and Chronic Health Evaluation; ACTH = corticotropin; CI = confidence interval; OR = odds ratio; SOFA = Sequential Organ Failure Assessment.\n\n**Question:** Compared to received only standard therapy for sepsis what was the result of received standard therapy for sepsis and physiological-dose steroid therapy for 10 days; on Sequential Organ Failure Assessment (SOFA) score ?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
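Note: the "no significant difference" label above rests on the survivors-only comparison of the percentage change in SOFA scores (43.1 ± 26.5% with steroid therapy, n = 12, vs 45.4 ± 12.7% with standard therapy, n = 8; reported P = 0.624). The snippet below is a minimal sketch, assuming a two-sample t-test computed from these summary statistics with SciPy; the trial analysed raw data (and lists the Mann-Whitney U test among its methods), so the printed P value is illustrative and need not match the published one.

```python
# Minimal sketch: two-sample t-test from the published summary statistics.
# Assumptions: survivors only (12 steroid, 8 standard), equal variances;
# the trial worked from raw data, so this need not reproduce P = 0.624.
from scipy.stats import ttest_ind_from_stats

t_stat, p_value = ttest_ind_from_stats(
    mean1=43.1, std1=26.5, nobs1=12,  # steroid group: % change in SOFA score
    mean2=45.4, std2=12.7, nobs2=8,   # standard group: % change in SOFA score
    equal_var=True,
)
print(f"t = {t_stat:.2f}, two-sided P = {p_value:.3f}")
```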
550
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Physiological-dose steroid therapy in sepsis [ISRCTN36253388]\n\n ABSTRACT.INTRODUCTION:\nThe aim of the study was to assess the prognostic importance of basal cortisol concentrations and cortisol response to corticotropin, and to determine the effects of physiological dose steroid therapy on mortality in patients with sepsis.\n\nABSTRACT.METHODS:\nBasal cortisol level and corticotropin stimulation test were performed within 24 hours in all patients. One group (20 patients) received standard therapy for sepsis and physiological-dose steroid therapy for 10 days; the other group (20 patients) received only standard therapy for sepsis. Basal cortisol level was measured on the 14th day in patients who recovered. The outcome of sepsis was compared.\n\nABSTRACT.RESULTS:\nOnly Sequential Organ Failure Assessment (SOFA) score was found related to mortality, independent from other factors in multivariate analysis. No significant difference was found between the changes in the percentage of SOFA scores of the steroid therapy group and the standard therapy group in survivors, nor between the groups in basal and peak cortisol levels, cortisol response to corticotropin test and mortality. The mortality rates among patients with occult adrenal insufficiencies were 40% in the steroid therapy group and 55.6% in the standard therapy group.\n\nABSTRACT.DISCUSSION:\nThere was a trend towards a decrease in the mortality rates of the patients with sepsis who received physiological-dose steroid therapy. In the advancing process from sepsis to septic shock, adrenal insufficiency was not frequent as supposed. There was a trend (that did not reach significance) towards a decrease in the mortality rates of the patients with sepsis who received physiological-dose steroid therapy.\n\nBODY.INTRODUCTION:\nThis paper was presented at the 10th European Congress of Clinical Microbiology and Infectious Diseases (28–31 May 2000, Stockholm, Sweden). Sepsis can be defined as a systemic response to infection [1]. The incidence of sepsis worldwide is on the increase. Sepsis and its sequels are the leading causes of death in intensive care units. Mortality rates are higher for patients with pre-existing disease, medical conditions, care in the intensive care unit, and multiple organ failure [2,3]. Despite steady improvements in antibiotic therapy and intensive care management during the past decade, mortality has remained close to 50%. This high mortality rate has continued to stimulate interest in pharmacological agents that might reduce morbidity and mortality [4,5,6,7]. Steroid therapy in patients with sepsis is still controversial. In the 1960s, stress doses of hydrocortisone for the treatment of sepsis were investigated, but no advantages could be shown in a double-blind multicenter study [8]. This led to the discontinuation of steroid replacement therapy for sepsis. 
In the 1970s, therapy with pharmacological doses of glucocorticoids was widely used in patients with sepsis and septic shock. The most compelling evidence in favor of corticosteroid treatment was reported by Schumer [9] in his prospective randomized study of steroid administration to patients with septic shock. These data indicate that methylprednisolone (30 mg/kg) or dexamethasone (3 mg/kg) reduced the mortality rate from 38.4% to 10.5%. However, later in the mid-1980s, pharmacological doses of glucocorticoids for the treatment of sepsis were investigated extensively until several clinical trials gave negative results [10,11,12]. Moreover, there is some evidence that the use of high-dose glucocorticoids in sepsis might be harmful [11]. Many studies have demonstrated that elevated cortisol levels in sepsis and the degree of elevation are related to the severity of illness [13]. Basal and corticotropin (ACTH)-stimulated cortisol levels correlate with the severity of illness, and very high cortisol levels often signify a poor prognosis [14]. In sepsis, the hypothalamic–pituitary–adrenal axis is activated through systemic and neural pathways. Circulating cytokines such as tumor necrosis factor α, interleukin-1 and interleukin-6 activate the hypothalamic–pituitary–adrenal axis independently and, when combined, have synergistic effects [15]. Sepsis can also cause adrenal insufficiency (AI), which is associated with increased mortality [16]. In recent years, several authors have proposed a syndrome of occult AI in septic shock in the presence of normal or even elevated serum cortisol concentrations. This hypothesis is based on many studies investigating the adrenocortical response of patients with septic shock to 0.25 mg of ACTH. Up to 28% of seriously ill patients have been suggested to have occult or unrecognized AI [14]. The prevalence of occult AI (a cortisol increment after a short ACTH test of less than 9 mg/dl) in severe sepsis was estimated at about 50% and the 28-day mortality rate at about 75% [17]. A few studies have indicated that stress doses of hydrocortisone improve hemody-namics in patients with hyperdynamic septic shock, which is unresponsive to conventional therapy [18,19]. However, the use of a physiological dose of steroid in patients with sepsis, severe sepsis, and septic shock has not yet been completely evaluated. We therefore performed a placebo-controlled, randomized, double-blind, single-center study. The aim of this study was to assess basal cortisol concentrations and the cortisol response to ACTH stimulation as well as their prognostic importance, and also to determine the effects of the physiological-dose steroid therapy on mortality in patients with sepsis.\n\nBODY.METHODS.STUDY DESIGN:\nThe study protocol was approved by the Institutional Review Board of Erciyes University and informed consent was obtained from the patients' relatives. This placebo-controlled, randomized, double-blind, single-center study was performed at the Department of Medical Intensive Care Unit and the Department of Infectious Diseases of Erciyes University Medical School during a 2-year period (from May 1997 to April 1999).\n\nBODY.METHODS.PATIENT SELECTION:\nPatients over 17 years old and diagnosed with sepsis were included consecutively in the study. The diagnosis of sepsis was based on the definition of the American College of Chest Physicians/Society of Critical Care Medicine Consensus Conference Report [1]. The severity of illness was classified according to this definition (Table 1). 
Table 1 Consensus conference group definitions of the stages of sepsis [ 1 ] I. Systemic inflammatory response syndrome (SIRS) Two or more of the following: Temperature of more than 38°C or less than 36°C Heart rate of more than 90/min Respiratory rate of more than 20/min White blood cell count of more than 12,000/mm 3 or less than 4000/mm 3 or more than 10% immature forms (bands) II. Sepsis Systemic inflammatory response syndrome plus a culture-documented infection III. Severe sepsis Sepsis plus organ dysfunction, hypotension, or hypoperfusion (including but not limited to lactic acidosis, oliguria, or acute alteration in mental status) IV. Septic shock Hypotension (despite fluid resuscitation) plus hypoperfusion abnormalities Criteria for exclusion from the study were as follows: already known pre-existing adrenal disease or adrenalectomy, known malignancies, tuberculosis that might have involved the adrenal gland, and administration of steroids within the 3 months before the admission. In addition, patients with burns, hemorrhagic shock or those who had suffered myocar-dial infarction were not included.\n\nBODY.METHODS.TREATMENT PROTOCOL:\nPatients enrolled in the study were treated with standard therapy used in the treatment of sepsis and septic shock. This therapy could include the following: administration of antibiotics, fluid replacement, vasoactive drugs, mechanical ventila-tory support, and any other form of supportive therapy deemed necessary by the primary physicians. Soon after the presumptive diagnosis of severe sepsis, initial laboratory specimens were obtained and within 2 hours the patients were randomized to treatment with prednisolone or placebo groups. The treatment groups were determined by a computer-generated randomization procedure. The steroid therapy group received prednisolone at a physiological dose. Prednisolone was given intravenously at 06.00 (5 mg) and 18.00 (2.5 mg) for 10 days. The standard therapy group received a placebo infusion containing physiological saline solution in an identical manner. Patients and their primary physicians were blinded as to which therapy was administered.\n\nBODY.METHODS.DATA COLLECTION:\nAn ACTH stimulation test was performed with 250 g of tetracosactrin (synacthene; Ciba Geigy, Germany) given intravenously. Blood samples were taken immediately before the test and at 30 and 60 minutes afterwards. After centrifugation, plasma samples were stored at -20°C until assayed. ACTH stimulation test was repeated on the 14th day in the patients who survived. Plasma cortisol concentrations were determined by radioimmunoassay with a commercially available kit (ICN Biomedicals, Inc, Costa Mesa, California, USA). Intra-assay coefficients of variation were for control A 7.0%, control B 5.8%, and control C 5.1%. Inter-assay coefficients of variation were for control A 7.9%, control B 6.5%, control C 6.0%. The cortisol response was defined as the difference between the basal and peak cortisol concentrations. Normal adrenal function was defined as a plasma cortisol level (before or at 30 or 60 minutes after the injection of ACTH) above 20 μg/dl. The cases with peak cortisol levels lower than 20 μg/dl were considered to be AI [13,20,21,22]. Occult AI was defined as an increase in cortisol after a ACTH test of less than 9 g/dl (a cortisol response of no more than 9 μg/dl) [17,23,24]. 
Community-acquired sepsis had its onset within 72 hours of the patients' admission to the hospital, whereas hospital-acquired sepsis began 72 hours or later after admission. The estimated prognosis of any pre-existing underlying diseases had been classified according to the classification of McCabe and Jackson [25]. Observed initial findings that related to disseminated intravas-cular coagulation, respiratory insufficiency, altered mental status, and renal, cardiac, and liver failure were noted [26,27]. The severity of the illness was assessed with the Acute Physiology and Chronic Health Evaluation II (APACHEII) scoring system [28]. The Sequential Organ Failure Assessment (SOFA) score [29] was added to the study protocol by amendment and retrospectively from the raw data. Infections were diagnosed according to clinical and microbiological criteria. Blood samples for cultures were obtained by the same investigator (O Yildiz) and inoculated into a standard culture medium (BACTEC 9240). Patients were evaluated at enrollment and at the 24th hour, and on the 3rd, 7th, 10th, 14th, 21st, and 28th days, and followed for 1 month after discharge from hospital. Body temperature, respiratory rate, heart rate, blood pressure, the use of vasopressor drugs, urine output, complete blood counts, urinalysis with microscopic examination, erythrocytes sedimentation rate, blood chemistry, prothrombin time, partial thromboplastin time, fibrinogen, fibrin debranch-ing product, blood gases, electrocardiogram, and chest roentgenogram were recorded for each patient individually.\n\nBODY.METHODS.STUDY ENDPOINTS:\nThe primary endpoint of the study was 28-day mortality from all causes. The secondary endpoint consisted of adverse occurrences including possible complications of drug therapy and morbid events such as the progression of initial infection and the development of secondary infection. Secondary infection was defined as the identification of a new site of infection or the emergence of a different organism at the same site, generally requiring a change in antibiotic management. All medications given to the patients, any complications, the duration of hospitalization, the mortality rate, and the causes of death were recorded and compared between the groups. We also compared average values of basal cortisol, peak cortisol and cortisol responses to ACTH between survivors and non-survivors on the first day. Moreover, basal cortisol levels on the 1st and 14th days were compared in patients who recovered in each group.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nStudent's t-test and the Mann-Whitney U multivariate analysis test were used for continuous variables, χ2 and Fisher's χ2 tests were used for proportions, and logistic regression was used for effects of factors on mortality. Values are expressed as means ± SD, odds ratio (OR) and 95% confidence interval (CI) or proportion.\n\nBODY.RESULTS.DESCRIPTION OF STUDY POPULATION:\nForty patients with sepsis, severe sepsis and septic shock were included in this study. The mean age of patients was 56.5 ± 16.4 years in the standard therapy group and 57.8 ± 17.7 years in the steroid therapy group. There was no significant difference in demographic characteristics, the severity of underlying diseases, the APACHE II and SOFA scores, the median time to hospital and median time to death, the acquisition of infection, and the sepsis categories between the groups (Table 2). 
Table 2 Characteristics of patients Variable Steroid therapy group Standard therapy group Total χ 2 P All patients 20 (50%) 20 (50%) 40 0.417 0.748 Female 7 (43.8%) 9 (56.3%) 16 Male 13 (54.2%) 11 (45.8%) 24 Mean age (years) (mean ± SD) 57.8 ± 17.7 56.5 ± 16.4 57.1 ± 16.9 0.818 Underlying diseases 11 (40.7%) 16 (59.3%) 27 2.849 0.176 Rapidly fatal - - - Ultimately fatal 2 (50%) 2 (50%) 4 Nonfatal 9 (39.1%) 14 (60.9%) 23 No underlying diseases 9 (69.2%) 4 (30.8%) 13 APACHE II (mean ± SD) 15.4 ± 5.5 17.9 ± 8.0 16.6 ± 6.9 0.249 Maximum SOFA (mean ± SD) 7.8 ± 3.9 9.3 ± 4.0 8.5 ± 4.0 0.257 Acquisition of infection 0.143 1.000 Community 16 (51.6%) 15 (48.4%) 31 Hospital 4 (44.4%) 5 (55.6%) 9 Sepsis categories 0.456 0.531 Sepsis 8 (57.1%) 6 (42.9%) 14 Severe sepsis 8 (47.1%) 9 (52.9%) 17 Septic shock 4 (44.4%) 5 (55.6%) 9 Positive cultures 2.345 >0.05 Gram-negative infection 4 (36.4%) 7 (63.6%) 11 Gram-positive infection 10 (66.7%) 5 (33.3%) 15 Candida infection 1 (33.3%) 2 (66.7%) 3 Polymicrobial infection 3 (60.0%) 2 (40.0%) 5 Negative cultures 6 (46.2%) 7 (53.8%) 13 Organ failure 16 (47.1%) 18 (52.9%) 34 DIC 5 (45.5%) 6 (54.5%) 11 Laboratory measurements (mean ± SD) Leukocytes (/mm 3 ) 16,057 ± 12,568 14,781 ± 7,317 0.697 Platelets (/mm 3 ) 183,050 ± 137,983 192,800 ± 127,157 0.324 ESR (mm/h) 69 ± 36 49 ± 33 0.081 CRP (mg/l) 67 ± 16 67 ± 24 0.981 Fibrinogen (mg/dl) 814 ± 394 434 ± 198 0.002 Median stay in hospital (days) 14 (95% CI 11.09–20.08) 13 (95% CI 10.13–16.37) 0.406 Median time to death (days) 5 (95% CI, 2.55–8.20) 5.5 (95% CI, 2.19–13) 0.496 Non-survivors 8 (40%) 12 (60%) 20 1.600 * 0.343 Survivors 12 (60%) 8 (40%) 20 * Fisher's χ 2 test. CI, confidence interval; DIC, disseminated intravascular coagulation; ESR, erythrocytes sedimentation rate. \n\nBODY.RESULTS.THE 28-DAY MORTALITY:\nThere were eight deaths (40%) in the steroid therapy group and 12 (60%) deaths in the standard therapy group (P = 0.343). The mortality rate in the patients with hospital-acquired sepsis was higher than that in the patients with community-acquired sepsis in both groups, but this difference was not statistically significant. The relationship between mortality and age, the presence of an underlying disease, vasopressor and steroid therapy, and APACHE II and SOFA scores was assessed. Age, the presence of an underlying disease, steroid therapy, basal plasma cortisol levels, and cortisol response to ACTH below 9 g/dl were not associated with mortality. Although a univariate analysis found vasopressor therapy (OR 4.64, 95% CI 1.02–21.00), APACHE II (OR 1.18, 95% CI 1.04–1.34) and maximum SOFA (OR 1.63, 95% CI 1.23–2.16) to be effective on mortality, only the SOFA score was found related to mortality, independently of other factors (OR 2.09, 95% CI 1.01–4.30) in a multivariate analysis (Fig. 1 and Table 3). Figure 1Comparison of mortality rates for (a) basal cortisol (μg/dl), (b) cortisol response to ACTH (μg/dl), (c) APACHE II and (d) maximum SOFA scores in both groups. 
Table 3 Average values (means ± SD) of basal cortisol, peak cortisol, cortisol responses to corticotropin, APACHE II and maximum SOFA scores according to survivors and non-survivors in both groups on the first day Steroid therapy group Standard therapy group Variable Non-survivors ( n = 8) Survivors ( n = 12) P Non-survivors ( n = 12) Survivors ( n = 8) P Basal cortisol level (μg/dl) 44.4 ± 24.8 52.5 ± 30.5 0.536 60.5 ± 34.7 40.3 ± 19.9 0.154 Peak cortisol level (μg/dl) 73.1 ± 35.1 79.2 ± 33.5 0.699 80.7 ± 45.9 60.1 ± 25.7 0.265 Cortisol response (μg/dl) 28.7 ± 22.1 21.2 ± 32.7 0.579 16.3 ± 24.6 18.2 ± 16.3 0.835 APACHE II score 17.6 ± 4.5 13.8 ± 5.8 0.130 21 ± 7.9 13.3 ± 6 0.024 Maximum SOFA score 10.6 ± 3.2 5.9 ± 3.3 0.009 11.4 ± 2.9 6 ± 3.3 0.003 APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, Sequential Organ Failure Assessment. \n\nBODY.RESULTS.VARIATIONS OF TREATMENT EFFECTS ON THE 28-DAY MORTALITY AMONG SUBGROUPS:\nThe mortality rates among patients with a cortisol level over 60 μg/dl were 29% (2 of 7) in the steroid therapy group and 78% (7 of 9) in the standard therapy group (χ2 = 3.874, P = 0.126). Although there was an increase in the level of basal cortisol in patients with sepsis, 14 of 40 patients (35%) had occult AI. The mortality rates in patients with occult AI were 40% (2 of 5) in the steroid therapy group and 55.6% (5 of 9) in the standard therapy group, respectively (χ2 = 0.311, P = 1). Only one patient had both basal and peak cortisol levels lower than 20 g/dl; this patient was in the standard therapy group and died on the 7th day of treatment. This case was accepted as adrenal failure. A comparison of mortality rates for basal cortisol and cortisol responses to ACTH in both groups is shown in Figure 1. The median time to death was 5 days (95% CI 2.55–8.20) in the steroid therapy group and 5.5 days (95% CI 2.19–13) in the standard therapy group (P = 0.496). The reason for death in all patients was attributed to sepsis. The median stay in hospital was 14 days (95% CI 11.09–20.08) in the steroid therapy group and 13 days (95% CI 10.13–16.37) in the standard therapy group (P = 0.406).\n\nBODY.RESULTS.SEPSIS-RELATED ORGAN DYSFUNCTION:\nOrgan dysfunction and failure rates of the patients on admission were 40% and 45% in the steroid therapy and the standard therapy groups, respectively (Table 4). No statistically significant difference was found between the changes in the percentage of SOFA scores of the steroid therapy group (43.1 ± 26.5%) and the standard therapy group (45.4 ± 12.7%) in survivors (P = 0.624). Table 4 Organ failure in groups at baseline Condition Steroid therapy group Standard therapy group Total Renal failure 6 (50%) 6 (50%) 12 Liver failure 9 (52.9%) 8 (47.1%) 17 Altered mental status 10 (38.5%) 16 (61.5%) 26 DIC 5 (45.5%) 6 (54.5%) 11 Cardiac failure 0 (0%) 2 (100%) 2 Respiratory failure 2 (28.6%) 5 (71.4%) 7 DIC, disseminated intravascular coagulation. \n\nBODY.RESULTS.CORTISOL LEVELS AND CORTISOL RESPONSES TO ACTH:\nBasal and peak cortisol levels and cortisol responses to ACTH in the steroid therapy group were not significantly different from those in the standard therapy group (Table 5). The average values of basal cortisol, peak cortisol and cortisol responses to ACTH in survivors and non-survivors in both groups for the first day are shown in Table 3. There was no significant difference between the values for survivors and non-survivors. 
The mean basal cortisol level was 47.6 ± 26.9 μg/dl on the first day in all survivors and 17.2 ± 8.6 μg/dl on the 14th day in patients who recovered (P = 0.0003). In the steroid therapy group, the mean basal cortisol level was 52.5 ± 30.5 μg/dl on the first day and 17.6 ± 10.3 μg/dl on the 14th day (P = 0.003). In the standard therapy group, the mean basal cortisol level was 40.3 ± 19.9 μg/dl on the first day and 16.4 ± 4.6 μg/dl on the 14th day (P = 0.028) (Table 6). Table 5 Average values (means ± SD) of basal cortisol, peak cortisol and cortisol responses of all patients on the first day Variable Steroid therapy group ( n = 20) Standard therapy group ( n = 20) P Basal cortisol level (μg/dl) 49.3 ± 28 52.4 ± 30.8 0.737 Peak cortisol level (μg/dl) 76.8 ± 33.3 72.5 ± 39.6 0.712 Cortisol response (μg/dl) 24.2 ± 28.5 17.1 ± 21.2 0.376 Table 6 Comparison of basal cortisol levels (μg/dl; means ± SD) on the 1st and 14th days in recovering patients Steroid therapy group Standard therapy group Total 1st day 14th day P 1st day 14th day P 1st day 14th day P Cortisol 52.5 ± 30.5 17.6 ± 10.3 0.003 40.3 ± 19.9 16.4 ± 4.6 0.028 47.6 ± 26.9 17.2 ± 8.6 0.0003 \n\nBODY.RESULTS.ADVERSE EVENTS:\nThere were no adverse effects due to steroid therapy. Only one patient, in the standard therapy group, had a secondary infection.\n\nBODY.DISCUSSION:\nSepsis is a severe and life-threatening disease. Septic shock is associated with a mortality rate of more than 50%. Despite improvements in the overall management of such patients, including intensive fluid resuscitation, broad-spectrum antibiotic therapy, and life-support devices, mortality rates have not improved during the past decade [4]. In this randomized, double-blind study of the physiological dose of intravenous prednisolone or placebo in the treatment of sepsis, we observed an important difference in mortality rates between both groups. The mortality rates were 40% in the steroid therapy group and 60% in the standard treatment group. There was a trend towards a decrease in the mortality rate of the patients with sepsis who received physiological-dose steroid therapy. However, the differences were not statistically significant (P = 0.343). Several factors were suspected to be associated with mortality in sepsis [30,31]. Annane et al. [17] reported that SOFA score, high plasma cortisol levels and weak response of cortisol to ACTH were also associated with mortality. However, we found that only SOFA score was related to mortality, independently of other factors (OR 2.07, 95% CI 1.02–4.22) in the multivariate analysis. Briegel et al. [19] showed that stress doses of hydrocortisone reduce the time for the reversal of shock, the number of organ system failures, and the length of mechanical ventilation in patients with septic shock. This finding underlines the fact that an impaired adrenocortical function contributes to vascular hyporesponsiveness in septic shock. In contrast to other studies, our study was performed in patients with sepsis, severe sepsis and septic shock with the use of physiological-dose prednisolone with or without vasopressor support. Because the administration of prednisolone does not affect the circadian adrenocortical patterns and the results of an ACTH stimulation test in adults, we preferred to use this drug [32]. In this study we did not completely evaluate the effect of the steroid on patients with vasopressor-dependent septic shock. 
However, in a few patients, a physiological dose of prednisolone reduces the time for the reversal of shock as defined by the cessation of vasopressor therapy. It is known that the plasma cortisol level increases in critical illnesses and that basal cortisol levels have a positive correlation with severity of illness and prognosis. However, some investigators [16,23,33,34,35] showed that patients with sepsis and high baseline cortisol levels had a lower cortisol response to the ACTH stimulation test. Our entire study group had higher mean random basal and stimulated cortisol levels than those seen in outpatients. The difference between the basal cortisol levels on the 1st and 14th days in patients who recovered was statistically significant (P < 0.0001) (Table 5). We have also shown that basal cortisol levels correlate with the severity of the illness, and that very high cortisol levels signify a poor prognosis in the standard therapy group (Fig. 1). Clinically significant AI is unusual in outpatients. The studies of adrenal function in critically ill patients report conflicting incidences of AI ranging from 0% to 28% [35,36,37]. Our results show a low incidence of AI in septic patients. One of our 40 patients had abnormal basal and peak cortisol levels (less than 20 μg/dl) and died. This condition was evaluated as AI. In the advancing process from sepsis to septic shock, we concluded that AI was not so frequent as supposed. In recent years, several authors have proposed a syndrome of occult AI in septic shock in the presence of normal or even elevated serum cortisol concentrations. Annane et al. [17] reported that 50% of patients with severe sepsis had occult AI. In the present study, 14 (35%) of the 40 patients had a subnormal cortisol response (occult AI). Five patients in the steroid treatment group had an inadequate cortisol response; two of them died. However, five of nine patients in the standard treatment group who showed inadequate cortisol response died. No statistical differences were observed in mortality rates between the patients who had occult AI (χ2 = 0.311, P = 1) (Fig. 1). In conclusion, a physiological dose of intravenous prednisolone had a tendency towards a decrease in mortality in the patients with sepsis, but the difference between the two groups was not significant. Because our study had a small sample size and showed heterogeneity in population in terms of clinical severity, treatment with physiological-dose steroid in sepsis should be evaluated in a larger group of patients.\n\nBODY.KEY MESSAGES:\n· There was an increase in the level of basal cortisol in patients with sepsis. · In the advancing process from sepsis to septic shock, adrenal insufficiency was not so frequent as supposed. · Sepsis can cause occult adrenal insufficiency in the presence of normal or even elevated serum cortisol concentrations. 
· Physiological-dose prednisolone therapy had a tendency towards a decrease in mortality in the patients with sepsis.\n\nBODY.COMPETING INTERESTS:\nNone declared.\n\nBODY.ABBREVIATIONS:\nAI = adrenal insufficiency; APACHE = Acute Physiology and Chronic Health Evaluation; ACTH = corticotropin; CI = confidence interval; OR = odds ratio; SOFA = Sequential Organ Failure Assessment.\n\n**Question:** Compared to received only standard therapy for sepsis what was the result of received standard therapy for sepsis and physiological-dose steroid therapy for 10 days; on The 28-day mortality?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
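Note: the 28-day mortality comparison behind the label above (8/20 deaths with steroid therapy vs 12/20 with standard therapy; reported P = 0.343 by Fisher's test) can be checked from the counts alone. Below is a minimal sketch with SciPy, assuming the 2 × 2 layout reconstructed from those counts.

```python
# Minimal sketch: Fisher's exact test on the reported 28-day mortality counts.
# The 2x2 table is an assumption reconstructed from the text (8/20 vs 12/20 deaths).
from scipy.stats import fisher_exact

table = [[8, 12],   # steroid therapy group: deaths, survivors
         [12, 8]]   # standard therapy group: deaths, survivors
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, two-sided P = {p_value:.3f}")
# Should land near the reported P = 0.343, i.e. no significant difference.
```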
559
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Effects of lifestyle physical activity on perceived symptoms and physical function in adults with fibromyalgia: results of a randomized trial\n\n ABSTRACT.INTRODUCTION:\nAlthough exercise is therapeutic for adults with fibromyalgia (FM), its symptoms often create obstacles that discourage exercise. We evaluated the effects of accumulating at least 30 minutes of self-selected lifestyle physical activity (LPA) on perceived physical function, pain, fatigue, body mass index, depression, tenderness, and the six-minute walk test in adults with FM.\n\nABSTRACT.METHODS:\nEighty-four minimally active adults with FM were randomized to either LPA or a FM education control (FME) group. LPA participants worked toward accumulating 30 minutes of self-selected moderate-intensity LPA, five to seven days per week, while the FME participants received information and support.\n\nABSTRACT.RESULTS:\nSeventy-three of the 84 participants (87%) completed the 12-week trial. The LPA group increased their average daily steps by 54%. Compared to FME, the LPA group reported significantly less perceived functional deficits (P = .032) and less pain (P = .006). There were no differences between the groups on the six-minute walk test (P = .067), fatigue, depression, body mass index, or tenderness.\n\nABSTRACT.CONCLUSIONS:\nAccumulating 30 minutes of LPA throughout the day produces clinically relevant changes in perceived physical function and pain in previously minimally active adults with FM.\n\nABSTRACT.TRIAL REGISTRATION:\nclinicaltrials.gov NCT00383084\n\nBODY.INTRODUCTION:\nFibromyalgia (FM) is a chronic, multidimensional disorder characterized by persistent, widespread body pain and tenderness [1]. FM is estimated to occur in 2% of the U.S. general population, affecting about eight times more women than men [2,3]. Symptoms associated with FM include body pain, fatigue, sleep disruption, headache, memory or concentration problems, mood disturbances, and irritable bowel syndrome [4]. FM often substantially hampers day-to-day functioning and is a primary cause of disability [5]. Even with the recent Food and Drug Administration approval of medications to treat FM, pharmacotherapy generally produces modest and inconsistent benefits on symptoms, functioning, and quality of life [6]. As such, nonpharmacologic treatments, such as exercise and cognitive-behavioral interventions, are recommended to assist people with FM to better manage the array of symptoms and functional deficits [6]. Although exercise has been shown to be beneficial [for example, [7]], the symptoms of FM often create obstacles that deter many from exercising consistently enough to derive benefits [8]. Thus, finding new ways to promote increased physical activity in persons with FM that can be sustained overtime is important. One promising approach is to ask people with FM to increase their lifestyle physical activity (LPA). 
LPA involves working toward meeting the U.S. Surgeon General's 1996 Physical Activity Recommendations of accumulating at least 30 minutes, above one's usual activity, of moderate-intensity physical activity five to seven days a week by integrating short bouts of activity into the day, such as increasing the amount of walking, performing more yard work, using the stairs and so on [9-11]. Although it is unclear whether a continuous 30 minute bout of physical activity is superior to accumulating smaller (10- to 15-minute) bouts of activity with regard to health outcomes, asking people with FM to accumulate small bouts of physical activity throughout the day, as opposed to being active for 30 consecutive minutes, might be less taxing and therefore easier to initiate and sustain over time. In a pilot study [12], we found that small bouts of LPA promoted a 70% increase in physical activity in FM. However, in that small study LPA did not produce significant benefits on pain, fatigue, or perceived physical function compared to controls. As part of an ongoing randomized trial designed to investigate the effects of LPA on ambulatory reports of physical activity, pain and fatigue, as well as measures of fitness, pain threshold and pain tolerance, we also collected questionnaire-based data on these variables. This paper presents the results on questionnaire-based assessments of perceived physical function, pain, fatigue and depression, as well as tenderness and aerobic endurance after 12 weeks of LPA in minimally active adults with FM.\n\nBODY.MATERIALS AND METHODS.PARTICIPANTS:\nParticipants were 92 adults (88 women and 4 men) aged 18 years or older who met American College of Rheumatology diagnostic criteria for FM [13]. The mean (SD) age of participants was 47.7 ± 10.7 years and 80% were white. The mean duration of FM was 7.5 ± 6.2 years. At enrollment, participants were not meeting the US Surgeon General's 1996 recommendation for physical activity [11] for the previous six months (that is, not engaging in either moderate-intensity physical activity for ≥ 30 minutes on ≥ five days per week or vigorous physical activity ≥ three times per week for ≥ 20 minutes each time during the previous month). Persons with acute or chronic medical conditions that could preclude active participation (for example, cancer, coronary artery disease) were excluded from the trial. We also excluded those who intended to change medications that might affect mood, those who intended to seek professional treatment for anxiety or depression during the study period, and those who were unwilling to make the required time commitment. Participants were recruited from the Johns Hopkins Arthritis Center, affiliated Johns Hopkins Rheumatology clinics, by advertisements in the Arthritis Foundation Maryland Chapter newsletter, newspaper advertisements, and via clinical trial recruitment websites, including clinicaltrials.gov. All participants completed baseline testing which included a series of questionnaires, a tender point examination, and a six-minute walk test. At baseline, participants also wore a waist-mounted pedometer (AccuSplit Eagle 1020, Livermore, CA, USA) for seven days (recalibrating it each morning) and recorded their daily step count. These data were used to calculate the mean steps per day as an estimate of physical activity. 
This study was approved by the Institutional Review Board of Johns Hopkins University School of Medicine, and all participants gave written informed consent.\n\nBODY.MATERIALS AND METHODS.STUDY PROCEDURES:\nParticipants were randomized via a coin flip at a 1:1 allocation ratio to each of the two groups. The group meetings for LPA and FME were held on different days to avoid contact between participants assigned to the different conditions. The interventions did not replace usual medical care and the participants had comparable durations of contact time with study staff (Table 1 summarizes the LPA and FME conditions). Table 1 Description of lifestyle physical activity (LPA) and fibromyalgia education (FME) protocols* Component LPA FME Three, two-hour FM education and support meetings NO YES Physical activity intervention delivered in six, one-hour meetings YES NO Wear pedometer and keep a physical activity log YES NO Prescribed physical activity YES NO Approximately six hours of face-to-face contact time YES YES Topics Covered During the Meetings LPA FME \"Physical Activity & FM\" Described FM (symptoms, diagnosis, treatment)/benefits of physical activity/demonstrated moderate-intensity LE/prescribed LE and self-monitoring/identified and addressed barriers to physical activity (Sessions 1 & 2) \"FM: What is it and how is it diagnosed?\" Presented general information on the symptoms of FM and how it is diagnosed; Discussion and social gathering (Session 1) \"How to Keep Moving\" Discussed progress, effect on symptoms, goal setting, problem solving, importance of self-monitoring, provided feedback, and troubleshooting (Sessions 3 & 4) \"What causes FM?\" Presented the latest information on the causes and consequences of FM; Discussion and social gathering (Session 2) \"Now It's Up To You\" Planned for setbacks & developed strategies to overcome them, set long-term goals, self-monitoring over the long-term (Sessions 5 & 6) \"Treating FM\" Discussed of medical and non-medical approaches, including exercise, to treating FM; Discussion and social gathering (Session 3) LPA Accumulate ≥ 30 minutes of self-selected physical activity five to seven days per week FME Did not alter their characteristic level of physical activity *FM, fibromyalgia; FME = fibromyalgia education; LPA, lifestyle physical activity \n\nBODY.MATERIALS AND METHODS.LIFESTYLE PHYSICAL ACTIVITY (LPA):\nParticipants assigned to LPA attended six, 60-minute group sessions over 12 weeks. Delivered by one of the authors (KRF), the LPA protocol was identical to the one developed for our pilot study [12] and was loosely based on Active Living Every Day, a cognitive-behavioral physical activity promotion program developed by Dr. Steven Blair and colleagues at the Cooper Aerobics Center [14]. The LPA protocol addressed FM-specific challenges to becoming more physically active (that is, dealing with pain and fatigue, fear that physical activity will promote a flare) and discussed how LPA successfully addresses them. The goal of the LPA intervention was to increase moderate-intensity physical activity by helping participants find ways to accumulate short bouts of physical activity throughout the day. Participants were asked to gradually work their way up to meeting the Surgeon General's 1996 recommendation of accumulating 30 minutes, above usual activity, of moderate-intensity LPA five to seven days each week. 
At the first session, participants were taught how to perform their LPA at moderate-intensity (that is, you will be breathing heavily but not so heavily that you could not hold a conversation). They were also prescribed 15 minutes, above usual level, of accumulated moderate-intensity LPA five to seven days a week, and asked to increase the daily duration of LPA by five minutes each week. The five-minute increase in the daily duration of LPA was based on findings from our pilot study [12] and was well-tolerated by the majority of participants. Thus, by Week 5, most participants were accumulating 30 minutes, above their usual level, of LPA five to seven days a week. Participants were free to accumulate more than 30 minutes of LPA five to seven days per week, if desired. During subsequent sessions participants were taught self-monitoring of LPA, goal setting, dealing with symptom flares, problem solving strategies to overcome barriers to being more physically active, as well as instruction in finding new ways to integrate short bouts of LPA into their daily lives. Feedback focused on whether participants achieved the prescribed level of LPA, as well as the LPA's influence on symptoms. Participants wore the waist-mounted pedometers to record their steps each day (that is, as an assessment of adherence to LPA). Participants were shown how to use the pedometer, where to place it, and how to record their steps on a step diary form. At the end of each day they recorded their steps on a diary form and zeroed their pedometer so they could record their steps for the next day. They also kept a diary that categorized the types of LPA's they engaged in (for example, garden/outdoor activity, household activity, leisure activity). The step count data and diary entries were collected at each intervention session.\n\nBODY.MATERIALS AND METHODS.FIBROMYALGIA EDUCATION (FME):\nParticipants assigned to the FME group met monthly for three months. FME was a minimal intervention with each session divided into three components: (1) education (45 to 60 minutes), (2) question and answer (20 to 30 minutes), and (3) social support (20 to 30 minutes). Conducted by an experienced FM support group facilitator, these 90- to 120-minute sessions presented information on the symptoms, diagnosis, and treatment of FM. The rationale for FME was to provide education and to control for the effects of being enrolled in a clinical trial and receiving increased attention and social support. Moreover, by providing a minimal intervention, as opposed to a standard care control, we anticipated enhancing retention. The final session of FME presented information on exercise and physical activity, but no specific recommendations or prescription concerning exercise was given. To avoid the possibility that wearing a pedometer would increase their physical activity, FME participants only wore one for the baseline and post-testing assessments.\n\nBODY.MATERIALS AND METHODS.OUTCOMES MEASURES:\nThe following outcome measures were collected at baseline and after the 12-week intervention.\n\nBODY.MATERIALS AND METHODS.PRIMARY OUTCOME.PERCEIVED PHYSICAL FUNCTION:\nPerceived physical function was assessed using the Fibromyalgia Impact Questionnaire (FIQ) total score. The FIQ is a well-validated 10-item questionnaire that measures aspects of physical functioning in patients with FM [15]. The FIQ is scored so that higher scores are indicative of poorer functioning. 
Test-retest reliability ranged from .56 to .95 and construct validity relative to tender points was acceptable (rs = approximately .56) [15].\n\nBODY.MATERIALS AND METHODS.SECONDARY OUTCOMES.PAIN:\nPain was assessed using a 100 mm Visual Analogue Scale (VAS) where participants rated their current level of pain, ranging from 0 (no pain) to 100 (worse pain imaginable).\n\nBODY.MATERIALS AND METHODS.SECONDARY OUTCOMES.FATIGUE:\nThe Fatigue Severity Scale (FSS) [16] was used to assess the current level of fatigue. The FSS is a nine-item questionnaire, answered on a seven-point scale, ranging from strongly agree to strongly disagree. The FSS has good internal consistency (Cronbach's alpha = .81) and correlates with VAS fatigue measures (r = .68) [16].\n\nBODY.MATERIALS AND METHODS.SECONDARY OUTCOMES.DEPRESSION:\nDepression was assessed using the Center for Epidemiologic Studies Depression Scale (CES-D) [17]. The CES-D contains 20-items rated on a four-point Likert scale ranging from 0 (rarely or none of the time) to 3 (most or all of the time), and measures symptoms during the past week. The CES-D is a widely used measure of depressive symptoms and has acceptable internal consistency (.84 to .90) and validity (r = .56 with clinical rating of depression severity) [17].\n\nBODY.MATERIALS AND METHODS.SECONDARY OUTCOMES.TENDERNESS:\nA digital tender point examination, at the 18 sites specified in the American College of Rheumatology FM classification criteria [13], was completed at baseline and after the intervention. Tender point counts are moderately reliable in classifying the tenderness associated with FM (kappa = .75) and inter-rater agreement on the presence of tenderness through digital examination is .51 [13].\n\nBODY.MATERIALS AND METHODS.SECONDARY OUTCOMES.BODY MASS INDEX (BMI: KG/M:\nWeight and height were recorded at each assessment and these variables were used to calculate BMI, an index of body weight adjusted for height.\n\nBODY.MATERIALS AND METHODS.SECONDARY OUTCOMES.SIX-MINUTE WALK TEST:\nThe six-minute walk [18] is a measure of aerobic endurance. For this test, participants walked as far they could in six minutes on a preselected course, with the distance walked recorded. The reliability of the six-minute walk test is excellent (r = .91) and it correlates with the FIQ (r = -.49) and is sensitive to change due to exercise in distance walked (+78 m), and VO2 (+1.8 ml/kg/min) [18]. The six-minute walk test was measured at baseline and at post-testing. We expressed the results as meters per second, an index of gait speed.\n\nBODY.MATERIALS AND METHODS.SAMPLE SIZE AND DATA ANALYSIS:\nThirty-five adults with FM per group were projected to provide a power of 80% to detect a clinically significant 20% difference between the groups on the FIQ score. Ninety-two participants were enrolled to allow for a 25% post-randomization drop out rate. Baseline data between the two groups were compared using t-tests or Chi Square tests. Changes in perceived physical function, depression, pain, tenderness, fatigue, BMI, and the six-minute walk test were compared between the LPA and FME groups using between-subjects t-tests. Because there was a significant difference between the LPA and FME groups on self-reported duration of FM (see Table 2), as a sensitivity analysis, we adjusted scores of the outcome measures for the duration of FM and replicated the analyses. 
We also used regression techniques to adjust the outcome measures on the basis of whether or not the participant reported any change in their ongoing FM treatments, either pharmacologic or non-pharmacologic (0 = no change, 1 = change) during the trial. Because the results did not differ as a function of these adjustments, we present the results for the unadjusted outcome variables. Although data from all subjects were analyzed regardless of whether those subjects complied with or remained in treatment, participants with missing data on a particular variable were excluded from that particular analysis. We also performed an analysis among participants who completed the 12-week trial (completers only). Cohen's d effects size estimates [19] were calculated for each difference on the outcome measures between LPA and FME. Analyses were performed using SPSS software, Version 16. Two-tailed P values of < 0.05 were used to denote statistical significance. Table 2 Baseline characteristics of the randomized participants* Characteristic Lifestyle Physical Activity (LPA) Fibromyalgia Education (FME) P value N (%) of participants 46 (55) 38 (45) Age, years 46.4 ± 11.6 49.0 ± 10.2 0.287 Female, N (%) 43 (94) 38 (100) 0.248 Self-reported race, N (%) 0.789 White 36 (78) 31 (82) Non-White 10 (22) 7 (18) Marital status, N (%) 0.519 Married or cohabitating 24 (52) 24 (63) Widowed, divorced, or separated 12 (27) 11 (29) Single 10 (22) 3 (8) Educational level, N (%) 0.454 Postgraduate 9 (20) 5 (13) College graduate 16 (34) 11 (32) Some college 11 (24) 13 (34) High school 10 (22) 8 (21) Employment status, N (%) 0.923 Employed 20 (43) 18 (47) Unemployed or Disabled 11 (24) 9 (24) Retired or Other 15 (33) 11 (29) Years since diagnosis a 5.9 ± 5.1 9.6 ± 6.8 0.007 Steps per day 3,788 ± 2,135 3,071 ± 1,810 0.139 Taking FM medications, N (%) a 31 (82) 40 (87) 0.498 Taking other medications, N (%) a 33 (87) 44 (96) 0.146 * Data for categorical variables are presented as N' and percentages; data for continuous variables are presented as means ± standard deviation. a Obtained from self-report. FME, fibromyalgia education; LPA, lifestyle physical activity \n\nBODY.RESULTS:\nNine participants withdrew after baseline testing but prior to randomization (see Figure 1). We randomized 46 participants into the LPA intervention and 38 into the FME group in five separate cohorts of approximately 8 to 10 per cohort at six-month intervals. (Because the FME facilitator was unavailable, one smaller cohort (N = 4) was comprised of only LPA participants). Selected baseline characteristics of the 84 participants are shown in Table 2. With the exception of duration of FM, the two groups were comparable on age, race, education, employment status, BMI, and the use of medications for FM or for other medical conditions. Figure 1Participant flow. Seventy-three of the 84 participants (87%) completed the 12-week intervention and post-testing. Drop outs were unrelated to randomized treatment assignment (P = .988) and there were no significant differences on any baseline variables between those who dropped out and those who completed post-testing. There was also no difference in the mean percentage of meetings attended by those randomized to the FME (77%) and LPA (72%) groups (P = .542). As shown in Figure 2, the LPA group significantly increased the mean number of daily steps from 3,788 ± 2,135 at baseline to 5,837 ± 1,770 at the final intervention session (P = .001). 
This represents a 54% increase in the mean number of daily steps over the course of the 12-week intervention. Although walking was the most common form of LPA, other popular forms of LPA included garden/outdoor activity (for example, mowing the lawn, planting flowers, pulling weeds); household activity (for example, cleaning out a closet, vacuuming, doing laundry); and sports activity (for example, cycling, swimming, field hockey). Figure 2Average steps per day (with 95% confidence interval) for the study groups. At baseline, there were no significant differences between the LPA and FME groups on FIQ, pain, fatigue, depression, number of tender points, BMI, and six-minute walk distance (see Table 3). At post-testing, compared to the FME group, the LPA group reported significant reductions in the FIQ score (P = .032; Cohen's d = .53) and in pain (P = .006; Cohen's d = .67). The difference between the LPA and FME groups on the six-minute walk test approached significance (P = .067; Cohen's d = .53). There were no significant differences between the groups on BMI, fatigue, depression, or the number of tender points. The results (data not shown) were not materially altered when the analysis was restricted to only participants who completed the entire 12-week trial (that is, completers only analysis). Table 3 Differences between lifestyle physical activity (LPA) and fibromyalgia education (FME) groups on the primary and secondary study measures a Variable Mean ± SD Mean difference between groups at baseline and at 12-weeks (95% CI) P Value Cohen's d N LPA FME FM impact questionnaire Baseline 84 67.5 ± 12.0 69.7 ± 13.4 2.2 (-3.3 to 7.8) 0.424 .17 Post intervention 73 56.7 ± 20.6 67.0 ± 18.6 10.2 (.91 to 19.6) 0.032 .53 Pain Baseline 84 54.6 ± 25.6 58.9 ± 25.0 4.3 (-7.2 to 14.9) 0.489 .17 Post intervention 73 46.3 ± 24.2 62.4 ± 24.5 16.1 (4.6 to 27.5) 0.006 .67 Fatigue severity scale Baseline 84 51.9 ± 9.3 52.3 ± 9.1 .4 (-3.6 to 4.4) 0.843 .04 Post intervention 73 50.6 ± 9.9 51.4 ± 10.1 .8 (-3.9 to 5.5) 0.727 .07 CES-D* Baseline 84 23.4 ± 8.6 24.0 ± 10 .6 (-3.8 to 4.5) 0.798 .06 Post intervention 73 21.6 ± 9.8 21.2 ± 11.3 .4 (-5.3 to 4.6) 0.888 .04 Number of tender points Baseline 84 16.2 ± 2.3 16.1 ± 3.2 .1 (-1.2 to 1.0) 0.979 .03 Post intervention 72 16.0 ± 2.3 16.8 ± 2.0 .8 (-.35 to 1.9) 0.172 .37 Body mass index (BMI) Baseline 82 31.4 ± 8.4 29.8 ± 6.2 1.6 (-4.7 to 1.7) 0.360 .22 Post intervention 60 31.0 ± 9.0 29.9 ± 6.2 1.1 (-5.3 to 2.9) 0.575 .14 Six-minute walk test, yd/sec Baseline 77 1.08 ± 0.15 1.08 ± 0.19 .0004 (-0.78 to 0.79) 0.991 0 Post intervention 62 1.24 ± 0.28 1.11 ± 0.20 1.21 (-0.25 to 0.008) 0.067 .53 * Center for Epidemiologic Studies Depression Scale; a between-subjects t-tests were used to derive P values; b Cohen's d [ 19 ] effect size estimates ( d = .20 (small effect); d = .50 (medium effect); d = .80 (large effect)). FME, fibromyalgia education; LPA, lifestyle physical activity \n\nBODY.DISCUSSION:\nThe 12-week program, designed to help minimally active adults with FM increase their physical activity by working toward accumulating at least 30 minutes of self-selected moderate-intensity physical activity most days of the week, produced a 54% increase in the average number of steps taken per day. Compared to the FME control group, LPA participants significantly reduced their perceived functional deficits (that is, FIQ score) and pain. 
Moreover, compared to FME, the LPA participants had a greater improvement on the six-minute walk (expressed as gait speed), although this difference failed to reach statistical significance. The magnitude of the post-intervention differences, expressed as percent change from LPA to FME groups, were 18% for the FIQ score and 35% for the pain VAS score. When expressed as Cohen's d effect sizes these are indicative of medium-sized effects. Moreover, the change on the FIQ score exceeds the minimally clinically important difference of 14% recently identified [20], suggesting that increasing physical activity via LPA produces changes on perceived physical function that are of a relevant magnitude. On the other hand, the effect of LPA on the six-minute walk test was not statistically significant (although it produced a Cohen's d of .53). It is important to note that there was a smaller sample size available for this analysis which reduced statistical power. In general our results are in accord with studies investigating the effects of exercise on people with FM [7,8,21,22]. Specifically, the majority of studies suggest that exercise can produce mild-to-moderate benefits on aerobic endurance, strength, functional status, and quality of life [7,23,24]. However, because the exercise interventions investigated vary so markedly in type (for example, water aerobics, traditional aerobics, T'ai Chi, strength training), frequency, intensity, and duration it is difficult to compare results across studies. One thing seems clear from the FM exercise literature, people with FM have difficulty adhering to exercise. Indeed, in FM clinical exercise trials drop-out rates often nearly exceed 30% [for example, [8,24]] suggesting that developing exercise interventions that can be sustained is perhaps as important a goal as finding the particular interventions that produce optimal benefits. The magnitude of the effects of LPA observed in this study on perceived physical function and pain were similar to those obtained in our smaller pilot study [12]. These effects were also generally consistent with other protocols that involve low-to-moderate intensity exercise, interventions that appear to produce the greatest level of compliance in people with FM [for example, [7,8,24]]. It is important to note that even though the LPA group increased their mean daily steps by 54%, it only moved them, as defined by the pedometer-determined physical activity classifications developed by Tudor-Locke and colleagues [25], from the sedentary (<5,000 steps/day) to the low active (5,000 to 7,499 steps/day) category. Indeed, the mean steps per day at post-testing among the LPA participants were comparable to the mean daily steps observed in patients with progressive neuromuscular disease, and are significantly lower than other special populations such as diabetics, patients undergoing breast cancer treatment, and those with joint replacements [26]. This suggests that, even with the initiation of LPA, people with FM progress only to a relatively low level of physical activity. It is important to note, however, that the trajectory of the mean step count continued to rise over the 12 weeks suggesting that, had the trial continued, their physical activity may have continued to increase. It may be that people with FM require more time to eventually reach physical activity recommendations compared to persons with other chronic conditions. This study has limitations and strengths. 
First, to minimize attrition and control for the effects of increased attention, participants randomized to the FME group did receive a minimal intervention. Thus, we cannot determine how LPA compares with a traditional no treatment control group. Second, with the exception of BMI, the tender point count and six-minute walk test, the outcomes described herein were derived from self-report and may be influenced by a variety of factors, including those associated with enrollment in a clinical trial. Third, using pedometers to assess LPA is relatively crude and does not quantify other sorts of physical activities that participants may have engaged in such as cycling or water exercise. Fourth, we did not measure muscle strength during the trial so we are unable to determine whether LPA influences strength. Finally, we excluded persons with FM who had other co-morbid conditions such as uncontrolled hypertension or arthritis which may limit the generalizability of our findings. Strengths of this study include the randomized design, a relatively small drop-out rate (13%), the LPA group's adherence to standardized intervention protocol, and the relatively high rates of attendance to the group sessions.\n\nBODY.CONCLUSIONS:\nThe results of this study suggest that promoting increased physical activity by asking persons with FM to accumulate short bouts of activity throughout the day can markedly increase the average number of steps taken per day and produces clinically relevant reductions in perceived functional deficits and pain. However, the LPA intervention only moved the participants from the sedentary to low physical activity category. This suggests that it is essential to encourage FM patients to increase the duration of their physical activity in ways that do not compromise their ability to sustain the increased level of activity over the intermediate- and long-term.\n\nBODY.ABBREVIATIONS:\nBMI: Body Mass Index; CES-D: Center for Epidemiologic Studies Depression Scale; FIQ: Fibromyalgia Impact Questionnaire; FM: fibromyalgia; FME: Fibromyalgia Education Control Group; FSS: Fatigue Severity Scale; LPA: lifestyle physical activity; SD: standard deviation; VAS: Visual Analogue Scale.\n\nBODY.COMPETING INTERESTS:\nKevin R. Fontaine and Lora Conn declare that they have no competing interests. Daniel J. Clauw has acted as a consultant for Pfizer, Lilly, Forest Laboratories, Cypress Biosciences, Pierre Fabre, UCB, and Wyeth, and has received grant support from Pfizer, Cypress Bioscience, and Forest.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nKF conceived of the study, acquired the funding, participated in the design of the study, the delivery of the intervention, performed the statistical analysis, and drafted the manuscript. LC carried out the recruitment, enrollment, and data collection. DC participated in designing the study and assisted with the drafting of the manuscript. All authors read and approved the final manuscript.\n\n**Question:** Compared to fibromyalgia (FM) education control (FME) group - received information and support what was the result of lifestyle physical activity (LPA) - 30 minutes of self-selected moderate-intensity LPA, five to seven days per week on perceived functional deficits?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decrease
|
562
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Effects of lifestyle physical activity on perceived symptoms and physical function in adults with fibromyalgia: results of a randomized trial\n\n ABSTRACT.INTRODUCTION:\nAlthough exercise is therapeutic for adults with fibromyalgia (FM), its symptoms often create obstacles that discourage exercise. We evaluated the effects of accumulating at least 30 minutes of self-selected lifestyle physical activity (LPA) on perceived physical function, pain, fatigue, body mass index, depression, tenderness, and the six-minute walk test in adults with FM.\n\nABSTRACT.METHODS:\nEighty-four minimally active adults with FM were randomized to either LPA or a FM education control (FME) group. LPA participants worked toward accumulating 30 minutes of self-selected moderate-intensity LPA, five to seven days per week, while the FME participants received information and support.\n\nABSTRACT.RESULTS:\nSeventy-three of the 84 participants (87%) completed the 12-week trial. The LPA group increased their average daily steps by 54%. Compared to FME, the LPA group reported significantly less perceived functional deficits (P = .032) and less pain (P = .006). There were no differences between the groups on the six-minute walk test (P = .067), fatigue, depression, body mass index, or tenderness.\n\nABSTRACT.CONCLUSIONS:\nAccumulating 30 minutes of LPA throughout the day produces clinically relevant changes in perceived physical function and pain in previously minimally active adults with FM.\n\nABSTRACT.TRIAL REGISTRATION:\nclinicaltrials.gov NCT00383084\n\nBODY.INTRODUCTION:\nFibromyalgia (FM) is a chronic, multidimensional disorder characterized by persistent, widespread body pain and tenderness [1]. FM is estimated to occur in 2% of the U.S. general population, affecting about eight times more women than men [2,3]. Symptoms associated with FM include body pain, fatigue, sleep disruption, headache, memory or concentration problems, mood disturbances, and irritable bowel syndrome [4]. FM often substantially hampers day-to-day functioning and is a primary cause of disability [5]. Even with the recent Food and Drug Administration approval of medications to treat FM, pharmacotherapy generally produces modest and inconsistent benefits on symptoms, functioning, and quality of life [6]. As such, nonpharmacologic treatments, such as exercise and cognitive-behavioral interventions, are recommended to assist people with FM to better manage the array of symptoms and functional deficits [6]. Although exercise has been shown to be beneficial [for example, [7]], the symptoms of FM often create obstacles that deter many from exercising consistently enough to derive benefits [8]. Thus, finding new ways to promote increased physical activity in persons with FM that can be sustained overtime is important. One promising approach is to ask people with FM to increase their lifestyle physical activity (LPA). 
LPA involves working toward meeting the U.S. Surgeon General's 1996 Physical Activity Recommendations of accumulating at least 30 minutes, above one's usual activity, of moderate-intensity physical activity five to seven days a week by integrating short bouts of activity into the day, such as increasing the amount of walking, performing more yard work, using the stairs and so on [9-11]. Although it is unclear whether a continuous 30 minute bout of physical activity is superior to accumulating smaller (10- to 15-minute) bouts of activity with regard to health outcomes, asking people with FM to accumulate small bouts of physical activity throughout the day, as opposed to being active for 30 consecutive minutes, might be less taxing and therefore easier to initiate and sustain over time. In a pilot study [12], we found that small bouts of LPA promoted a 70% increase in physical activity in FM. However, in that small study LPA did not produce significant benefits on pain, fatigue, or perceived physical function compared to controls. As part of an ongoing randomized trial designed to investigate the effects of LPA on ambulatory reports of physical activity, pain and fatigue, as well as measures of fitness, pain threshold and pain tolerance, we also collected questionnaire-based data on these variables. This paper presents the results on questionnaire-based assessments of perceived physical function, pain, fatigue and depression, as well as tenderness and aerobic endurance after 12 weeks of LPA in minimally active adults with FM.\n\nBODY.MATERIALS AND METHODS.PARTICIPANTS:\nParticipants were 92 adults (88 women and 4 men) aged 18 years or older who met American College of Rheumatology diagnostic criteria for FM [13]. The mean (SD) age of participants was 47.7 ± 10.7 years and 80% were white. The mean duration of FM was 7.5 ± 6.2 years. At enrollment, participants were not meeting the US Surgeon General's 1996 recommendation for physical activity [11] for the previous six months (that is, not engaging in either moderate-intensity physical activity for ≥ 30 minutes on ≥ five days per week or vigorous physical activity ≥ three times per week for ≥ 20 minutes each time during the previous month). Persons with acute or chronic medical conditions that could preclude active participation (for example, cancer, coronary artery disease) were excluded from the trial. We also excluded those who intended to change medications that might affect mood, those who intended to seek professional treatment for anxiety or depression during the study period, and those who were unwilling to make the required time commitment. Participants were recruited from the Johns Hopkins Arthritis Center, affiliated Johns Hopkins Rheumatology clinics, by advertisements in the Arthritis Foundation Maryland Chapter newsletter, newspaper advertisements, and via clinical trial recruitment websites, including clinicaltrials.gov. All participants completed baseline testing which included a series of questionnaires, a tender point examination, and a six-minute walk test. At baseline, participants also wore a waist-mounted pedometer (AccuSplit Eagle 1020, Livermore, CA, USA) for seven days (recalibrating it each morning) and recorded their daily step count. These data were used to calculate the mean steps per day as an estimate of physical activity. 
This study was approved by the Institutional Review Board of Johns Hopkins University School of Medicine, and all participants gave written informed consent.\n\nBODY.MATERIALS AND METHODS.STUDY PROCEDURES:\nParticipants were randomized via a coin flip at a 1:1 allocation ratio to each of the two groups. The group meetings for LPA and FME were held on different days to avoid contact between participants assigned to the different conditions. The interventions did not replace usual medical care and the participants had comparable durations of contact time with study staff (Table 1 summarizes the LPA and FME conditions). Table 1 Description of lifestyle physical activity (LPA) and fibromyalgia education (FME) protocols* Component LPA FME Three, two-hour FM education and support meetings NO YES Physical activity intervention delivered in six, one-hour meetings YES NO Wear pedometer and keep a physical activity log YES NO Prescribed physical activity YES NO Approximately six hours of face-to-face contact time YES YES Topics Covered During the Meetings LPA FME \"Physical Activity & FM\" Described FM (symptoms, diagnosis, treatment)/benefits of physical activity/demonstrated moderate-intensity LE/prescribed LE and self-monitoring/identified and addressed barriers to physical activity (Sessions 1 & 2) \"FM: What is it and how is it diagnosed?\" Presented general information on the symptoms of FM and how it is diagnosed; Discussion and social gathering (Session 1) \"How to Keep Moving\" Discussed progress, effect on symptoms, goal setting, problem solving, importance of self-monitoring, provided feedback, and troubleshooting (Sessions 3 & 4) \"What causes FM?\" Presented the latest information on the causes and consequences of FM; Discussion and social gathering (Session 2) \"Now It's Up To You\" Planned for setbacks & developed strategies to overcome them, set long-term goals, self-monitoring over the long-term (Sessions 5 & 6) \"Treating FM\" Discussed of medical and non-medical approaches, including exercise, to treating FM; Discussion and social gathering (Session 3) LPA Accumulate ≥ 30 minutes of self-selected physical activity five to seven days per week FME Did not alter their characteristic level of physical activity *FM, fibromyalgia; FME = fibromyalgia education; LPA, lifestyle physical activity \n\nBODY.MATERIALS AND METHODS.LIFESTYLE PHYSICAL ACTIVITY (LPA):\nParticipants assigned to LPA attended six, 60-minute group sessions over 12 weeks. Delivered by one of the authors (KRF), the LPA protocol was identical to the one developed for our pilot study [12] and was loosely based on Active Living Every Day, a cognitive-behavioral physical activity promotion program developed by Dr. Steven Blair and colleagues at the Cooper Aerobics Center [14]. The LPA protocol addressed FM-specific challenges to becoming more physically active (that is, dealing with pain and fatigue, fear that physical activity will promote a flare) and discussed how LPA successfully addresses them. The goal of the LPA intervention was to increase moderate-intensity physical activity by helping participants find ways to accumulate short bouts of physical activity throughout the day. Participants were asked to gradually work their way up to meeting the Surgeon General's 1996 recommendation of accumulating 30 minutes, above usual activity, of moderate-intensity LPA five to seven days each week. 
At the first session, participants were taught how to perform their LPA at moderate-intensity (that is, you will be breathing heavily but not so heavily that you could not hold a conversation). They were also prescribed 15 minutes, above usual level, of accumulated moderate-intensity LPA five to seven days a week, and asked to increase the daily duration of LPA by five minutes each week. The five-minute increase in the daily duration of LPA was based on findings from our pilot study [12] and was well-tolerated by the majority of participants. Thus, by Week 5, most participants were accumulating 30 minutes, above their usual level, of LPA five to seven days a week. Participants were free to accumulate more than 30 minutes of LPA five to seven days per week, if desired. During subsequent sessions participants were taught self-monitoring of LPA, goal setting, dealing with symptom flares, problem solving strategies to overcome barriers to being more physically active, as well as instruction in finding new ways to integrate short bouts of LPA into their daily lives. Feedback focused on whether participants achieved the prescribed level of LPA, as well as the LPA's influence on symptoms. Participants wore the waist-mounted pedometers to record their steps each day (that is, as an assessment of adherence to LPA). Participants were shown how to use the pedometer, where to place it, and how to record their steps on a step diary form. At the end of each day they recorded their steps on a diary form and zeroed their pedometer so they could record their steps for the next day. They also kept a diary that categorized the types of LPA's they engaged in (for example, garden/outdoor activity, household activity, leisure activity). The step count data and diary entries were collected at each intervention session.\n\nBODY.MATERIALS AND METHODS.FIBROMYALGIA EDUCATION (FME):\nParticipants assigned to the FME group met monthly for three months. FME was a minimal intervention with each session divided into three components: (1) education (45 to 60 minutes), (2) question and answer (20 to 30 minutes), and (3) social support (20 to 30 minutes). Conducted by an experienced FM support group facilitator, these 90- to 120-minute sessions presented information on the symptoms, diagnosis, and treatment of FM. The rationale for FME was to provide education and to control for the effects of being enrolled in a clinical trial and receiving increased attention and social support. Moreover, by providing a minimal intervention, as opposed to a standard care control, we anticipated enhancing retention. The final session of FME presented information on exercise and physical activity, but no specific recommendations or prescription concerning exercise was given. To avoid the possibility that wearing a pedometer would increase their physical activity, FME participants only wore one for the baseline and post-testing assessments.\n\nBODY.MATERIALS AND METHODS.OUTCOMES MEASURES:\nThe following outcome measures were collected at baseline and after the 12-week intervention.\n\nBODY.MATERIALS AND METHODS.PRIMARY OUTCOME.PERCEIVED PHYSICAL FUNCTION:\nPerceived physical function was assessed using the Fibromyalgia Impact Questionnaire (FIQ) total score. The FIQ is a well-validated 10-item questionnaire that measures aspects of physical functioning in patients with FM [15]. The FIQ is scored so that higher scores are indicative of poorer functioning. 
Test-retest reliability ranged from .56 to .95 and construct validity relative to tender points was acceptable (rs = approximately .56) [15].\n\nBODY.MATERIALS AND METHODS.SECONDARY OUTCOMES.PAIN:\nPain was assessed using a 100 mm Visual Analogue Scale (VAS) where participants rated their current level of pain, ranging from 0 (no pain) to 100 (worse pain imaginable).\n\nBODY.MATERIALS AND METHODS.SECONDARY OUTCOMES.FATIGUE:\nThe Fatigue Severity Scale (FSS) [16] was used to assess the current level of fatigue. The FSS is a nine-item questionnaire, answered on a seven-point scale, ranging from strongly agree to strongly disagree. The FSS has good internal consistency (Cronbach's alpha = .81) and correlates with VAS fatigue measures (r = .68) [16].\n\nBODY.MATERIALS AND METHODS.SECONDARY OUTCOMES.DEPRESSION:\nDepression was assessed using the Center for Epidemiologic Studies Depression Scale (CES-D) [17]. The CES-D contains 20-items rated on a four-point Likert scale ranging from 0 (rarely or none of the time) to 3 (most or all of the time), and measures symptoms during the past week. The CES-D is a widely used measure of depressive symptoms and has acceptable internal consistency (.84 to .90) and validity (r = .56 with clinical rating of depression severity) [17].\n\nBODY.MATERIALS AND METHODS.SECONDARY OUTCOMES.TENDERNESS:\nA digital tender point examination, at the 18 sites specified in the American College of Rheumatology FM classification criteria [13], was completed at baseline and after the intervention. Tender point counts are moderately reliable in classifying the tenderness associated with FM (kappa = .75) and inter-rater agreement on the presence of tenderness through digital examination is .51 [13].\n\nBODY.MATERIALS AND METHODS.SECONDARY OUTCOMES.BODY MASS INDEX (BMI: KG/M:\nWeight and height were recorded at each assessment and these variables were used to calculate BMI, an index of body weight adjusted for height.\n\nBODY.MATERIALS AND METHODS.SECONDARY OUTCOMES.SIX-MINUTE WALK TEST:\nThe six-minute walk [18] is a measure of aerobic endurance. For this test, participants walked as far they could in six minutes on a preselected course, with the distance walked recorded. The reliability of the six-minute walk test is excellent (r = .91) and it correlates with the FIQ (r = -.49) and is sensitive to change due to exercise in distance walked (+78 m), and VO2 (+1.8 ml/kg/min) [18]. The six-minute walk test was measured at baseline and at post-testing. We expressed the results as meters per second, an index of gait speed.\n\nBODY.MATERIALS AND METHODS.SAMPLE SIZE AND DATA ANALYSIS:\nThirty-five adults with FM per group were projected to provide a power of 80% to detect a clinically significant 20% difference between the groups on the FIQ score. Ninety-two participants were enrolled to allow for a 25% post-randomization drop out rate. Baseline data between the two groups were compared using t-tests or Chi Square tests. Changes in perceived physical function, depression, pain, tenderness, fatigue, BMI, and the six-minute walk test were compared between the LPA and FME groups using between-subjects t-tests. Because there was a significant difference between the LPA and FME groups on self-reported duration of FM (see Table 2), as a sensitivity analysis, we adjusted scores of the outcome measures for the duration of FM and replicated the analyses. 
We also used regression techniques to adjust the outcome measures on the basis of whether or not the participant reported any change in their ongoing FM treatments, either pharmacologic or non-pharmacologic (0 = no change, 1 = change) during the trial. Because the results did not differ as a function of these adjustments, we present the results for the unadjusted outcome variables. Although data from all subjects were analyzed regardless of whether those subjects complied with or remained in treatment, participants with missing data on a particular variable were excluded from that particular analysis. We also performed an analysis among participants who completed the 12-week trial (completers only). Cohen's d effects size estimates [19] were calculated for each difference on the outcome measures between LPA and FME. Analyses were performed using SPSS software, Version 16. Two-tailed P values of < 0.05 were used to denote statistical significance. Table 2 Baseline characteristics of the randomized participants* Characteristic Lifestyle Physical Activity (LPA) Fibromyalgia Education (FME) P value N (%) of participants 46 (55) 38 (45) Age, years 46.4 ± 11.6 49.0 ± 10.2 0.287 Female, N (%) 43 (94) 38 (100) 0.248 Self-reported race, N (%) 0.789 White 36 (78) 31 (82) Non-White 10 (22) 7 (18) Marital status, N (%) 0.519 Married or cohabitating 24 (52) 24 (63) Widowed, divorced, or separated 12 (27) 11 (29) Single 10 (22) 3 (8) Educational level, N (%) 0.454 Postgraduate 9 (20) 5 (13) College graduate 16 (34) 11 (32) Some college 11 (24) 13 (34) High school 10 (22) 8 (21) Employment status, N (%) 0.923 Employed 20 (43) 18 (47) Unemployed or Disabled 11 (24) 9 (24) Retired or Other 15 (33) 11 (29) Years since diagnosis a 5.9 ± 5.1 9.6 ± 6.8 0.007 Steps per day 3,788 ± 2,135 3,071 ± 1,810 0.139 Taking FM medications, N (%) a 31 (82) 40 (87) 0.498 Taking other medications, N (%) a 33 (87) 44 (96) 0.146 * Data for categorical variables are presented as N' and percentages; data for continuous variables are presented as means ± standard deviation. a Obtained from self-report. FME, fibromyalgia education; LPA, lifestyle physical activity \n\nBODY.RESULTS:\nNine participants withdrew after baseline testing but prior to randomization (see Figure 1). We randomized 46 participants into the LPA intervention and 38 into the FME group in five separate cohorts of approximately 8 to 10 per cohort at six-month intervals. (Because the FME facilitator was unavailable, one smaller cohort (N = 4) was comprised of only LPA participants). Selected baseline characteristics of the 84 participants are shown in Table 2. With the exception of duration of FM, the two groups were comparable on age, race, education, employment status, BMI, and the use of medications for FM or for other medical conditions. Figure 1Participant flow. Seventy-three of the 84 participants (87%) completed the 12-week intervention and post-testing. Drop outs were unrelated to randomized treatment assignment (P = .988) and there were no significant differences on any baseline variables between those who dropped out and those who completed post-testing. There was also no difference in the mean percentage of meetings attended by those randomized to the FME (77%) and LPA (72%) groups (P = .542). As shown in Figure 2, the LPA group significantly increased the mean number of daily steps from 3,788 ± 2,135 at baseline to 5,837 ± 1,770 at the final intervention session (P = .001). 
This represents a 54% increase in the mean number of daily steps over the course of the 12-week intervention. Although walking was the most common form of LPA, other popular forms of LPA included garden/outdoor activity (for example, mowing the lawn, planting flowers, pulling weeds); household activity (for example, cleaning out a closet, vacuuming, doing laundry); and sports activity (for example, cycling, swimming, field hockey). Figure 2Average steps per day (with 95% confidence interval) for the study groups. At baseline, there were no significant differences between the LPA and FME groups on FIQ, pain, fatigue, depression, number of tender points, BMI, and six-minute walk distance (see Table 3). At post-testing, compared to the FME group, the LPA group reported significant reductions in the FIQ score (P = .032; Cohen's d = .53) and in pain (P = .006; Cohen's d = .67). The difference between the LPA and FME groups on the six-minute walk test approached significance (P = .067; Cohen's d = .53). There were no significant differences between the groups on BMI, fatigue, depression, or the number of tender points. The results (data not shown) were not materially altered when the analysis was restricted to only participants who completed the entire 12-week trial (that is, completers only analysis). Table 3 Differences between lifestyle physical activity (LPA) and fibromyalgia education (FME) groups on the primary and secondary study measures a Variable Mean ± SD Mean difference between groups at baseline and at 12-weeks (95% CI) P Value Cohen's d N LPA FME FM impact questionnaire Baseline 84 67.5 ± 12.0 69.7 ± 13.4 2.2 (-3.3 to 7.8) 0.424 .17 Post intervention 73 56.7 ± 20.6 67.0 ± 18.6 10.2 (.91 to 19.6) 0.032 .53 Pain Baseline 84 54.6 ± 25.6 58.9 ± 25.0 4.3 (-7.2 to 14.9) 0.489 .17 Post intervention 73 46.3 ± 24.2 62.4 ± 24.5 16.1 (4.6 to 27.5) 0.006 .67 Fatigue severity scale Baseline 84 51.9 ± 9.3 52.3 ± 9.1 .4 (-3.6 to 4.4) 0.843 .04 Post intervention 73 50.6 ± 9.9 51.4 ± 10.1 .8 (-3.9 to 5.5) 0.727 .07 CES-D* Baseline 84 23.4 ± 8.6 24.0 ± 10 .6 (-3.8 to 4.5) 0.798 .06 Post intervention 73 21.6 ± 9.8 21.2 ± 11.3 .4 (-5.3 to 4.6) 0.888 .04 Number of tender points Baseline 84 16.2 ± 2.3 16.1 ± 3.2 .1 (-1.2 to 1.0) 0.979 .03 Post intervention 72 16.0 ± 2.3 16.8 ± 2.0 .8 (-.35 to 1.9) 0.172 .37 Body mass index (BMI) Baseline 82 31.4 ± 8.4 29.8 ± 6.2 1.6 (-4.7 to 1.7) 0.360 .22 Post intervention 60 31.0 ± 9.0 29.9 ± 6.2 1.1 (-5.3 to 2.9) 0.575 .14 Six-minute walk test, yd/sec Baseline 77 1.08 ± 0.15 1.08 ± 0.19 .0004 (-0.78 to 0.79) 0.991 0 Post intervention 62 1.24 ± 0.28 1.11 ± 0.20 1.21 (-0.25 to 0.008) 0.067 .53 * Center for Epidemiologic Studies Depression Scale; a between-subjects t-tests were used to derive P values; b Cohen's d [ 19 ] effect size estimates ( d = .20 (small effect); d = .50 (medium effect); d = .80 (large effect)). FME, fibromyalgia education; LPA, lifestyle physical activity \n\nBODY.DISCUSSION:\nThe 12-week program, designed to help minimally active adults with FM increase their physical activity by working toward accumulating at least 30 minutes of self-selected moderate-intensity physical activity most days of the week, produced a 54% increase in the average number of steps taken per day. Compared to the FME control group, LPA participants significantly reduced their perceived functional deficits (that is, FIQ score) and pain. 
Moreover, compared to FME, the LPA participants had a greater improvement on the six-minute walk (expressed as gait speed), although this difference failed to reach statistical significance. The magnitude of the post-intervention differences, expressed as percent change from LPA to FME groups, were 18% for the FIQ score and 35% for the pain VAS score. When expressed as Cohen's d effect sizes these are indicative of medium-sized effects. Moreover, the change on the FIQ score exceeds the minimally clinically important difference of 14% recently identified [20], suggesting that increasing physical activity via LPA produces changes on perceived physical function that are of a relevant magnitude. On the other hand, the effect of LPA on the six-minute walk test was not statistically significant (although it produced a Cohen's d of .53). It is important to note that there was a smaller sample size available for this analysis which reduced statistical power. In general our results are in accord with studies investigating the effects of exercise on people with FM [7,8,21,22]. Specifically, the majority of studies suggest that exercise can produce mild-to-moderate benefits on aerobic endurance, strength, functional status, and quality of life [7,23,24]. However, because the exercise interventions investigated vary so markedly in type (for example, water aerobics, traditional aerobics, T'ai Chi, strength training), frequency, intensity, and duration it is difficult to compare results across studies. One thing seems clear from the FM exercise literature, people with FM have difficulty adhering to exercise. Indeed, in FM clinical exercise trials drop-out rates often nearly exceed 30% [for example, [8,24]] suggesting that developing exercise interventions that can be sustained is perhaps as important a goal as finding the particular interventions that produce optimal benefits. The magnitude of the effects of LPA observed in this study on perceived physical function and pain were similar to those obtained in our smaller pilot study [12]. These effects were also generally consistent with other protocols that involve low-to-moderate intensity exercise, interventions that appear to produce the greatest level of compliance in people with FM [for example, [7,8,24]]. It is important to note that even though the LPA group increased their mean daily steps by 54%, it only moved them, as defined by the pedometer-determined physical activity classifications developed by Tudor-Locke and colleagues [25], from the sedentary (<5,000 steps/day) to the low active (5,000 to 7,499 steps/day) category. Indeed, the mean steps per day at post-testing among the LPA participants were comparable to the mean daily steps observed in patients with progressive neuromuscular disease, and are significantly lower than other special populations such as diabetics, patients undergoing breast cancer treatment, and those with joint replacements [26]. This suggests that, even with the initiation of LPA, people with FM progress only to a relatively low level of physical activity. It is important to note, however, that the trajectory of the mean step count continued to rise over the 12 weeks suggesting that, had the trial continued, their physical activity may have continued to increase. It may be that people with FM require more time to eventually reach physical activity recommendations compared to persons with other chronic conditions. This study has limitations and strengths. 
First, to minimize attrition and control for the effects of increased attention, participants randomized to the FME group did receive a minimal intervention. Thus, we cannot determine how LPA compares with a traditional no treatment control group. Second, with the exception of BMI, the tender point count and six-minute walk test, the outcomes described herein were derived from self-report and may be influenced by a variety of factors, including those associated with enrollment in a clinical trial. Third, using pedometers to assess LPA is relatively crude and does not quantify other sorts of physical activities that participants may have engaged in such as cycling or water exercise. Fourth, we did not measure muscle strength during the trial so we are unable to determine whether LPA influences strength. Finally, we excluded persons with FM who had other co-morbid conditions such as uncontrolled hypertension or arthritis which may limit the generalizability of our findings. Strengths of this study include the randomized design, a relatively small drop-out rate (13%), the LPA group's adherence to standardized intervention protocol, and the relatively high rates of attendance to the group sessions.\n\nBODY.CONCLUSIONS:\nThe results of this study suggest that promoting increased physical activity by asking persons with FM to accumulate short bouts of activity throughout the day can markedly increase the average number of steps taken per day and produces clinically relevant reductions in perceived functional deficits and pain. However, the LPA intervention only moved the participants from the sedentary to low physical activity category. This suggests that it is essential to encourage FM patients to increase the duration of their physical activity in ways that do not compromise their ability to sustain the increased level of activity over the intermediate- and long-term.\n\nBODY.ABBREVIATIONS:\nBMI: Body Mass Index; CES-D: Center for Epidemiologic Studies Depression Scale; FIQ: Fibromyalgia Impact Questionnaire; FM: fibromyalgia; FME: Fibromyalgia Education Control Group; FSS: Fatigue Severity Scale; LPA: lifestyle physical activity; SD: standard deviation; VAS: Visual Analogue Scale.\n\nBODY.COMPETING INTERESTS:\nKevin R. Fontaine and Lora Conn declare that they have no competing interests. Daniel J. Clauw has acted as a consultant for Pfizer, Lilly, Forest Laboratories, Cypress Biosciences, Pierre Fabre, UCB, and Wyeth, and has received grant support from Pfizer, Cypress Bioscience, and Forest.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nKF conceived of the study, acquired the funding, participated in the design of the study, the delivery of the intervention, performed the statistical analysis, and drafted the manuscript. LC carried out the recruitment, enrollment, and data collection. DC participated in designing the study and assisted with the drafting of the manuscript. All authors read and approved the final manuscript.\n\n**Question:** Compared to fibromyalgia (FM) education control (FME) group - received information and support what was the result of lifestyle physical activity (LPA) - 30 minutes of self-selected moderate-intensity LPA, five to seven days per week on Fatigue severity scale?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
564
|
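A minimal illustrative sketch, not part of the dataset rows: each prompt asks the model to output exactly one of "significantly increase", "no significant difference" or "significantly decrease", while completions are free text prefixed with "**Answer:**" and may drift slightly in tense or punctuation. The Python snippet below shows one way such answers could be normalized onto the permitted classes before comparison with the Label column; the names `ALLOWED_LABELS` and `normalize_answer` are assumptions made for illustration and do not come from the source.

```python
# A minimal sketch, assuming answers arrive as plain strings; the names
# ALLOWED_LABELS and normalize_answer are illustrative, not part of the dataset.

ALLOWED_LABELS = [
    "significantly increase",
    "no significant difference",
    "significantly decrease",
]


def normalize_answer(text: str) -> str | None:
    """Map a raw answer (e.g. '**Answer:** significantly decrease') onto one of
    the permitted classes, tolerating minor tense or punctuation variants."""
    cleaned = text.replace("**Answer:**", "").strip().lower().rstrip(".")
    for label in ALLOWED_LABELS:
        # Accept a trailing "d" so a variant like "significantly decreased" still maps.
        if cleaned == label or cleaned == label + "d":
            return label
    return None  # unrecognized answer


if __name__ == "__main__":
    print(normalize_answer("**Answer:** significantly decrease\n"))     # significantly decrease
    print(normalize_answer("**Answer:** no significant difference\n"))  # no significant difference
```

Once both the completion text and the Label value pass through the same normalization, an exact-match comparison between them is straightforward.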
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Effects of lifestyle physical activity on perceived symptoms and physical function in adults with fibromyalgia: results of a randomized trial\n\n ABSTRACT.INTRODUCTION:\nAlthough exercise is therapeutic for adults with fibromyalgia (FM), its symptoms often create obstacles that discourage exercise. We evaluated the effects of accumulating at least 30 minutes of self-selected lifestyle physical activity (LPA) on perceived physical function, pain, fatigue, body mass index, depression, tenderness, and the six-minute walk test in adults with FM.\n\nABSTRACT.METHODS:\nEighty-four minimally active adults with FM were randomized to either LPA or a FM education control (FME) group. LPA participants worked toward accumulating 30 minutes of self-selected moderate-intensity LPA, five to seven days per week, while the FME participants received information and support.\n\nABSTRACT.RESULTS:\nSeventy-three of the 84 participants (87%) completed the 12-week trial. The LPA group increased their average daily steps by 54%. Compared to FME, the LPA group reported significantly less perceived functional deficits (P = .032) and less pain (P = .006). There were no differences between the groups on the six-minute walk test (P = .067), fatigue, depression, body mass index, or tenderness.\n\nABSTRACT.CONCLUSIONS:\nAccumulating 30 minutes of LPA throughout the day produces clinically relevant changes in perceived physical function and pain in previously minimally active adults with FM.\n\nABSTRACT.TRIAL REGISTRATION:\nclinicaltrials.gov NCT00383084\n\nBODY.INTRODUCTION:\nFibromyalgia (FM) is a chronic, multidimensional disorder characterized by persistent, widespread body pain and tenderness [1]. FM is estimated to occur in 2% of the U.S. general population, affecting about eight times more women than men [2,3]. Symptoms associated with FM include body pain, fatigue, sleep disruption, headache, memory or concentration problems, mood disturbances, and irritable bowel syndrome [4]. FM often substantially hampers day-to-day functioning and is a primary cause of disability [5]. Even with the recent Food and Drug Administration approval of medications to treat FM, pharmacotherapy generally produces modest and inconsistent benefits on symptoms, functioning, and quality of life [6]. As such, nonpharmacologic treatments, such as exercise and cognitive-behavioral interventions, are recommended to assist people with FM to better manage the array of symptoms and functional deficits [6]. Although exercise has been shown to be beneficial [for example, [7]], the symptoms of FM often create obstacles that deter many from exercising consistently enough to derive benefits [8]. Thus, finding new ways to promote increased physical activity in persons with FM that can be sustained overtime is important. One promising approach is to ask people with FM to increase their lifestyle physical activity (LPA). 
LPA involves working toward meeting the U.S. Surgeon General's 1996 Physical Activity Recommendations of accumulating at least 30 minutes, above one's usual activity, of moderate-intensity physical activity five to seven days a week by integrating short bouts of activity into the day, such as increasing the amount of walking, performing more yard work, using the stairs and so on [9-11]. Although it is unclear whether a continuous 30 minute bout of physical activity is superior to accumulating smaller (10- to 15-minute) bouts of activity with regard to health outcomes, asking people with FM to accumulate small bouts of physical activity throughout the day, as opposed to being active for 30 consecutive minutes, might be less taxing and therefore easier to initiate and sustain over time. In a pilot study [12], we found that small bouts of LPA promoted a 70% increase in physical activity in FM. However, in that small study LPA did not produce significant benefits on pain, fatigue, or perceived physical function compared to controls. As part of an ongoing randomized trial designed to investigate the effects of LPA on ambulatory reports of physical activity, pain and fatigue, as well as measures of fitness, pain threshold and pain tolerance, we also collected questionnaire-based data on these variables. This paper presents the results on questionnaire-based assessments of perceived physical function, pain, fatigue and depression, as well as tenderness and aerobic endurance after 12 weeks of LPA in minimally active adults with FM.\n\nBODY.MATERIALS AND METHODS.PARTICIPANTS:\nParticipants were 92 adults (88 women and 4 men) aged 18 years or older who met American College of Rheumatology diagnostic criteria for FM [13]. The mean (SD) age of participants was 47.7 ± 10.7 years and 80% were white. The mean duration of FM was 7.5 ± 6.2 years. At enrollment, participants were not meeting the US Surgeon General's 1996 recommendation for physical activity [11] for the previous six months (that is, not engaging in either moderate-intensity physical activity for ≥ 30 minutes on ≥ five days per week or vigorous physical activity ≥ three times per week for ≥ 20 minutes each time during the previous month). Persons with acute or chronic medical conditions that could preclude active participation (for example, cancer, coronary artery disease) were excluded from the trial. We also excluded those who intended to change medications that might affect mood, those who intended to seek professional treatment for anxiety or depression during the study period, and those who were unwilling to make the required time commitment. Participants were recruited from the Johns Hopkins Arthritis Center, affiliated Johns Hopkins Rheumatology clinics, by advertisements in the Arthritis Foundation Maryland Chapter newsletter, newspaper advertisements, and via clinical trial recruitment websites, including clinicaltrials.gov. All participants completed baseline testing which included a series of questionnaires, a tender point examination, and a six-minute walk test. At baseline, participants also wore a waist-mounted pedometer (AccuSplit Eagle 1020, Livermore, CA, USA) for seven days (recalibrating it each morning) and recorded their daily step count. These data were used to calculate the mean steps per day as an estimate of physical activity. 
This study was approved by the Institutional Review Board of Johns Hopkins University School of Medicine, and all participants gave written informed consent.\n\nBODY.MATERIALS AND METHODS.STUDY PROCEDURES:\nParticipants were randomized via a coin flip at a 1:1 allocation ratio to each of the two groups. The group meetings for LPA and FME were held on different days to avoid contact between participants assigned to the different conditions. The interventions did not replace usual medical care and the participants had comparable durations of contact time with study staff (Table 1 summarizes the LPA and FME conditions). Table 1 Description of lifestyle physical activity (LPA) and fibromyalgia education (FME) protocols* Component LPA FME Three, two-hour FM education and support meetings NO YES Physical activity intervention delivered in six, one-hour meetings YES NO Wear pedometer and keep a physical activity log YES NO Prescribed physical activity YES NO Approximately six hours of face-to-face contact time YES YES Topics Covered During the Meetings LPA FME \"Physical Activity & FM\" Described FM (symptoms, diagnosis, treatment)/benefits of physical activity/demonstrated moderate-intensity LE/prescribed LE and self-monitoring/identified and addressed barriers to physical activity (Sessions 1 & 2) \"FM: What is it and how is it diagnosed?\" Presented general information on the symptoms of FM and how it is diagnosed; Discussion and social gathering (Session 1) \"How to Keep Moving\" Discussed progress, effect on symptoms, goal setting, problem solving, importance of self-monitoring, provided feedback, and troubleshooting (Sessions 3 & 4) \"What causes FM?\" Presented the latest information on the causes and consequences of FM; Discussion and social gathering (Session 2) \"Now It's Up To You\" Planned for setbacks & developed strategies to overcome them, set long-term goals, self-monitoring over the long-term (Sessions 5 & 6) \"Treating FM\" Discussed of medical and non-medical approaches, including exercise, to treating FM; Discussion and social gathering (Session 3) LPA Accumulate ≥ 30 minutes of self-selected physical activity five to seven days per week FME Did not alter their characteristic level of physical activity *FM, fibromyalgia; FME = fibromyalgia education; LPA, lifestyle physical activity \n\nBODY.MATERIALS AND METHODS.LIFESTYLE PHYSICAL ACTIVITY (LPA):\nParticipants assigned to LPA attended six, 60-minute group sessions over 12 weeks. Delivered by one of the authors (KRF), the LPA protocol was identical to the one developed for our pilot study [12] and was loosely based on Active Living Every Day, a cognitive-behavioral physical activity promotion program developed by Dr. Steven Blair and colleagues at the Cooper Aerobics Center [14]. The LPA protocol addressed FM-specific challenges to becoming more physically active (that is, dealing with pain and fatigue, fear that physical activity will promote a flare) and discussed how LPA successfully addresses them. The goal of the LPA intervention was to increase moderate-intensity physical activity by helping participants find ways to accumulate short bouts of physical activity throughout the day. Participants were asked to gradually work their way up to meeting the Surgeon General's 1996 recommendation of accumulating 30 minutes, above usual activity, of moderate-intensity LPA five to seven days each week. 
At the first session, participants were taught how to perform their LPA at moderate-intensity (that is, you will be breathing heavily but not so heavily that you could not hold a conversation). They were also prescribed 15 minutes, above usual level, of accumulated moderate-intensity LPA five to seven days a week, and asked to increase the daily duration of LPA by five minutes each week. The five-minute increase in the daily duration of LPA was based on findings from our pilot study [12] and was well-tolerated by the majority of participants. Thus, by Week 5, most participants were accumulating 30 minutes, above their usual level, of LPA five to seven days a week. Participants were free to accumulate more than 30 minutes of LPA five to seven days per week, if desired. During subsequent sessions participants were taught self-monitoring of LPA, goal setting, dealing with symptom flares, problem solving strategies to overcome barriers to being more physically active, as well as instruction in finding new ways to integrate short bouts of LPA into their daily lives. Feedback focused on whether participants achieved the prescribed level of LPA, as well as the LPA's influence on symptoms. Participants wore the waist-mounted pedometers to record their steps each day (that is, as an assessment of adherence to LPA). Participants were shown how to use the pedometer, where to place it, and how to record their steps on a step diary form. At the end of each day they recorded their steps on a diary form and zeroed their pedometer so they could record their steps for the next day. They also kept a diary that categorized the types of LPA's they engaged in (for example, garden/outdoor activity, household activity, leisure activity). The step count data and diary entries were collected at each intervention session.\n\nBODY.MATERIALS AND METHODS.FIBROMYALGIA EDUCATION (FME):\nParticipants assigned to the FME group met monthly for three months. FME was a minimal intervention with each session divided into three components: (1) education (45 to 60 minutes), (2) question and answer (20 to 30 minutes), and (3) social support (20 to 30 minutes). Conducted by an experienced FM support group facilitator, these 90- to 120-minute sessions presented information on the symptoms, diagnosis, and treatment of FM. The rationale for FME was to provide education and to control for the effects of being enrolled in a clinical trial and receiving increased attention and social support. Moreover, by providing a minimal intervention, as opposed to a standard care control, we anticipated enhancing retention. The final session of FME presented information on exercise and physical activity, but no specific recommendations or prescription concerning exercise was given. To avoid the possibility that wearing a pedometer would increase their physical activity, FME participants only wore one for the baseline and post-testing assessments.\n\nBODY.MATERIALS AND METHODS.OUTCOMES MEASURES:\nThe following outcome measures were collected at baseline and after the 12-week intervention.\n\nBODY.MATERIALS AND METHODS.PRIMARY OUTCOME.PERCEIVED PHYSICAL FUNCTION:\nPerceived physical function was assessed using the Fibromyalgia Impact Questionnaire (FIQ) total score. The FIQ is a well-validated 10-item questionnaire that measures aspects of physical functioning in patients with FM [15]. The FIQ is scored so that higher scores are indicative of poorer functioning. 
Test-retest reliability ranged from .56 to .95 and construct validity relative to tender points was acceptable (rs = approximately .56) [15].\n\nBODY.MATERIALS AND METHODS.SECONDARY OUTCOMES.PAIN:\nPain was assessed using a 100 mm Visual Analogue Scale (VAS) where participants rated their current level of pain, ranging from 0 (no pain) to 100 (worst pain imaginable).\n\nBODY.MATERIALS AND METHODS.SECONDARY OUTCOMES.FATIGUE:\nThe Fatigue Severity Scale (FSS) [16] was used to assess the current level of fatigue. The FSS is a nine-item questionnaire, answered on a seven-point scale, ranging from strongly agree to strongly disagree. The FSS has good internal consistency (Cronbach's alpha = .81) and correlates with VAS fatigue measures (r = .68) [16].\n\nBODY.MATERIALS AND METHODS.SECONDARY OUTCOMES.DEPRESSION:\nDepression was assessed using the Center for Epidemiologic Studies Depression Scale (CES-D) [17]. The CES-D contains 20 items rated on a four-point Likert scale ranging from 0 (rarely or none of the time) to 3 (most or all of the time), and measures symptoms during the past week. The CES-D is a widely used measure of depressive symptoms and has acceptable internal consistency (.84 to .90) and validity (r = .56 with clinical rating of depression severity) [17].\n\nBODY.MATERIALS AND METHODS.SECONDARY OUTCOMES.TENDERNESS:\nA digital tender point examination, at the 18 sites specified in the American College of Rheumatology FM classification criteria [13], was completed at baseline and after the intervention. Tender point counts are moderately reliable in classifying the tenderness associated with FM (kappa = .75) and inter-rater agreement on the presence of tenderness through digital examination is .51 [13].\n\nBODY.MATERIALS AND METHODS.SECONDARY OUTCOMES.BODY MASS INDEX (BMI: KG/M²):\nWeight and height were recorded at each assessment and these variables were used to calculate BMI, an index of body weight adjusted for height.\n\nBODY.MATERIALS AND METHODS.SECONDARY OUTCOMES.SIX-MINUTE WALK TEST:\nThe six-minute walk [18] is a measure of aerobic endurance. For this test, participants walked as far as they could in six minutes on a preselected course, with the distance walked recorded. The reliability of the six-minute walk test is excellent (r = .91) and it correlates with the FIQ (r = -.49) and is sensitive to change due to exercise in distance walked (+78 m), and VO2 (+1.8 ml/kg/min) [18]. The six-minute walk test was measured at baseline and at post-testing. We expressed the results as meters per second, an index of gait speed.\n\nBODY.MATERIALS AND METHODS.SAMPLE SIZE AND DATA ANALYSIS:\nThirty-five adults with FM per group were projected to provide a power of 80% to detect a clinically significant 20% difference between the groups on the FIQ score. Ninety-two participants were enrolled to allow for a 25% post-randomization drop out rate. Baseline data between the two groups were compared using t-tests or Chi Square tests. Changes in perceived physical function, depression, pain, tenderness, fatigue, BMI, and the six-minute walk test were compared between the LPA and FME groups using between-subjects t-tests. Because there was a significant difference between the LPA and FME groups on self-reported duration of FM (see Table 2), as a sensitivity analysis, we adjusted scores of the outcome measures for the duration of FM and replicated the analyses. 
We also used regression techniques to adjust the outcome measures on the basis of whether or not the participant reported any change in their ongoing FM treatments, either pharmacologic or non-pharmacologic (0 = no change, 1 = change) during the trial. Because the results did not differ as a function of these adjustments, we present the results for the unadjusted outcome variables. Although data from all subjects were analyzed regardless of whether those subjects complied with or remained in treatment, participants with missing data on a particular variable were excluded from that particular analysis. We also performed an analysis among participants who completed the 12-week trial (completers only). Cohen's d effects size estimates [19] were calculated for each difference on the outcome measures between LPA and FME. Analyses were performed using SPSS software, Version 16. Two-tailed P values of < 0.05 were used to denote statistical significance. Table 2 Baseline characteristics of the randomized participants* Characteristic Lifestyle Physical Activity (LPA) Fibromyalgia Education (FME) P value N (%) of participants 46 (55) 38 (45) Age, years 46.4 ± 11.6 49.0 ± 10.2 0.287 Female, N (%) 43 (94) 38 (100) 0.248 Self-reported race, N (%) 0.789 White 36 (78) 31 (82) Non-White 10 (22) 7 (18) Marital status, N (%) 0.519 Married or cohabitating 24 (52) 24 (63) Widowed, divorced, or separated 12 (27) 11 (29) Single 10 (22) 3 (8) Educational level, N (%) 0.454 Postgraduate 9 (20) 5 (13) College graduate 16 (34) 11 (32) Some college 11 (24) 13 (34) High school 10 (22) 8 (21) Employment status, N (%) 0.923 Employed 20 (43) 18 (47) Unemployed or Disabled 11 (24) 9 (24) Retired or Other 15 (33) 11 (29) Years since diagnosis a 5.9 ± 5.1 9.6 ± 6.8 0.007 Steps per day 3,788 ± 2,135 3,071 ± 1,810 0.139 Taking FM medications, N (%) a 31 (82) 40 (87) 0.498 Taking other medications, N (%) a 33 (87) 44 (96) 0.146 * Data for categorical variables are presented as N' and percentages; data for continuous variables are presented as means ± standard deviation. a Obtained from self-report. FME, fibromyalgia education; LPA, lifestyle physical activity \n\nBODY.RESULTS:\nNine participants withdrew after baseline testing but prior to randomization (see Figure 1). We randomized 46 participants into the LPA intervention and 38 into the FME group in five separate cohorts of approximately 8 to 10 per cohort at six-month intervals. (Because the FME facilitator was unavailable, one smaller cohort (N = 4) was comprised of only LPA participants). Selected baseline characteristics of the 84 participants are shown in Table 2. With the exception of duration of FM, the two groups were comparable on age, race, education, employment status, BMI, and the use of medications for FM or for other medical conditions. Figure 1Participant flow. Seventy-three of the 84 participants (87%) completed the 12-week intervention and post-testing. Drop outs were unrelated to randomized treatment assignment (P = .988) and there were no significant differences on any baseline variables between those who dropped out and those who completed post-testing. There was also no difference in the mean percentage of meetings attended by those randomized to the FME (77%) and LPA (72%) groups (P = .542). As shown in Figure 2, the LPA group significantly increased the mean number of daily steps from 3,788 ± 2,135 at baseline to 5,837 ± 1,770 at the final intervention session (P = .001). 
This represents a 54% increase in the mean number of daily steps over the course of the 12-week intervention. Although walking was the most common form of LPA, other popular forms of LPA included garden/outdoor activity (for example, mowing the lawn, planting flowers, pulling weeds); household activity (for example, cleaning out a closet, vacuuming, doing laundry); and sports activity (for example, cycling, swimming, field hockey). Figure 2Average steps per day (with 95% confidence interval) for the study groups. At baseline, there were no significant differences between the LPA and FME groups on FIQ, pain, fatigue, depression, number of tender points, BMI, and six-minute walk distance (see Table 3). At post-testing, compared to the FME group, the LPA group reported significant reductions in the FIQ score (P = .032; Cohen's d = .53) and in pain (P = .006; Cohen's d = .67). The difference between the LPA and FME groups on the six-minute walk test approached significance (P = .067; Cohen's d = .53). There were no significant differences between the groups on BMI, fatigue, depression, or the number of tender points. The results (data not shown) were not materially altered when the analysis was restricted to only participants who completed the entire 12-week trial (that is, completers only analysis). Table 3 Differences between lifestyle physical activity (LPA) and fibromyalgia education (FME) groups on the primary and secondary study measures a Variable Mean ± SD Mean difference between groups at baseline and at 12-weeks (95% CI) P Value Cohen's d N LPA FME FM impact questionnaire Baseline 84 67.5 ± 12.0 69.7 ± 13.4 2.2 (-3.3 to 7.8) 0.424 .17 Post intervention 73 56.7 ± 20.6 67.0 ± 18.6 10.2 (.91 to 19.6) 0.032 .53 Pain Baseline 84 54.6 ± 25.6 58.9 ± 25.0 4.3 (-7.2 to 14.9) 0.489 .17 Post intervention 73 46.3 ± 24.2 62.4 ± 24.5 16.1 (4.6 to 27.5) 0.006 .67 Fatigue severity scale Baseline 84 51.9 ± 9.3 52.3 ± 9.1 .4 (-3.6 to 4.4) 0.843 .04 Post intervention 73 50.6 ± 9.9 51.4 ± 10.1 .8 (-3.9 to 5.5) 0.727 .07 CES-D* Baseline 84 23.4 ± 8.6 24.0 ± 10 .6 (-3.8 to 4.5) 0.798 .06 Post intervention 73 21.6 ± 9.8 21.2 ± 11.3 .4 (-5.3 to 4.6) 0.888 .04 Number of tender points Baseline 84 16.2 ± 2.3 16.1 ± 3.2 .1 (-1.2 to 1.0) 0.979 .03 Post intervention 72 16.0 ± 2.3 16.8 ± 2.0 .8 (-.35 to 1.9) 0.172 .37 Body mass index (BMI) Baseline 82 31.4 ± 8.4 29.8 ± 6.2 1.6 (-4.7 to 1.7) 0.360 .22 Post intervention 60 31.0 ± 9.0 29.9 ± 6.2 1.1 (-5.3 to 2.9) 0.575 .14 Six-minute walk test, yd/sec Baseline 77 1.08 ± 0.15 1.08 ± 0.19 .0004 (-0.78 to 0.79) 0.991 0 Post intervention 62 1.24 ± 0.28 1.11 ± 0.20 1.21 (-0.25 to 0.008) 0.067 .53 * Center for Epidemiologic Studies Depression Scale; a between-subjects t-tests were used to derive P values; b Cohen's d [ 19 ] effect size estimates ( d = .20 (small effect); d = .50 (medium effect); d = .80 (large effect)). FME, fibromyalgia education; LPA, lifestyle physical activity \n\nBODY.DISCUSSION:\nThe 12-week program, designed to help minimally active adults with FM increase their physical activity by working toward accumulating at least 30 minutes of self-selected moderate-intensity physical activity most days of the week, produced a 54% increase in the average number of steps taken per day. Compared to the FME control group, LPA participants significantly reduced their perceived functional deficits (that is, FIQ score) and pain. 
Moreover, compared to FME, the LPA participants had a greater improvement on the six-minute walk (expressed as gait speed), although this difference failed to reach statistical significance. The magnitude of the post-intervention differences, expressed as percent change from LPA to FME groups, were 18% for the FIQ score and 35% for the pain VAS score. When expressed as Cohen's d effect sizes these are indicative of medium-sized effects. Moreover, the change on the FIQ score exceeds the minimally clinically important difference of 14% recently identified [20], suggesting that increasing physical activity via LPA produces changes on perceived physical function that are of a relevant magnitude. On the other hand, the effect of LPA on the six-minute walk test was not statistically significant (although it produced a Cohen's d of .53). It is important to note that there was a smaller sample size available for this analysis which reduced statistical power. In general our results are in accord with studies investigating the effects of exercise on people with FM [7,8,21,22]. Specifically, the majority of studies suggest that exercise can produce mild-to-moderate benefits on aerobic endurance, strength, functional status, and quality of life [7,23,24]. However, because the exercise interventions investigated vary so markedly in type (for example, water aerobics, traditional aerobics, T'ai Chi, strength training), frequency, intensity, and duration it is difficult to compare results across studies. One thing seems clear from the FM exercise literature, people with FM have difficulty adhering to exercise. Indeed, in FM clinical exercise trials drop-out rates often nearly exceed 30% [for example, [8,24]] suggesting that developing exercise interventions that can be sustained is perhaps as important a goal as finding the particular interventions that produce optimal benefits. The magnitude of the effects of LPA observed in this study on perceived physical function and pain were similar to those obtained in our smaller pilot study [12]. These effects were also generally consistent with other protocols that involve low-to-moderate intensity exercise, interventions that appear to produce the greatest level of compliance in people with FM [for example, [7,8,24]]. It is important to note that even though the LPA group increased their mean daily steps by 54%, it only moved them, as defined by the pedometer-determined physical activity classifications developed by Tudor-Locke and colleagues [25], from the sedentary (<5,000 steps/day) to the low active (5,000 to 7,499 steps/day) category. Indeed, the mean steps per day at post-testing among the LPA participants were comparable to the mean daily steps observed in patients with progressive neuromuscular disease, and are significantly lower than other special populations such as diabetics, patients undergoing breast cancer treatment, and those with joint replacements [26]. This suggests that, even with the initiation of LPA, people with FM progress only to a relatively low level of physical activity. It is important to note, however, that the trajectory of the mean step count continued to rise over the 12 weeks suggesting that, had the trial continued, their physical activity may have continued to increase. It may be that people with FM require more time to eventually reach physical activity recommendations compared to persons with other chronic conditions. This study has limitations and strengths. 
First, to minimize attrition and control for the effects of increased attention, participants randomized to the FME group did receive a minimal intervention. Thus, we cannot determine how LPA compares with a traditional no treatment control group. Second, with the exception of BMI, the tender point count and six-minute walk test, the outcomes described herein were derived from self-report and may be influenced by a variety of factors, including those associated with enrollment in a clinical trial. Third, using pedometers to assess LPA is relatively crude and does not quantify other sorts of physical activities that participants may have engaged in such as cycling or water exercise. Fourth, we did not measure muscle strength during the trial so we are unable to determine whether LPA influences strength. Finally, we excluded persons with FM who had other co-morbid conditions such as uncontrolled hypertension or arthritis which may limit the generalizability of our findings. Strengths of this study include the randomized design, a relatively small drop-out rate (13%), the LPA group's adherence to standardized intervention protocol, and the relatively high rates of attendance to the group sessions.\n\nBODY.CONCLUSIONS:\nThe results of this study suggest that promoting increased physical activity by asking persons with FM to accumulate short bouts of activity throughout the day can markedly increase the average number of steps taken per day and produces clinically relevant reductions in perceived functional deficits and pain. However, the LPA intervention only moved the participants from the sedentary to low physical activity category. This suggests that it is essential to encourage FM patients to increase the duration of their physical activity in ways that do not compromise their ability to sustain the increased level of activity over the intermediate- and long-term.\n\nBODY.ABBREVIATIONS:\nBMI: Body Mass Index; CES-D: Center for Epidemiologic Studies Depression Scale; FIQ: Fibromyalgia Impact Questionnaire; FM: fibromyalgia; FME: Fibromyalgia Education Control Group; FSS: Fatigue Severity Scale; LPA: lifestyle physical activity; SD: standard deviation; VAS: Visual Analogue Scale.\n\nBODY.COMPETING INTERESTS:\nKevin R. Fontaine and Lora Conn declare that they have no competing interests. Daniel J. Clauw has acted as a consultant for Pfizer, Lilly, Forest Laboratories, Cypress Biosciences, Pierre Fabre, UCB, and Wyeth, and has received grant support from Pfizer, Cypress Bioscience, and Forest.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nKF conceived of the study, acquired the funding, participated in the design of the study, the delivery of the intervention, performed the statistical analysis, and drafted the manuscript. LC carried out the recruitment, enrollment, and data collection. DC participated in designing the study and assisted with the drafting of the manuscript. All authors read and approved the final manuscript.\n\n**Question:** Compared to fibromyalgia (FM) education control (FME) group - received information and support what was the result of lifestyle physical activity (LPA) - 30 minutes of self-selected moderate-intensity LPA, five to seven days per week on Body mass index (BMI)?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
565
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Effects of lifestyle physical activity on perceived symptoms and physical function in adults with fibromyalgia: results of a randomized trial\n\n ABSTRACT.INTRODUCTION:\nAlthough exercise is therapeutic for adults with fibromyalgia (FM), its symptoms often create obstacles that discourage exercise. We evaluated the effects of accumulating at least 30 minutes of self-selected lifestyle physical activity (LPA) on perceived physical function, pain, fatigue, body mass index, depression, tenderness, and the six-minute walk test in adults with FM.\n\nABSTRACT.METHODS:\nEighty-four minimally active adults with FM were randomized to either LPA or a FM education control (FME) group. LPA participants worked toward accumulating 30 minutes of self-selected moderate-intensity LPA, five to seven days per week, while the FME participants received information and support.\n\nABSTRACT.RESULTS:\nSeventy-three of the 84 participants (87%) completed the 12-week trial. The LPA group increased their average daily steps by 54%. Compared to FME, the LPA group reported significantly less perceived functional deficits (P = .032) and less pain (P = .006). There were no differences between the groups on the six-minute walk test (P = .067), fatigue, depression, body mass index, or tenderness.\n\nABSTRACT.CONCLUSIONS:\nAccumulating 30 minutes of LPA throughout the day produces clinically relevant changes in perceived physical function and pain in previously minimally active adults with FM.\n\nABSTRACT.TRIAL REGISTRATION:\nclinicaltrials.gov NCT00383084\n\nBODY.INTRODUCTION:\nFibromyalgia (FM) is a chronic, multidimensional disorder characterized by persistent, widespread body pain and tenderness [1]. FM is estimated to occur in 2% of the U.S. general population, affecting about eight times more women than men [2,3]. Symptoms associated with FM include body pain, fatigue, sleep disruption, headache, memory or concentration problems, mood disturbances, and irritable bowel syndrome [4]. FM often substantially hampers day-to-day functioning and is a primary cause of disability [5]. Even with the recent Food and Drug Administration approval of medications to treat FM, pharmacotherapy generally produces modest and inconsistent benefits on symptoms, functioning, and quality of life [6]. As such, nonpharmacologic treatments, such as exercise and cognitive-behavioral interventions, are recommended to assist people with FM to better manage the array of symptoms and functional deficits [6]. Although exercise has been shown to be beneficial [for example, [7]], the symptoms of FM often create obstacles that deter many from exercising consistently enough to derive benefits [8]. Thus, finding new ways to promote increased physical activity in persons with FM that can be sustained overtime is important. One promising approach is to ask people with FM to increase their lifestyle physical activity (LPA). 
LPA involves working toward meeting the U.S. Surgeon General's 1996 Physical Activity Recommendations of accumulating at least 30 minutes, above one's usual activity, of moderate-intensity physical activity five to seven days a week by integrating short bouts of activity into the day, such as increasing the amount of walking, performing more yard work, using the stairs and so on [9-11]. Although it is unclear whether a continuous 30 minute bout of physical activity is superior to accumulating smaller (10- to 15-minute) bouts of activity with regard to health outcomes, asking people with FM to accumulate small bouts of physical activity throughout the day, as opposed to being active for 30 consecutive minutes, might be less taxing and therefore easier to initiate and sustain over time. In a pilot study [12], we found that small bouts of LPA promoted a 70% increase in physical activity in FM. However, in that small study LPA did not produce significant benefits on pain, fatigue, or perceived physical function compared to controls. As part of an ongoing randomized trial designed to investigate the effects of LPA on ambulatory reports of physical activity, pain and fatigue, as well as measures of fitness, pain threshold and pain tolerance, we also collected questionnaire-based data on these variables. This paper presents the results on questionnaire-based assessments of perceived physical function, pain, fatigue and depression, as well as tenderness and aerobic endurance after 12 weeks of LPA in minimally active adults with FM.\n\nBODY.MATERIALS AND METHODS.PARTICIPANTS:\nParticipants were 92 adults (88 women and 4 men) aged 18 years or older who met American College of Rheumatology diagnostic criteria for FM [13]. The mean (SD) age of participants was 47.7 ± 10.7 years and 80% were white. The mean duration of FM was 7.5 ± 6.2 years. At enrollment, participants were not meeting the US Surgeon General's 1996 recommendation for physical activity [11] for the previous six months (that is, not engaging in either moderate-intensity physical activity for ≥ 30 minutes on ≥ five days per week or vigorous physical activity ≥ three times per week for ≥ 20 minutes each time during the previous month). Persons with acute or chronic medical conditions that could preclude active participation (for example, cancer, coronary artery disease) were excluded from the trial. We also excluded those who intended to change medications that might affect mood, those who intended to seek professional treatment for anxiety or depression during the study period, and those who were unwilling to make the required time commitment. Participants were recruited from the Johns Hopkins Arthritis Center, affiliated Johns Hopkins Rheumatology clinics, by advertisements in the Arthritis Foundation Maryland Chapter newsletter, newspaper advertisements, and via clinical trial recruitment websites, including clinicaltrials.gov. All participants completed baseline testing which included a series of questionnaires, a tender point examination, and a six-minute walk test. At baseline, participants also wore a waist-mounted pedometer (AccuSplit Eagle 1020, Livermore, CA, USA) for seven days (recalibrating it each morning) and recorded their daily step count. These data were used to calculate the mean steps per day as an estimate of physical activity. 
This study was approved by the Institutional Review Board of Johns Hopkins University School of Medicine, and all participants gave written informed consent.\n\nBODY.MATERIALS AND METHODS.STUDY PROCEDURES:\nParticipants were randomized via a coin flip at a 1:1 allocation ratio to each of the two groups. The group meetings for LPA and FME were held on different days to avoid contact between participants assigned to the different conditions. The interventions did not replace usual medical care and the participants had comparable durations of contact time with study staff (Table 1 summarizes the LPA and FME conditions). Table 1 Description of lifestyle physical activity (LPA) and fibromyalgia education (FME) protocols* Component LPA FME Three, two-hour FM education and support meetings NO YES Physical activity intervention delivered in six, one-hour meetings YES NO Wear pedometer and keep a physical activity log YES NO Prescribed physical activity YES NO Approximately six hours of face-to-face contact time YES YES Topics Covered During the Meetings LPA FME \"Physical Activity & FM\" Described FM (symptoms, diagnosis, treatment)/benefits of physical activity/demonstrated moderate-intensity LE/prescribed LE and self-monitoring/identified and addressed barriers to physical activity (Sessions 1 & 2) \"FM: What is it and how is it diagnosed?\" Presented general information on the symptoms of FM and how it is diagnosed; Discussion and social gathering (Session 1) \"How to Keep Moving\" Discussed progress, effect on symptoms, goal setting, problem solving, importance of self-monitoring, provided feedback, and troubleshooting (Sessions 3 & 4) \"What causes FM?\" Presented the latest information on the causes and consequences of FM; Discussion and social gathering (Session 2) \"Now It's Up To You\" Planned for setbacks & developed strategies to overcome them, set long-term goals, self-monitoring over the long-term (Sessions 5 & 6) \"Treating FM\" Discussed of medical and non-medical approaches, including exercise, to treating FM; Discussion and social gathering (Session 3) LPA Accumulate ≥ 30 minutes of self-selected physical activity five to seven days per week FME Did not alter their characteristic level of physical activity *FM, fibromyalgia; FME = fibromyalgia education; LPA, lifestyle physical activity \n\nBODY.MATERIALS AND METHODS.LIFESTYLE PHYSICAL ACTIVITY (LPA):\nParticipants assigned to LPA attended six, 60-minute group sessions over 12 weeks. Delivered by one of the authors (KRF), the LPA protocol was identical to the one developed for our pilot study [12] and was loosely based on Active Living Every Day, a cognitive-behavioral physical activity promotion program developed by Dr. Steven Blair and colleagues at the Cooper Aerobics Center [14]. The LPA protocol addressed FM-specific challenges to becoming more physically active (that is, dealing with pain and fatigue, fear that physical activity will promote a flare) and discussed how LPA successfully addresses them. The goal of the LPA intervention was to increase moderate-intensity physical activity by helping participants find ways to accumulate short bouts of physical activity throughout the day. Participants were asked to gradually work their way up to meeting the Surgeon General's 1996 recommendation of accumulating 30 minutes, above usual activity, of moderate-intensity LPA five to seven days each week. 
At the first session, participants were taught how to perform their LPA at moderate-intensity (that is, you will be breathing heavily but not so heavily that you could not hold a conversation). They were also prescribed 15 minutes, above usual level, of accumulated moderate-intensity LPA five to seven days a week, and asked to increase the daily duration of LPA by five minutes each week. The five-minute increase in the daily duration of LPA was based on findings from our pilot study [12] and was well-tolerated by the majority of participants. Thus, by Week 5, most participants were accumulating 30 minutes, above their usual level, of LPA five to seven days a week. Participants were free to accumulate more than 30 minutes of LPA five to seven days per week, if desired. During subsequent sessions participants were taught self-monitoring of LPA, goal setting, dealing with symptom flares, problem solving strategies to overcome barriers to being more physically active, as well as instruction in finding new ways to integrate short bouts of LPA into their daily lives. Feedback focused on whether participants achieved the prescribed level of LPA, as well as the LPA's influence on symptoms. Participants wore the waist-mounted pedometers to record their steps each day (that is, as an assessment of adherence to LPA). Participants were shown how to use the pedometer, where to place it, and how to record their steps on a step diary form. At the end of each day they recorded their steps on a diary form and zeroed their pedometer so they could record their steps for the next day. They also kept a diary that categorized the types of LPA's they engaged in (for example, garden/outdoor activity, household activity, leisure activity). The step count data and diary entries were collected at each intervention session.\n\nBODY.MATERIALS AND METHODS.FIBROMYALGIA EDUCATION (FME):\nParticipants assigned to the FME group met monthly for three months. FME was a minimal intervention with each session divided into three components: (1) education (45 to 60 minutes), (2) question and answer (20 to 30 minutes), and (3) social support (20 to 30 minutes). Conducted by an experienced FM support group facilitator, these 90- to 120-minute sessions presented information on the symptoms, diagnosis, and treatment of FM. The rationale for FME was to provide education and to control for the effects of being enrolled in a clinical trial and receiving increased attention and social support. Moreover, by providing a minimal intervention, as opposed to a standard care control, we anticipated enhancing retention. The final session of FME presented information on exercise and physical activity, but no specific recommendations or prescription concerning exercise was given. To avoid the possibility that wearing a pedometer would increase their physical activity, FME participants only wore one for the baseline and post-testing assessments.\n\nBODY.MATERIALS AND METHODS.OUTCOMES MEASURES:\nThe following outcome measures were collected at baseline and after the 12-week intervention.\n\nBODY.MATERIALS AND METHODS.PRIMARY OUTCOME.PERCEIVED PHYSICAL FUNCTION:\nPerceived physical function was assessed using the Fibromyalgia Impact Questionnaire (FIQ) total score. The FIQ is a well-validated 10-item questionnaire that measures aspects of physical functioning in patients with FM [15]. The FIQ is scored so that higher scores are indicative of poorer functioning. 
Test-retest reliability ranged from .56 to .95 and construct validity relative to tender points was acceptable (rs = approximately .56) [15].\n\nBODY.MATERIALS AND METHODS.SECONDARY OUTCOMES.PAIN:\nPain was assessed using a 100 mm Visual Analogue Scale (VAS) where participants rated their current level of pain, ranging from 0 (no pain) to 100 (worst pain imaginable).\n\nBODY.MATERIALS AND METHODS.SECONDARY OUTCOMES.FATIGUE:\nThe Fatigue Severity Scale (FSS) [16] was used to assess the current level of fatigue. The FSS is a nine-item questionnaire, answered on a seven-point scale, ranging from strongly agree to strongly disagree. The FSS has good internal consistency (Cronbach's alpha = .81) and correlates with VAS fatigue measures (r = .68) [16].\n\nBODY.MATERIALS AND METHODS.SECONDARY OUTCOMES.DEPRESSION:\nDepression was assessed using the Center for Epidemiologic Studies Depression Scale (CES-D) [17]. The CES-D contains 20 items rated on a four-point Likert scale ranging from 0 (rarely or none of the time) to 3 (most or all of the time), and measures symptoms during the past week. The CES-D is a widely used measure of depressive symptoms and has acceptable internal consistency (.84 to .90) and validity (r = .56 with clinical rating of depression severity) [17].\n\nBODY.MATERIALS AND METHODS.SECONDARY OUTCOMES.TENDERNESS:\nA digital tender point examination, at the 18 sites specified in the American College of Rheumatology FM classification criteria [13], was completed at baseline and after the intervention. Tender point counts are moderately reliable in classifying the tenderness associated with FM (kappa = .75) and inter-rater agreement on the presence of tenderness through digital examination is .51 [13].\n\nBODY.MATERIALS AND METHODS.SECONDARY OUTCOMES.BODY MASS INDEX (BMI: KG/M²):\nWeight and height were recorded at each assessment and these variables were used to calculate BMI, an index of body weight adjusted for height.\n\nBODY.MATERIALS AND METHODS.SECONDARY OUTCOMES.SIX-MINUTE WALK TEST:\nThe six-minute walk [18] is a measure of aerobic endurance. For this test, participants walked as far as they could in six minutes on a preselected course, with the distance walked recorded. The reliability of the six-minute walk test is excellent (r = .91) and it correlates with the FIQ (r = -.49) and is sensitive to change due to exercise in distance walked (+78 m), and VO2 (+1.8 ml/kg/min) [18]. The six-minute walk test was measured at baseline and at post-testing. We expressed the results as meters per second, an index of gait speed.\n\nBODY.MATERIALS AND METHODS.SAMPLE SIZE AND DATA ANALYSIS:\nThirty-five adults with FM per group were projected to provide a power of 80% to detect a clinically significant 20% difference between the groups on the FIQ score. Ninety-two participants were enrolled to allow for a 25% post-randomization drop out rate. Baseline data between the two groups were compared using t-tests or Chi Square tests. Changes in perceived physical function, depression, pain, tenderness, fatigue, BMI, and the six-minute walk test were compared between the LPA and FME groups using between-subjects t-tests. Because there was a significant difference between the LPA and FME groups on self-reported duration of FM (see Table 2), as a sensitivity analysis, we adjusted scores of the outcome measures for the duration of FM and replicated the analyses. 
We also used regression techniques to adjust the outcome measures on the basis of whether or not the participant reported any change in their ongoing FM treatments, either pharmacologic or non-pharmacologic (0 = no change, 1 = change) during the trial. Because the results did not differ as a function of these adjustments, we present the results for the unadjusted outcome variables. Although data from all subjects were analyzed regardless of whether those subjects complied with or remained in treatment, participants with missing data on a particular variable were excluded from that particular analysis. We also performed an analysis among participants who completed the 12-week trial (completers only). Cohen's d effects size estimates [19] were calculated for each difference on the outcome measures between LPA and FME. Analyses were performed using SPSS software, Version 16. Two-tailed P values of < 0.05 were used to denote statistical significance. Table 2 Baseline characteristics of the randomized participants* Characteristic Lifestyle Physical Activity (LPA) Fibromyalgia Education (FME) P value N (%) of participants 46 (55) 38 (45) Age, years 46.4 ± 11.6 49.0 ± 10.2 0.287 Female, N (%) 43 (94) 38 (100) 0.248 Self-reported race, N (%) 0.789 White 36 (78) 31 (82) Non-White 10 (22) 7 (18) Marital status, N (%) 0.519 Married or cohabitating 24 (52) 24 (63) Widowed, divorced, or separated 12 (27) 11 (29) Single 10 (22) 3 (8) Educational level, N (%) 0.454 Postgraduate 9 (20) 5 (13) College graduate 16 (34) 11 (32) Some college 11 (24) 13 (34) High school 10 (22) 8 (21) Employment status, N (%) 0.923 Employed 20 (43) 18 (47) Unemployed or Disabled 11 (24) 9 (24) Retired or Other 15 (33) 11 (29) Years since diagnosis a 5.9 ± 5.1 9.6 ± 6.8 0.007 Steps per day 3,788 ± 2,135 3,071 ± 1,810 0.139 Taking FM medications, N (%) a 31 (82) 40 (87) 0.498 Taking other medications, N (%) a 33 (87) 44 (96) 0.146 * Data for categorical variables are presented as N' and percentages; data for continuous variables are presented as means ± standard deviation. a Obtained from self-report. FME, fibromyalgia education; LPA, lifestyle physical activity \n\nBODY.RESULTS:\nNine participants withdrew after baseline testing but prior to randomization (see Figure 1). We randomized 46 participants into the LPA intervention and 38 into the FME group in five separate cohorts of approximately 8 to 10 per cohort at six-month intervals. (Because the FME facilitator was unavailable, one smaller cohort (N = 4) was comprised of only LPA participants). Selected baseline characteristics of the 84 participants are shown in Table 2. With the exception of duration of FM, the two groups were comparable on age, race, education, employment status, BMI, and the use of medications for FM or for other medical conditions. Figure 1Participant flow. Seventy-three of the 84 participants (87%) completed the 12-week intervention and post-testing. Drop outs were unrelated to randomized treatment assignment (P = .988) and there were no significant differences on any baseline variables between those who dropped out and those who completed post-testing. There was also no difference in the mean percentage of meetings attended by those randomized to the FME (77%) and LPA (72%) groups (P = .542). As shown in Figure 2, the LPA group significantly increased the mean number of daily steps from 3,788 ± 2,135 at baseline to 5,837 ± 1,770 at the final intervention session (P = .001). 
This represents a 54% increase in the mean number of daily steps over the course of the 12-week intervention. Although walking was the most common form of LPA, other popular forms of LPA included garden/outdoor activity (for example, mowing the lawn, planting flowers, pulling weeds); household activity (for example, cleaning out a closet, vacuuming, doing laundry); and sports activity (for example, cycling, swimming, field hockey). Figure 2Average steps per day (with 95% confidence interval) for the study groups. At baseline, there were no significant differences between the LPA and FME groups on FIQ, pain, fatigue, depression, number of tender points, BMI, and six-minute walk distance (see Table 3). At post-testing, compared to the FME group, the LPA group reported significant reductions in the FIQ score (P = .032; Cohen's d = .53) and in pain (P = .006; Cohen's d = .67). The difference between the LPA and FME groups on the six-minute walk test approached significance (P = .067; Cohen's d = .53). There were no significant differences between the groups on BMI, fatigue, depression, or the number of tender points. The results (data not shown) were not materially altered when the analysis was restricted to only participants who completed the entire 12-week trial (that is, completers only analysis). Table 3 Differences between lifestyle physical activity (LPA) and fibromyalgia education (FME) groups on the primary and secondary study measures a Variable Mean ± SD Mean difference between groups at baseline and at 12-weeks (95% CI) P Value Cohen's d N LPA FME FM impact questionnaire Baseline 84 67.5 ± 12.0 69.7 ± 13.4 2.2 (-3.3 to 7.8) 0.424 .17 Post intervention 73 56.7 ± 20.6 67.0 ± 18.6 10.2 (.91 to 19.6) 0.032 .53 Pain Baseline 84 54.6 ± 25.6 58.9 ± 25.0 4.3 (-7.2 to 14.9) 0.489 .17 Post intervention 73 46.3 ± 24.2 62.4 ± 24.5 16.1 (4.6 to 27.5) 0.006 .67 Fatigue severity scale Baseline 84 51.9 ± 9.3 52.3 ± 9.1 .4 (-3.6 to 4.4) 0.843 .04 Post intervention 73 50.6 ± 9.9 51.4 ± 10.1 .8 (-3.9 to 5.5) 0.727 .07 CES-D* Baseline 84 23.4 ± 8.6 24.0 ± 10 .6 (-3.8 to 4.5) 0.798 .06 Post intervention 73 21.6 ± 9.8 21.2 ± 11.3 .4 (-5.3 to 4.6) 0.888 .04 Number of tender points Baseline 84 16.2 ± 2.3 16.1 ± 3.2 .1 (-1.2 to 1.0) 0.979 .03 Post intervention 72 16.0 ± 2.3 16.8 ± 2.0 .8 (-.35 to 1.9) 0.172 .37 Body mass index (BMI) Baseline 82 31.4 ± 8.4 29.8 ± 6.2 1.6 (-4.7 to 1.7) 0.360 .22 Post intervention 60 31.0 ± 9.0 29.9 ± 6.2 1.1 (-5.3 to 2.9) 0.575 .14 Six-minute walk test, yd/sec Baseline 77 1.08 ± 0.15 1.08 ± 0.19 .0004 (-0.78 to 0.79) 0.991 0 Post intervention 62 1.24 ± 0.28 1.11 ± 0.20 1.21 (-0.25 to 0.008) 0.067 .53 * Center for Epidemiologic Studies Depression Scale; a between-subjects t-tests were used to derive P values; b Cohen's d [ 19 ] effect size estimates ( d = .20 (small effect); d = .50 (medium effect); d = .80 (large effect)). FME, fibromyalgia education; LPA, lifestyle physical activity \n\nBODY.DISCUSSION:\nThe 12-week program, designed to help minimally active adults with FM increase their physical activity by working toward accumulating at least 30 minutes of self-selected moderate-intensity physical activity most days of the week, produced a 54% increase in the average number of steps taken per day. Compared to the FME control group, LPA participants significantly reduced their perceived functional deficits (that is, FIQ score) and pain. 
Moreover, compared to FME, the LPA participants had a greater improvement on the six-minute walk (expressed as gait speed), although this difference failed to reach statistical significance. The magnitude of the post-intervention differences, expressed as percent change from LPA to FME groups, were 18% for the FIQ score and 35% for the pain VAS score. When expressed as Cohen's d effect sizes these are indicative of medium-sized effects. Moreover, the change on the FIQ score exceeds the minimally clinically important difference of 14% recently identified [20], suggesting that increasing physical activity via LPA produces changes on perceived physical function that are of a relevant magnitude. On the other hand, the effect of LPA on the six-minute walk test was not statistically significant (although it produced a Cohen's d of .53). It is important to note that there was a smaller sample size available for this analysis which reduced statistical power. In general our results are in accord with studies investigating the effects of exercise on people with FM [7,8,21,22]. Specifically, the majority of studies suggest that exercise can produce mild-to-moderate benefits on aerobic endurance, strength, functional status, and quality of life [7,23,24]. However, because the exercise interventions investigated vary so markedly in type (for example, water aerobics, traditional aerobics, T'ai Chi, strength training), frequency, intensity, and duration it is difficult to compare results across studies. One thing seems clear from the FM exercise literature, people with FM have difficulty adhering to exercise. Indeed, in FM clinical exercise trials drop-out rates often nearly exceed 30% [for example, [8,24]] suggesting that developing exercise interventions that can be sustained is perhaps as important a goal as finding the particular interventions that produce optimal benefits. The magnitude of the effects of LPA observed in this study on perceived physical function and pain were similar to those obtained in our smaller pilot study [12]. These effects were also generally consistent with other protocols that involve low-to-moderate intensity exercise, interventions that appear to produce the greatest level of compliance in people with FM [for example, [7,8,24]]. It is important to note that even though the LPA group increased their mean daily steps by 54%, it only moved them, as defined by the pedometer-determined physical activity classifications developed by Tudor-Locke and colleagues [25], from the sedentary (<5,000 steps/day) to the low active (5,000 to 7,499 steps/day) category. Indeed, the mean steps per day at post-testing among the LPA participants were comparable to the mean daily steps observed in patients with progressive neuromuscular disease, and are significantly lower than other special populations such as diabetics, patients undergoing breast cancer treatment, and those with joint replacements [26]. This suggests that, even with the initiation of LPA, people with FM progress only to a relatively low level of physical activity. It is important to note, however, that the trajectory of the mean step count continued to rise over the 12 weeks suggesting that, had the trial continued, their physical activity may have continued to increase. It may be that people with FM require more time to eventually reach physical activity recommendations compared to persons with other chronic conditions. This study has limitations and strengths. 
First, to minimize attrition and control for the effects of increased attention, participants randomized to the FME group did receive a minimal intervention. Thus, we cannot determine how LPA compares with a traditional no treatment control group. Second, with the exception of BMI, the tender point count and six-minute walk test, the outcomes described herein were derived from self-report and may be influenced by a variety of factors, including those associated with enrollment in a clinical trial. Third, using pedometers to assess LPA is relatively crude and does not quantify other sorts of physical activities that participants may have engaged in such as cycling or water exercise. Fourth, we did not measure muscle strength during the trial so we are unable to determine whether LPA influences strength. Finally, we excluded persons with FM who had other co-morbid conditions such as uncontrolled hypertension or arthritis which may limit the generalizability of our findings. Strengths of this study include the randomized design, a relatively small drop-out rate (13%), the LPA group's adherence to standardized intervention protocol, and the relatively high rates of attendance to the group sessions.\n\nBODY.CONCLUSIONS:\nThe results of this study suggest that promoting increased physical activity by asking persons with FM to accumulate short bouts of activity throughout the day can markedly increase the average number of steps taken per day and produces clinically relevant reductions in perceived functional deficits and pain. However, the LPA intervention only moved the participants from the sedentary to low physical activity category. This suggests that it is essential to encourage FM patients to increase the duration of their physical activity in ways that do not compromise their ability to sustain the increased level of activity over the intermediate- and long-term.\n\nBODY.ABBREVIATIONS:\nBMI: Body Mass Index; CES-D: Center for Epidemiologic Studies Depression Scale; FIQ: Fibromyalgia Impact Questionnaire; FM: fibromyalgia; FME: Fibromyalgia Education Control Group; FSS: Fatigue Severity Scale; LPA: lifestyle physical activity; SD: standard deviation; VAS: Visual Analogue Scale.\n\nBODY.COMPETING INTERESTS:\nKevin R. Fontaine and Lora Conn declare that they have no competing interests. Daniel J. Clauw has acted as a consultant for Pfizer, Lilly, Forest Laboratories, Cypress Biosciences, Pierre Fabre, UCB, and Wyeth, and has received grant support from Pfizer, Cypress Bioscience, and Forest.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nKF conceived of the study, acquired the funding, participated in the design of the study, the delivery of the intervention, performed the statistical analysis, and drafted the manuscript. LC carried out the recruitment, enrollment, and data collection. DC participated in designing the study and assisted with the drafting of the manuscript. All authors read and approved the final manuscript.\n\n**Question:** Compared to fibromyalgia (FM) education control (FME) group - received information and support what was the result of lifestyle physical activity (LPA) - 30 minutes of self-selected moderate-intensity LPA, five to seven days per week on Center for Epidemiologic Studies Depression Scale (CES-D)?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
570
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Effects of 3-Dimensional Lumbar Stabilization Training for Balance in Chronic Hemiplegic Stroke Patients: A Randomized Controlled Trial\n\n ABSTRACT.OBJECTIVE:\nTo investigate the effects of the newly developed Spine Balance 3D system on the balance and gait abilities of hemiplegic stroke patients.\n\nABSTRACT.METHODS:\nTwenty-eight hemiplegic patients with chronic stroke were randomly assigned to an experimental (n=14) or control group (n=14). The experimental and control groups performed balance training by using the newly developed Spine Balance 3D system and the well-known Biodex Balance System 30 minutes per day, three times a week for 7 weeks. The Berg Balance Scale (BBS), 10-m walking test (10mWT), Timed Up and Go Test (TUG), Functional Reach Test (FRT), the Korean version of the Fall Efficacy Scale-International (KFES-I), trunk muscle strength and stability were evaluated before and after 7 weeks of intervention.\n\nABSTRACT.RESULTS:\nThe 10mWT improved significantly (p=0.001) in the experimental group (using the Spine Balance 3D system) but not in the control group, and core muscle strength, which we checked using Spine Balance 3D system evaluation program, improved more in the experimental group as well. The results of the BBS, FRT, TUG, KFES-I, and Biodex Balance System evaluation program improved in both groups after 7 weeks of balance training.\n\nABSTRACT.CONCLUSION:\nWe suggest that the newly-developed Spine Balance 3D system can be a more useful therapeutic tool for gait and dynamic balance rehabilitation in hemiplegic patients than a conventional 2D-based balance training system. A large-scale randomized controlled study is needed to prove the effect of this system.\n\nBODY.INTRODUCTION:\nA decrease in balance control can be caused by various factors, including central nervous system diseases (such as stroke and traumatic brain injury), musculoskeletal disorders (such as back pain or arthritis), and vestibular diseases. Loss of balance control decreases the ability and speed of patients' gait, especially in those with hemiplegia caused by brain damage. Fear of falling in these patients reduces their activity level and makes daily activities of living difficult for these patients. Therefore, improvements in balance control and mechanical movements are important treatment goals [1]. A combination of several factors, including vestibular, visual, somatosensory, and trunk strength, enables the maintenance of postural balance. Therefore, comprehensive, objective treatment methods are needed. Previous studies have reported on the relationship between gait ability, speed, and independence of performing daily activities from multiple angles. Physicians use various treatment methods and exercises, including trampoline training [2], visual feedback training [3], and sling exercise therapy for trunk muscles [456]. 
Along with the active development of rehabilitation equipment, physicians use many types of equipment that are already made in Korea in treatments to improve balance, including the iBalance (CyberMedic, Iksan, Korea) and Space Balance 3D (CyberMedic) systems. However, few studies exist on the actual effectiveness of this equipment, and established criteria to evaluate functional improvements are scarce. This study aimed to investigate whether the Spine Balance three-dimensional (3D) system, which was developed to strengthen the deep muscles of the vertebra via stabilization exercise of the lumbar spine, improves postural balance in patients with hemiplegia performing 3D rotational exercises. We expected that this exercise would increase trunk muscle strength and stabilize movement of the vertebra. In addition, this study aimed to study the improvement in gait and ability to carry out daily activities while proving the positive effects of using the Spine Balance 3D system's program and several balance control tests. The results were compared to those obtained with a Biodex Balance System [7], which is widely used for balance training, to prove its effectiveness.\n\nBODY.MATERIALS AND METHODS.SUBJECTS:\nThis study required patients to meet the following inclusion criteria: (1) patients at 6 months or more after stroke, (2) unilateral stroke (infarction, hemorrhage), (3) ability to understand and perform three-step verbal instructions, and (4) ability to stand independently. Patients with severe abnormalities in their biorhythms who require inpatient treatment and those who were deemed unqualified for the study were excluded. Thirty patients who met the requirements were chosen as subjects. The subjects were randomly divided into two groups of 15 to create experimental and control groups to be respectively treated using Spine Balance 3D (CyberMedic) and Biodex Balance System (Biodex Medical Systems, Shirley, NY, USA). The random distribution of the subjects into the experimental and control groups is based on a random allocation table in the order of registration of the subjects, and the process of patient recruitment took a total of 4 months. We conducted this prospective study at the Chonbuk National University Hospital, and we notified the subjects regarding the content of the study and obtained their written consent before the experiment was conducted. The hospital's board of audit and inspection approved our study.\n\nBODY.MATERIALS AND METHODS.METHOD:\nThe Berg Balance Scale (BBS), 10-m walking test (10mWT), Timed Up and Go (TUG) and Functional Reach Tests (FRT), and trunk strength evaluated using Spine Balance 3D system's evaluation program were used as clinical indicators to evaluate the level of function before the exercise. The Korean version of the Fall Efficacy Scale-International (KFES-I) was recorded using the information from the survey. The patients performed 30-minute exercises, three times a week for 7 weeks (Fig. 1). The Spine Balance 3D system allows balance training to be performed in 3D space detached from the ground surface by using a lift system that locks the hip and ankles to stabilize the body and prevent twisting. Exercises in eight directions (front, back, left, right, and diagonal) are possible in 3D space, and the system can be tilted up to 60° in each direction. The 3-axis sensor on the trunk accurately measures the movement of the patient's trunk and identifies and displays the center of gravity on the monitor (Fig. 2A). 
The experimental group underwent training in eight directions using the Spine Balance 3D system, starting from a 15° tilt and increasing up to 30° depending on the patient's condition. The control group underwent training by using the training mode in the Biodex Balance System. We configured the platform to the static state, and the patients underwent posture maintenance training and center of gravity transferring postural training in eight directions after initially aligning the center of gravity with the middle line of the monitor. We did not restrict any of the groups' activities to balance training, and we advised the subjects to perform their same daily activities as before (Fig. 2B).\n\nBODY.MATERIALS AND METHODS.METHOD.ASSESSMENT METHOD:\nBBS (range, 0–56 scores) and TUG indicated the dynamic balance, Biodex Balance System's evaluation program indicated stability, Spine Balance 3D system's evaluation program indicated trunk muscle strength, and 10mWT indicated gait ability. The FRT indicated static balance, and the KFES-I measured the risk of falling. The patients performed TUG, a test that measures the time it takes a patient to rise from a chair, walk 3 m, turn around, walk back to the chair and sit down, and a stopwatch was used to measure timing. We measured the time it takes a patient to walk a distance of 10 m marked by tapelines when performing the 10mWT, and sufficient space was provided before and after the 10 m region to allow acceleration and deceleration. The forward FRT was performed by measuring the difference between the locations of the 3rd metacarpophalangeal joint of the unaffected side from a normal standing position and from a maximally forward-leaning position, and a 100 cm-long ruler was set horizontally at the acromion level to measure the difference. The patients placed both clavicles perpendicular to the long ruler lines on the wall to avoid error caused by the decrease in ipsilateral muscle strength. We separately measured the values for the affected and unaffected side in the lateral FRT. The lateral FRT measures the differences in the location of the metacarpophalangeal joints after abduction of the arm on the desired side from a natural upright position and from a laterally tilted trunk position. If the strength of the affected arm was not sufficiently strong, we measured the acromion locations instead. The Biodex Balance System and Spine Balance 3D system objectively evaluated the balance control. We used Biodex Balance System's evaluation program to conduct postural stability test with eyes open and closed, along with the limits of the stability test. We conducted the postural stability test by having the subjects maintain a relaxed standing position with eyes open and closed for 10 seconds each. We obtained three measurements from each condition to calculate the overall stability index, which reflected the stability as it approximates 0. We conducted the limits of the stability test on an easy level. We obtained three measurements to calculate the overall direction control score, which reflected a higher ability to control direction as the score decreased. Spine Balance 3D system's blind and open modes were used to evaluate the balance control. The difference between the blind mode and the open mode is defined as the ability to display the actual balance status on the screen by using the sensors attached to the back of the patient. 
The monitor displayed the actual balance status in the open mode whereas the screen turned black in the blind mode to restrict compensation. We increased the tilt in each of the eight directions to 15° and 30° by 2°/s and stopped for 5 seconds at the maximum tilt (15° and 30°). The subjects maintained a certain trunk position, depending on the tilt, and the results were reported as the Balance Position Ratio (BRP), expressed as the percentage of deviation from reference for the direction and tilt measured using the trunk sensor system. A score close to 100% reflects a good trunk muscle strength.\n\nBODY.MATERIALS AND METHODS.METHOD.STATISTICAL ANALYSIS:\nWe performed the statistical analyses using SPSS ver. 18.0 (SPSS Inc., Chicago, IL, USA), and analyzed the demographics of the two groups and pre-treatment evaluations by using an independent t-test. We compared the pre-treatment evaluations with post-treatment evaluations in both groups to determine whether statistical changes were present. We analyzed the measured values before and after the treatment in each group by using Wilcoxon signed-rank tests, and considered only p-values less than 0.05 to be statistically significant.\n\nBODY.RESULTS.GENERAL CHARACTERISTICS OF SUBJECTS:\nThe experimental and control groups comprised 15 subjects each. However, one patient from the experimental group failed due to the aggravation of gout in the ankle joint, and a follow-up loss in the control group occurred due to a MERS coronavirus outbreak. Fourteen subjects in each group completed the clinical experiment. Table 1 shows the distribution of sex, age, average education, duration of stroke, and hemiplegic side in both groups. The demographics in the two groups did not show significant differences. The unaffected lateral FRT showed significantly higher results in the Biodex Balance System training group; whereas the rest of the tests showed no significant differences between the two groups (Table 2).\n\nBODY.RESULTS.TREATMENT EFFECTS:\nTable 3 shows that the experimental group showed significant improvements in all tests—BBS (p=0.001), TUG (p=0.001), forward FRT (p=0.001), affected lateral FRT (p=0.001), and unaffected lateral FRT (p=0.002)—when the values before and after training were compared in each group. The control group also showed significant improvements in the same tests. However, the results for the 10mWT showed only significant improvements in the experimental group (11.38±3.0 to 10.12±2.47 seconds; p=0.01); whereas the control group did not show significant improvements. The improvement levels of the trunk strength (BPR score) for the experimental group using Spine Balance 3D system's evaluation program showed significant improvements at the 15° open mode; whereas the control group did not show any significant improvement. However, both groups showed significant improvements during the experiment under the following three conditions: 15° blind, 30° open, and 30° blind (Table 3). In addition, among the three conditions used in Biodex Balance System's evaluation program, the experimental group showed significant improvements in a postural stability test with eyes open and closed under two conditions while the control group showed significant improvements in a postural stability test under a condition with closed eyes and a limit of stability test. 
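As an aside on the statistical analysis described above (Wilcoxon signed-rank tests on pre- versus post-training values within each group, with p < 0.05 as the threshold), a minimal illustrative sketch is given below. It assumes paired score arrays and uses SciPy in place of SPSS; the function and variable names are illustrative and not taken from the study.

```python
from scipy.stats import wilcoxon

def pre_post_comparison(pre_scores, post_scores, alpha=0.05):
    """Within-group pre- vs post-training comparison, mirroring the study's
    Wilcoxon signed-rank analysis. Inputs are paired clinical scores
    (e.g., BBS) for one group; alpha mirrors the p < 0.05 criterion."""
    statistic, p_value = wilcoxon(pre_scores, post_scores)
    return {"statistic": statistic, "p_value": p_value,
            "significant": p_value < alpha}

# Hypothetical usage (placeholder values, not study data):
# pre_post_comparison(pre_scores=[42, 45, 39, 47], post_scores=[48, 50, 44, 51])
```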
As such, the two groups showed similar levels of improvement in terms of stability (Table 3).\n\nBODY.DISCUSSION:\nThe goal of this study was to investigate the effectiveness of the Spine Balance 3D system in improving gait and balance in patients with hemiplegia. This study confirms that balance training using the Spine Balance 3D system effectively improves dynamic balance, static balance, and gait by improving trunk muscle strength. Treatment using the Spine Balance 3D system showed significant improvements in gait, as measure in the 10mWT, compared to that achieved using the Biodex Balance System, and the Spine Balance 3D system's evaluation program showed that improvements in trunk control ability were relatively superior than those observed for the control group. The postural stability test, conducted using Biodex Balance System's evaluation program, revealed that the two groups showed similar levels of improvement in stability. The trunk stability maintains the overall balance and regulates the technical movement of the lower limbs. Hodges and Richardson [8] report that stabilization of the spine caused by the contraction of abdominal and multifidus muscles plays a role in inducing technical movement in the lower limbs. In addition, Kim et al. [9] report that trunk-activating exercises are important because weakening of the trunk is relevant to functional performance in patients with chronic stroke. Abdominal muscle strength exercises improve gait and balance in stroke patients [10], and improvements in trunk regulation increase dynamic balance, gait speed, and symmetrical movement of the trunk during gait in stroke patients [11]. Subsequently, various core stability exercises have been introduced to strengthen the trunk [12131415], and many types of modern equipment are currently in development, aiming to improve trunk strength through gait training. However, studies evaluating the effects of such equipment are rare. A study conducted 1-minute sit-up tests in a single-leg squat position on athletes after training with the Biodex Balance System and reported a relatively significant improvement in balance compared to the control group. This study proves that treatment using the Biodex Balance System improves stability by fortifying core strength [16]. Treatment using the Biodex Balance System significantly reduces the fear of falling (FES-I score >26) among the elderly, and their knee extensor and flexor isometric strength improves as a secondary gain [17]. Therefore, core muscle and lower extremity strength are expected to increase when training using the Biodex Balance System, and improvements in balance have been confirmed. Our study showed significant improvements not only in trunk muscle strength, but also in BBS, TUG, FRT, and KFES-I. This result coincides with the results from a study by Saeys et al. [18], which reports that trunk reposition error, trunk impairment scale, and TUG results improved in the weight-shifting training group compared to those in the control group. The only difference with the study by Saeys et al. [18] is that our weight-shifting training was conducted using the Biodex Balance System. We expect that the newly-developed Spine Balance 3D systems stabilize and fortify trunk muscles by inducing symmetrical contraction and relaxation of abdominal and multifidus muscles in 3D space when the body is tilted, and studies are currently in progress to determine its effectiveness. 
A recent study confirmed changes in the trunk muscle strength and body composition among elderly subjects after training using the Spine Balance 3D system. The experimental group showed a significant improvement in trunk muscle strength after analyzing the alignment of the system direction and tilt with the trunk sensors by using trunk maintenance ability as a reference for trunk muscle strength [19]. Our study showed a significant improvement in trunk muscle strength in both groups after the experiment when measured using the same method. However, the control group showed significant improvements in trunk muscle strength only under three conditions while the experimental group showed improvements in trunk muscle strength under all four tested conditions, and the experimental group showed a relatively higher improvement in the trunk muscle strength. The tests conducted using Biodex Balance System's evaluation program revealed that both the experimental and the control groups showed significant improvements in two of three conditions with similar magnitude. The experimental group showed significant improvements in a postural stability test with eyes open and closed under two conditions while the control group showed significant improvements in postural stability test under a condition with closed eyes condition and in the limit of stability test. Although each system's learning effect on outcomes cannot be neglected, the treatment effect of the experimental group who used the Spine Balance 3D system for training was confirmed to be superior to that of the control group through a comparative analysis. Compared to the Biodex Balance System, where balance training is conducted horizontally while standing perpendicular to the surface, the Spine Balance 3D system might be more effective because the trunk muscle is activated to maintain the body balance against increasing gravitational force caused by the increase in tilt angle in 3D space. We conclude that both the Biodex Balance System and the Spine Balance 3D system increase core muscle strength. However, the 3D stimulation in the Spine Balance 3D system is more effective to activate the trunk muscles. The dynamic balance improves as shown in the BBS, TUG, FRT, and KFES-I, and the 10mWT, which refers to gait ability, and shows a relatively higher improvement compared to the Biodex Balance System. We expect that the Spine Balance 3D system's superior treatment improves static balance, dynamic balance, and gait in stroke patients. This study has several limitations. First, both the experimental and control groups comprise a small number of subjects of only 14 each. Second, although there were no restrictions for registration, each subject was able to walk independently for 10 m or more, and their BBS scores were over 40, which means that the subjects had relatively mild conditions. Third the long-term effects of the treatment could not be confirmed. In addition, we could not exclude the learning effect for each evaluation system. Finally, the level of the trunk muscle activity could not be directly proven. Therefore, future studies need to include more subjects, and treatment methods should also be confirmed in subjects with more severe conditions. In addition, a method that can directly quantify trunk muscle strength, such as electromyography (EMG) activity, should also be attempted, and the studies should be designed to explore whether the treatment effects are still present months after the experiment. 
In conclusion, the Spine Balance 3D system, a newly developed piece of trunk-stabilizing exercise equipment, and the Biodex Balance System were applied to patients with hemiplegia. Significant improvements in dynamic and static balance were confirmed in both the experimental and control groups, but the 10mWT improved significantly only in the experimental group treated with the Spine Balance 3D system; that is, gait ability also improved significantly, and activation of the trunk muscles was more effective when using the Spine Balance 3D system's program. The results indicate that the Spine Balance 3D system induces activation of the trunk muscles and is an effective treatment for improving balance and gait in patients with hemiplegia.\n\n**Question:** Compared to the Biodex Balance System (Biodex Medical Systems, Shirley, NY, USA), what was the result of Spine Balance 3D (CyberMedic) on the Berg Balance Scale (BBS), the Timed Up and Go Test (TUG), the Functional Reach Test (FRT), and the Korean version of the Fall Efficacy Scale-International (KFES-I)?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
} ] | [ { "role": "assistant", "content": "**Answer:** significantly increased\n" } ] | significantly increased |
575 | [ { "role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: The effect of uphill stride manipulation on race walking gait\n\n ABSTRACT:\nStride length analysis represents an easy method for assessing race walking kinematics. However, the stride parameters emerging from such an analysis have never been used to design a training protocol aimed at increasing stride length. With this aim, we investigated the effects of stride frequency manipulation during three weeks of uphill (2%) training on stride length at iso-efficiency speed. Twelve male race walkers were randomly allocated to one of two training groups: stride frequency manipulation (RWM, n=6) and free stride frequency (RWF, n=6). Results. Kinematic parameters measured before and after the 3-week training in RWM showed increased stride length (4.54%; p<0.0001) and contact time (4.58%; p<0.001); inversely, a decreased stride frequency (4.44%; p<0.0001) and internal work (7.09%; p<0.05) were found. In RWF the effect of the training showed a decrease in stride length (1.18%; p<0.0001) and contact time (<1%; p<0.0001) with respect to baseline conditions and an increased stride frequency and internal work of 1.19% (p<0.0001). These results suggest that using slopes (2%) as RWM could help coaches to provide some training methods that would improve an athlete's performance, through increasing stride length without altering his or her race walking technique or metabolic demands.\n\nBODY.INTRODUCTION:\nOlympic race walking (RW) is an historical competition, even though only a small number of athletes participate; indeed, race walking has been investigated more often in the laboratory than during competition [1]. An increasing interest in race walking competitions can be seen in the research [2–5]. Moreover, numerous factors have been shown to influence RW performance, including metabolic demand [4, 5], stride technique [2, 3, 6] and nutritional factors [7]. Indeed, race walkers require considerable training and preparation [8]. The importance of investigating various critical aspects involved in RW competitions [2, 3], such as energy cost [9], has become evident in recent years. In particular, researchers have taken a strong interest in training studies in order to understand how energetic cost in high level race walkers can be optimized [10, 11]. Within RW races there are variations in the track terrain, in which small differences in the slope that can influence the mechanics of RW are often present [12]. During a longer competition, such as the 50 km race, top level race walkers usually take approximately 52,600 strides [6], and stride length can decrease by 4% between the start and the end of the race [3]. Therefore, increases in stride length could be advantageous, because this would allow for the same speed to have a lower stride frequency and a better mechanical efficiency [13]. 
According to this hypothesis, focusing on stride parameters which change the individual stride frequency [6] is desirable in training for improving the stride length, as demonstrated in marathon runners [14]. Usually, uphill RW is used to improve race walking gait [6, 13], but its chronic effects are unknown. Furthermore, it can be stated that uphill RW alone does not help to improve stride length [6]. Indeed, a recent study has shown that even at low speeds on small slopes (2%), RW energy cost increased by 0.30 J/(kg · m) [15] over the energy cost at 0%, while the stride length decreased by ∼4% at both constant and low speed with respect to the level gradient [6]. Therefore, it may be useful to perform race walking on a slope at iso-efficiency speed (decreasing the velocity) [13], without increasing the metabolic demand, as well as on the level, as demonstrated in running [16], but manipulating stride frequency. In addition, considering that during uphill RW stride length decreases [6], it is important to decrease the speed to allow race walkers to adequately modify stride parameters during uphill RW training. In the current study, we hypothesize that manipulating the stride frequency in uphill training will prove advantageous to stride length during level RW. Therefore, the aim of this study was to investigate the effects of the stride frequency manipulation (same stride length on level gradient) vs. free stride frequency during training on a slope (2%) at iso-efficiency speed [13] in national race walkers.\n\nBODY.MATERIALS AND METHODS.PARTICIPANTS:\nTwelve male race walkers (Senior's category) were selected for this study following the approval of the University Ethical Committee. All the subjects were randomly assigned to one [17] of the two groups: stride frequency manipulation (RWM= 6, age 24.8±7.4 years, height 1.77±0.08 m, weight 60±6.32 kg, body mass index (BMI) 19.14±0.72 kg · m−2) and free stride frequency (RWF= 6, age 24.2±5.7 years, height 1.78±0.05 m, weight 61.85±4.74 kg, BMI 19.44±1.16 kg · m−2). The inclusion criterion was: high skill with more than six years of training (all were ranked at the national level in their category). None of the athletes suffered from nutritional disturbances, none had any musculoskeletal injuries, and no medications or drugs expected to affect their physical performance were taken during the course of this investigation. In order to make the group homogeneous with regard to the training conditions, none of the subjects performed any strenuous endurance activity and/or resistance training outside of their normal endurance training protocol. The diet control was designed to eliminate the risk of any differences in the total consumption of proteins, carbohydrates and of saturated and unsaturated fats; all the athletes lived together and followed the recommendations of the same sports nutritionist. The TCU Institutional Review Board for the use of Human Subjects approved the details of this study, as well as all related informational and consent documentation, before any data collection was performed. After being informed of the procedures, methods, benefits and possible risks involved in the study, each subject reviewed and signed an informed consent form prior to participation in the study, in accordance with the ethical standards.\n\nBODY.MATERIALS AND METHODS.PROCEDURES:\nAll subjects were tested before (double check for the reliability of the measures) and after the training intervention in the Human Performance Laboratory. 
All the tests were carried out in a climate-controlled laboratory: average temperature 23.5°C (min 23°C, max 24°C), between 4:00 p.m. and 7:00 p.m. to control for circadian variation [18]. Riley et al. (2008) reported a high correlation (r=0.93) between over-ground and treadmill walking [19]. All the subjects wore RW shoes (Category A2, 135 g) and performed a standardized 15-min warm-up, consisting of RW at 9 km · h−1, in order to familiarize themselves with the treadmill [20] (Run Race Technogym Run 500, Gambettola Italy [21]). The subjects performed 5 min of dynamic muscular stretching [22] prior to performing the treadmill test for kinematic analysis. The treadmill was calibrated and checked before and after each test, according to the instructions of the manufacturer [20]. Percent grade (%) was expressed as being equal to the tangent [theta] × 100 [20]. The speed at \"0\" level (mean ± SD) for the 5-min duration was set at 1 km · h−1 less than the best mean speed performed by each participant for 10,000 m (IES0) corresponding to 12.83±0.60 km · h−1 [13]. Previous studies have found that this corresponds to ∼50% (estimate) maximal oxygen consumption (VO2max) [23] and requires an energy cost (Cw) of 5.0 J · (kg · m) −1 [9, 23]. Furthermore, according to the previous data [15] the increase in Cw as a result of a level gradient is:Cwi=0.15×slopei(%)+Cw0, where Cw0 is the Cw at a level gradient (0%). As mentioned above, oxygen consumption (VO2) is proportional to the energetic cost and velocity, so for each gradient the velocity (IESi) at which the VO2 was equal to level RW was calculated by taking VO2= [(Cw0 / (21 (J · min−1)) × ((IES0 / 0.06 (m · min−1)))], where 21 (J · min−1) and 0.06 (m · min−1) are constant values. This leads to the equation:IESi(km·h−1)=[(VO2(kJ/min/kg)×21(J/l)×0.06(m/min))/(0.15×slopei(%)+Cw0)].\n\nBODY.MATERIALS AND METHODS.KINEMATIC ANALYSES:\nPre- and post-training two-dimensional (2D) video data of the subjects' RW on the treadmill were collected using a high-speed (210 Hz) camera (Casio Exilim FH20, Japan). In accordance with previous studies [13, 14, 24], considering that the treadmill device was 50 cm high, the camera was positioned on a 1.5 m high tripod standing 6 m from the participant, and was located perpendicular to the plane of motion and the participant's sagittal plane [25] as standard calibration. The film sequences were analyzed off-line using Dartfish 5.5-Pro motion analysis software (Dartfish, Fribourg, CH). The following kinematic variables were studied: (i) contact time (ms), (ii) stride length (m), and (iii) stride frequency (Hz); 100 strides were sampled [26] within the 5-min duration of the test. Stride frequency was freely chosen within the pre and post-test training. Since the velocity of the treadmill was known, both stride length (left/right, SL) and stride frequency (SF) could be calculated. The contact time (CT) was calculated for both the left and the right foot [27]. The CT was defined and calculated as the time between initial contact (rear foot) with the ground and the last frame of contact before toe-off (forefoot). Initial contact and toe-off were visually detected. 
According to previous studies [6, 13, 16, 28], SF was calculated as SF = 1000/CT; alternatively, SL was calculated with the following equation: SL = (speed/3.6)/SF.\n\nBODY.MATERIALS AND METHODS.INTERNAL WORK:\nWe calculated the internal work (WINT) with the Nardello equation [29]: WINT = SF · v · (1 + (DF/(1 − DF))²) · q, where SF is the stride frequency (Hz), v is the speed (m · s−1), DF is the duty factor – i.e. the fraction of the stride period during which each foot is on the ground (%) – and q is a constant of 0.1 referring to the inertial properties of the oscillating limbs.\n\nBODY.MATERIALS AND METHODS.TRAINING ON A SLOPE:\nSlope training was carried out on the treadmill (Run Race Technogym Run 500, Gambettola Italy) over three weeks, with two training sessions per week (Monday and Friday), in accordance with a previous study [14]. All the participants wore RW shoes (Category A2) and performed a standardized 15-min warm-up, consisting of RW at 9 km · h−1 on a treadmill at a level gradient, followed by 3 min of RW on a 2% slope before each set. The RWM group performed the following training procedure on the treadmill: 5 sets of 5-min RW at a 2% gradient at IES (IES2) with a manipulated, individualized stride frequency. Between each set 5 min of active recovery was performed, which consisted of RW at a 0% gradient and a speed corresponding to IES2 minus 1.5 km · h−1. IES2 and stride frequency were 12.08±0.59 km · h−1 and 3.07±0.25 Hz, respectively. The manipulated stride frequency on the slope was predetermined: SF was calculated in order to replicate the same stride length elicited during level RW at IES, in accordance with the training equation [6]. The athletes were requested to count their strides and to replicate the number each minute (visual counter every 30 s on the treadmill display, with an error of one stride). Training equation for the slope: [(speed (km · h−1)/3.6) during the slope / SL (m) during level RW] × 60 (min) [6]. The RWF group performed the following training procedure on the treadmill: 5 sets of 5-min RW at a 2% gradient at IES2 with a freely chosen stride frequency. IES2 and stride frequency were 12.13±0.58 km · h−1 and 2.92±0.15 Hz, respectively. Between each set, 5 min of active recovery was performed, which consisted of RW at a 0% gradient and a speed corresponding to IES2 minus 1.5 km · h−1. To balance the training load, both groups (RWM and RWF) performed the same normal training programme on a level gradient during the training weeks (4 days per week, based on ∼14.5 km per day).\n\nBODY.MATERIALS AND METHODS.STATISTICAL ANALYSIS:\nData are reported as mean ± SD. The kinematic variables (i) contact time, (ii) stride length, (iii) stride frequency and (iv) internal work were analysed using separate two-way ANOVAs with repeated measures and Bonferroni post hoc tests. The effect size (η2) was calculated for all variables between pre- and post-testing. For testing the repeatability of the measures [30], we first calculated the intra-class correlation coefficient (ICC) for each variable measured. The assumption of normality was verified using the Shapiro-Wilk W test. The within factor was time, with two levels (pre- and post-training), and the between factor was training, with two levels (RWM and RWF). Furthermore, a t-test was used to compare pre- and post-training values in the different groups for: (i) contact time, (ii) stride length, and (iii) stride frequency. A pair-wise comparison was then performed when the main effect was significant, and the significance level was set at p<0.05.
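The stride-parameter, internal-work, iso-efficiency-speed and training equations given in the Methods above can be collected into a short numerical sketch. This is illustrative only: the function names are invented, the internal-work term is read as (DF/(1 − DF))² in line with the duty-factor formulation cited, and the iso-efficiency relation uses the simplification IESi = IES0 · Cw0 / (0.15 · slope + Cw0), which follows algebraically from the expressions in the text.

```python
def iso_efficiency_speed(ies_level_kmh, slope_pct, cw_level=5.0):
    """Speed on a slope with the same energy demand as level race walking.
    Uses Cw_i = 0.15 * slope(%) + Cw_0 and the proportionality of VO2 to
    Cw * speed, which reduces to IES_i = IES_0 * Cw_0 / Cw_i."""
    cw_slope = 0.15 * slope_pct + cw_level
    return ies_level_kmh * cw_level / cw_slope

def stride_frequency_hz(contact_time_ms):
    """SF (Hz) as defined in the text: SF = 1000 / CT."""
    return 1000.0 / contact_time_ms

def stride_length_m(speed_kmh, sf_hz):
    """SL (m) as defined in the text: SL = (speed / 3.6) / SF."""
    return (speed_kmh / 3.6) / sf_hz

def internal_work(sf_hz, speed_kmh, duty_factor, q=0.1):
    """Nardello internal-work estimate, reading the equation above as
    W_INT = SF * v * (1 + (DF / (1 - DF))**2) * q, with v in m/s and
    DF as a fraction (0-1)."""
    v = speed_kmh / 3.6
    return sf_hz * v * (1.0 + (duty_factor / (1.0 - duty_factor)) ** 2) * q

def manipulated_cadence_per_min(slope_speed_kmh, level_stride_length_m):
    """Training equation for the slope: strides per minute chosen so that
    the stride length matches the one elicited during level race walking."""
    return (slope_speed_kmh / 3.6) / level_stride_length_m * 60.0

# Rough checks against values reported in the article:
# iso_efficiency_speed(12.83, 2)   -> ~12.1 km/h (reported IES2: ~12.1 km/h)
# stride_frequency_hz(311)         -> ~3.2 Hz    (Table 1 baseline: 3.23 Hz)
# stride_length_m(12.83, 3.2)      -> ~1.11 m    (Table 1 baseline: 1.11 m)
# manipulated_cadence_per_min(12.08, 1.11) / 60 -> ~3.0 Hz (reported: 3.07 Hz)
```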
Statistical analysis was performed using SigmaPlot software 11.0 (Systat Software, Tulsa, OK).\n\nBODY.RESULTS:\nThere were no differences between the two groups at baseline conditions for age, height, weight, training experience or RW velocity on both the level and a slope (2%). The test-retest reliability of this testing procedure was demonstrated through an ICC and standard error of measurements (SEM) for the following variables: stride length (ICC: 0.97–0.98, SEM: 0.03–0.07 m), contact time (ICC: 0.96–0.98, SEM: 9–11 ms) and stride frequency (ICC: 0.95–0.98, SEM: 0.8–0.11 Hz). Repeated measures ANOVA showed significant differences between the two training groups in CT: F(1,10)= 32.856, p<0.0001 (η2= 0.767) and the interaction training type × time F(1,10)= 94.776, p<0.0001 (η2= 0.905). Significant differences were also found for SF: F(1,10)= 24.038, p<0.001 (η2= 0.706) and the interaction training type × time F(1,10)= 71.022, p<0.0001 (η2= 0.877). Similarly, significant differences were found for the SL: F(1,10)= 40.941, p<0.0001 (η2= 0.804) and the interaction training type × time F(1,10)= 120.115, p<0.0001 (η2= 0.923). WINT showed significant differences: F(1,10)= 5.784, p=0.037 (η2= 0.584) and the interaction training type × time F(1,10)= 11.372, p=0.007 (η2= 0.859). With regard to the two training strategies, within the RWM group the CT significantly increased by 4.58% (p<0.001), as did SL (4.54% with p<0.0001) between pre- and post-training (Table 1). SF and WINT (Figure 1) also significantly decreased by 4.44% and 7.09% (p<0.0001 with p=0.03), respectively, within the RWM. Conversely, in RWF the SL and CT decreased by 1.18% with p<0.0001 and <1% with p<0.0001, respectively, between pre- and post-training, while the SF and WINT increased (1.19% with p<0.0001) respectively. FIG. 1Changes in the WINT variable (means and SE) for the stride frequency manipulation (RWM) and free stride frequency groups (RWF) depending on the time (pre- and post-training). \"†\" represents significant (p<0.0001) training × time interaction; \"*\" p<0.05 and \"**\" p<0.01 represent significant differences between pre- and post-training. TABLE 1 Effects of uphill race walking training on step analysis. BASELINE RWM RWF Contact time (ms) 311 ± 16.89 324 ± 22.96* 308 ± 7.56† Step frequency (Hz) 3.23 ± 0.18 3.10 ± 0.23† 3.25 ± 0.08† Stride length (m) 1.11 ± 0.09 1.15 ± 0.12† 1.10 ± 0.07† Note: Kinematic variables in the step frequency manipulation group (RWM) and the free step frequency group (RWF) between post-training and the baseline conditions with “*” p<0.001, or “†” p<0.0001. \n\nBODY.DISCUSSION:\nThe current study shows for the first time the effect of stride frequency manipulation on kinematic parameters during uphill training in national race walkers. The study investigated the effects of two types of uphill RW training – RWM and RWF. The uphill training aimed to reproduce the same stride length that was elicited during RW on a level gradient. Uphill RW speed was therefore slower compared to level RW, 12.11±0.56 km · h−1 and 12.83±0.59 km · h−1, respectively. The previous research that carried out RW training programmes used greater slopes, which produced higher speeds [6]. Such conditions would produce unnatural conditions for race walkers [31, 32]. This would presumably increase the energy cost of RW [15, 33] and lead to improvements in the cardiovascular system. 
Within the current study, the aim was to look at kinematic parameters and maintain a similar metabolic cost; due to this, IES was used [13]. In the present study the reduction in speed [13] allowed for effective stride frequency manipulation. Research suggests that race walkers often unconsciously select a stride frequency which minimizes injury and energy expenditure [8, 23] and is modulated by the central pattern generators [34], as in running [24]. Due to this notion, race walkers within the RWM group found it problematic during the first training session to alter their stride pattern. The RWF group, however, did not have such issues, as stride frequency was freely chosen. The results following six uphill training sessions (over 3 weeks) suggest that the RWM group elicited kinematic improvements. Conversely, the RWF group did not show any significant kinematic improvements. Some researchers suggest that successful running race performances are characterized by increased stride length, increased cadence becomes more important at greater speeds [35, 36] and contact times are shorter. Our results support this approach, and following training the RWM group increased SL by 4.54% with p<0.0001 related to decreased WINT (7.05% p<0.05; Figure 1). During training the RWM group subjects were required to reduce their mechanical pattern in order to reduce their stride frequency (∼5%). Changes in CT were also evident: RWM increased by 4.58% (p<0.001), but only a <1% decrease was found for the RWF group. The decreases in CT may have produced greater peak vertical force [37], thus providing longer stride lengths. This was supported within the RWM group in the current study. The kinematic adaptations elicited by the RWM group were characteristic of better race walkers, as energy efficient performances are. Longer contact times and smaller stride frequencies are associated with energy efficiency [13, 38]. A particular limitation of this study, concerning the low sample size, can be explained considering the kind of discipline involved: in the last 30 years the mean number of subjects studied in the laboratory was very low (male, n=7±4.5; female n=4±2.7 [1]), due to the specificity of the RW technique.\n\nBODY.CONCLUSIONS:\nIn conclusion, this study showed how the stride frequency manipulation during uphill RW training alters subsequent level RW kinematics, and presumably aids efficient RW performance. The increase in SL could provide an advantageous alteration to kinematics. Because of the direct positive relationship between SL and speed, increasing SL could significantly increase the speed (which is the race walker's aim), assuming that the foot does not advance too far ahead of the centre of mass, providing a braking effect [39]. Considering that the RW race (50 km) record is 3 h 32 min 33 s [40], and the stride length decrease is 4% [3], this study could be useful in closing this gap with the training method of increasing athletes' stride length, leading to the first time an athlete could complete a race under 3 h 30 min The same training protocol could also be applied for a middle distance race (20 km), where a decrease of stride length by 2.4 and 2.7% was observed in males and females respectively between the last and first kilometres [12]. An uphill race walking protocol (changing stride frequency) could be useful to diversify athletes' common training protocol without stressing their current metabolic demand. 
In addition, given that several physiological systems are involved during RW, the possible effect of combining different training strategies, shown to be effective in isolation, warrants future studies.\n\n**Question:** Compared to free stride frequency what was the result of stride frequency manipulation on stride length?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
} ] | [ { "role": "assistant", "content": "**Answer:** significantly increased\n" } ] | significantly increased |
579 | [ { "role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Are CT Scans a Satisfactory Substitute for the Follow-Up of RSA Migration Studies of Uncemented Cups? A Comparison of RSA Double Examinations and CT Datasets of 46 Total Hip Arthroplasties\n\n ABSTRACT:\nAs part of the 14-year follow-up of a prospectively randomized radiostereometry (RSA) study on uncemented cup fixation, two pairs of stereo radiographs and a CT scan of 46 hips were compared. Tantalum beads, inserted during the primary operation, were detected in the CT volume and the stereo radiographs and used to produce datasets of 3D coordinates. The limit of agreement between the combined CT and RSA datasets was calculated in the same way as the precision of the double RSA examination. The precision of RSA corresponding to the 99% confidence interval was 1.36°, 1.36°, and 0.60° for X-, Y-, and Z-rotation and 0.40, 0.17, and 0.37 mm for X-, Y-, and Z-translation. The limit of agreement between CT and RSA was 1.51°, 2.17°, and 1.05° for rotation and 0.59, 0.56, and 0.74 mm for translation. The differences between CT and RSA are close to the described normal 99% confidence interval for precision in RSA: 0.3° to 2° for rotation and 0.15 to 0.6 mm for translation. We conclude that measurements using CT and RSA are comparable and that CT can be used for migration studies for longitudinal evaluations of patients with RSA markers.\n\nBODY.1. INTRODUCTION:\nReliable measurement of implant migration enables early detection of an inferiorly performing prosthesis as well as identifying patients in whom aseptic loosening may occur sooner than expected [1, 2]. Radiostereometry (RSA), introduced in 1974 [3], is considered the gold standard for evaluation of implant migration in vivo [4, 5]. In many patients (roughly 10,000 in Sweden alone [6]) tantalum beads were implanted into the bone and attached to or inserted into the prosthesis for assessment of joint replacements with RSA. However, RSA requires calibration cages, radiology facilities with two X-ray machines for simultaneous acquisition of two X-ray images, specialized software, and trained personnel [7]. The number of hospitals capable of this analysis is limited; thus alternative techniques, for following these patients over time, such as three-dimensional (3D) computed tomography (CT), should be considered. Today, CT-scanners are found in almost every hospital and these scanners make it possible to visualize the small tantalum beads (RSA markers) in CT volumes with a clinically acceptable accuracy of 1 mm or less [8–10]. Additionally, the effective radiation dose of CT scans is constantly being lowered and the quality of the scans with regard to metal artifacts is improving [11]. If 3D CT is comparable in precision to RSA, then CT could replace RSA, thus greatly expanding the cohort of patients who could be followed. 
Over the last decade, our group has refined and validated a 3D volumetric data processing tool [12–15], which exploits cohomologous points (landmarks) to enable volume fusion. This tool has been used with 3D CT volumes for studying the position, wear, and migration of orthopedic implants over time. This report explores the possibility of utilizing CT scans of patients with tantalum beads in the acetabulum and orthopedic hip implants, by directly comparing CT derived data to RSA data acquired on the same day. Specifically, we evaluate how similar a dataset of 3D bead positions derived from routine CT scans of patients is to a RSA dataset based upon these same RSA markers. If sufficiently similar, then CT could be used to follow patients from RSA migration studies, for example, to evaluate new prostheses [10].\n\nBODY.2. MATERIALS AND METHODS.2.1. PATIENTS:\nThe data for this study resulted from a 14-year follow-up of total hip arthroplasty [21] whose purpose was to investigate the influence of different cup augmentation on cup stability and liner wear. Sixty-three patients (68 hips: 31 men, median age (range) 56 (36–66) years, median weight (range) 75 (51–102) kg) with primary and secondary hip osteoarthritis (OA) underwent total hip arthroplasty between 1995 and 1997 at the Department of Orthopedics, Umeå University Hospital. Fifty-two hips had primary and 16 had secondary OA with 37 hips in Charnley class A (unilateral OA) and 31 hips in class B (bilateral OA) [22]. A hemispherical porous-coated titanium alloy cup with ethylene oxide sterilized polyethylene liner (Reflection, Smith & Nephew, Memphis, TN) was used in all patients. During surgery, five to nine 0.8 mm tantalum beads were inserted into both the periphery of the polyethylene liner and the acetabular bone as RSA markers. At the 5-year follow-up [23] a high amount of wear of the polyethylene liner, which increases the risk of osteolysis development over time [24], was evident. Therefore, a CT scan was added for the 14-year follow-up to investigate the amount of osteolysis around the cup. This enabled comparing the 3D RSA marker positions (landmarks) derived from CT with those provided by RSA. The institutional review board approved the study (the Central Ethical Review Board of Umeå University Dnr 2010-100-31) and all patients gave their written consent. At the 14-year follow-up 48 hips in 45 patients remained and were included in this study. Reasons for not attending the follow-up were death (8 patients, 9 hips), revision (5 patients, 6 hips), or refusal to attend (5 patients, 5 hips). The median (range) follow-up time was 14 (13–15) years. Clinical outcome and results of the migration measurements were published in a separate paper [21]. Each patient had a CT scan and 2–4 stereo radiograph pair examinations (resulting in 4–8 individual X-rays) on the same day.\n\nBODY.2. MATERIALS AND METHODS.2.2. CT SCANS:\nCT scans (LightSpeed VCT General Electric Medical Systems, Milwaukee, WI) were acquired from spina iliaca anterior superior to a point at least two centimeters distal to the tip of the femoral component with scan parameters: data collection diameter: 500 mm, reconstruction diameter: 400 mm, source to detector distance: 949.08 mm, source to patient distance: 541 mm, peak kilovoltage (kVp): 120 kV, and exposure time: 800 ms. A bone reconstruction algorithm was used resulting in 225 to 366 slices at 0.625 mm increments with a matrix size of 512 × 512 and a pixel resolution of 0.78 × 0.78 mm in the axial (xy)-plane. 
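As a quick consistency check on the reconstruction parameters above, the in-plane pixel size follows directly from the reconstruction diameter and the matrix size; a trivial sketch (names are illustrative):

```python
def in_plane_pixel_size_mm(reconstruction_diameter_mm, matrix_size):
    """Axial pixel size = reconstruction field of view / matrix dimension."""
    return reconstruction_diameter_mm / matrix_size

# 400 mm reconstruction diameter on a 512 x 512 matrix:
# in_plane_pixel_size_mm(400, 512) -> 0.78125 mm, i.e. the stated 0.78 x 0.78 mm
```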
The median (range) effective radiation dose was 6.9 (3.2–21.1) millisievert (mSv) [25]. To facilitate analysis of both CT and RSA derived data, the CT volume for each patient was duplicated and given a new name.\n\nBODY.2. MATERIALS AND METHODS.2.3. RADIOSTEREOMETRY (RSA):\nA ceiling stand (Aristos VX, Siemens, Munich, Germany) was combined with a mobile digital X-ray unit (Mobilett, Siemens, Munich, Germany), both having a resolution of 10 pixels per mm to obtain two simultaneous exposures of the hip joint. Patients were placed supine on the examination table with the planes of the two X-ray tubes angled by approximately 40° to each other combined with a uniplanar calibration cage No. 43 (RSA Biomedical, Umeå, Sweden). Two image pairs (a double examination: 4 X-ray images) were obtained with slightly different patient positions. In some cases an additional one or two extra image pairs were needed until two image pairs with optimal visualization of the tantalum markers on both foci were achieved. The median (range) effective radiation dose per complete examination, calculated using PCXMC version 1.0 [26], was 1.9 (1.9–3.8) mSv for a fully digitized acquisition at 120 kV and 150 mAs.\n\nBODY.2. MATERIALS AND METHODS.2.4. RSA PROCEDURE:\nThe radiographs were analyzed using the UmRSA software 6.0 (RSA Biomedical AB, Umeå, Sweden) by a skilled operator with many years' experience. All tantalum beads (markers) that could be identified on both focuses 1 and 2 images were given a unique marker number. The 3D coordinates (in the standardized RSA cage coordinate system) for the markers were calculated, where the medial-lateral position is described on the x-axis, distal-proximal on the y-axis, and posterior-anterior on the z-axis [27]. The following data were saved for later analysis: focuses 1 and 2 stereo radiograph images (each showing the markers in the periacetabular skeleton and in the polyethylene liner with corresponding marker numbers) and a dataset consisting of marker number and 3D coordinates for this marker. The precision of the current RSA setup was calculated by double examinations of all 48 hips. Markers in both the acetabulum and the rim of the liner were accepted when the mean error of rigid body fitting was ≤0.25 mm with condition number (reflecting how well spread the markers are in 3D) < 90, or ≤0.35 mm with condition number <80, or ≤0.2 mm with condition number ≤130. Markers that did not meet these criteria were automatically excluded assuming instability of the marker or incorrect marking, resulting in 46 RSA double examinations (46 hips, 43 patients) for analysis.\n\nBODY.2. MATERIALS AND METHODS.2.5. DATA PROCESSING:\nThe RSA coordinate system is determined by the reference cage, where normally for the first set of postoperative radiographs at least one set is acquired with the patient carefully oriented relative to the external reference cage [18]. In contrast, the orientation of the patient during the CT scan is unimportant because the volume can be freely rotated in 3D [15]. By designating landmarks in the CT volume (i.e., defining a landmark's 3D coordinates) and transforming the coordinates of the RSA markers from the RSA coordinate system into the CT coordinate system, we were able to treat the RSA dataset and the CT dataset as a double examination; hence we compared the limit of agreement (precision) of this calculation with that from a RSA double examination. For calculating the limit of agreement always the first of the two RSA image pairs was used. 
The analysis of the CT data to produce the 3D CT datasets and the comparison of these datasets with the RSA datasets were both performed using a previously described and validated 3D volume fusion (spatial registration) tool [12, 28, 29]. This semiautomated tool provides landmark-based fusion of two volumes, registering a \"target\" volume to a \"reference\" volume by exploiting cohomologous landmarks in each volume, via a variety of 3D transform modules, ranging from simple rigid body to 3D warping and including user-defined transformations. A technical description can be found in earlier publications [12, 14, 30]. Moreover, use of the CT volumes gives us the opportunity to work directly in three dimensions, while tracking each step graphically.\n\nBODY.2. MATERIALS AND METHODS.2.6. CT-BASED PROCEDURE:\nFirst the tantalum beads were visualized in the CT volume by setting the isosurface level such that voxels attenuating less than metal were not visible. Then, in order to assign landmark numbers consistent with the RSA marker numbers, in a separate display the RSA focuses 1 and 2 stereo radiograph images showing the tantalum beads with their marker numbers were displayed. The CT volumes were manually rotated in 3D so they were visually comparable with the stereo radiographs. Using the computer's pointing device, the tantalum beads in the bone were designated and assigned the same marker numbers as in RSA (Figure 1). When a bead was designated, the fusion tool automatically found a best fit center and recorded this as a 3D landmark (see in particular Figure 2 in [10] which shows that even in the presence of partial volume effects the landmark is visually embedded centrally in the volume of the bead). This created a set of \"CT-based bone landmarks\" in the CT data coordinate system. To utilize the RSA dataset, the fusion tool mapped the RSA cage coordinate system into the CT data coordinate system. This transformation process was previously described and validated to have a median (range) translation error of 0.22 (0.07–0.52) mm, a rotation error of 0.003 (0.001–0.006)°, and a goodness of fit of 0.0010 (0.0001–0.022) per bead [9]. We then imported and transformed the RSA bone marker dataset into a set of \"RSA-based bone landmarks\" in the CT coordinate system. This produced two CT data volumes: one containing bone landmarks generated from 3D CT analysis and the other containing bone landmarks generated from RSA (transformed into the CT coordinate system). The CT volume containing the \"RSA-based bone landmarks\" was used as the \"reference\" volume, while the CT volume containing the \"CT-based bone landmarks\" was used as the \"target\" volume. The fusion tool then registered the target volume with the reference volume using corresponding landmarks. The result was a rigid body transformation that registers the two CT volumes based upon the entire set of \"CT-based bone landmarks\" and \"RSA-based bone landmarks\" using singular value decomposition (SVD) [31]. The distances between each of the original landmarks of the reference volume and the new (transformed) landmarks of the target volume were automatically generated. 
Because the \"RSA-based bone landmarks\" used were derived from the original RSA dataset before the automated process where the RSA software deletes misplaced points or performs any \"trial and error\" interchanging of marker numbers, the set contains both correct points and incorrect points (i.e., those where the coordinates of a marker were derived from two different markers or from a marker and background noise, such as the femoral head or the acetabular shell). Corresponding landmark pairs that were not well matched (i.e., those whose distance is different from the corresponding landmark by ≥2.0 mm) were manually interchanged or excluded by the operator until a reasonable (i.e., the maximum 3D distance error of any landmark pair was ≤2.0 mm) registration between the two volumes based on \"bone landmark\" sets was established. The landmark selection procedure was then repeated for the \"prosthesis\" landmarks instead of the \"bone\" landmarks. This resulted in a corresponding set of \"RSA-based prosthetic landmarks\" in the \"reference\" volume and \"CT-based prosthetic landmarks\" in the \"target\" volume. A rigid body transformation that registered the entire set of \"CT-based prosthetic landmarks\" to the entire set of \"RSA prosthetic landmarks\" was then performed, followed by the verification of the landmark selection procedure. In summary, 6 steps are necessary to calculate the error of the cup position in relation to the acetabulum bone between CT and RSA: (1) assigning bone and prosthetic markers in CT consistent with the marker numbers in RSA, (2) importing landmark coordinates from RSA into a copy of the CT volume, (3) registering the target volume (CT) to the reference volume (RSA) based on the bone landmark sets using a rigid body transformation, (4) transforming the target prosthetic landmark set into the registered coordinate system, (5) rotating and translating both volumes and associated prosthetic landmarks to a standard orientation, and finally (6) computing the rigid body transformation that would move the resulting target prosthetic landmark sets into spatial alignment with the reference set [30]. Steps (1) and (2) were done manually, while steps (3) to (6) were performed automatically. The rotation point for the last rigid body transformation is by default the centroid (the geometric weight-point) of the prosthetic reference landmark set. Using the SVD registration method, the program computes this transformation as a 3 × 3 rotation matrix and a 1 × 3 translation matrix in the CT coordinate system. The main outputs from this process are (1) visual 2D and 3D images of the two volumes after registration of the bone landmark set and (2) numerical data indicating the movement of the prosthesis in six degrees of freedom (rotations about and translations along the x-, y-, and z-axes, where the translations are given for each individual prosthetic landmark with respect to the centroid of the prosthetic landmarks). The error of the measurement is the last registration movement between the rigid body from RSA dataset and CT dataset. The rotation errors are generated by decomposing the rotation matrix into Euler angles, which, although not unique, are computed as a clockwise rotation first around the x-, then the y-, and finally the z-axes as defined by the standardized RSA in the CT coordinate system. 
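The landmark-based rigid-body registration and Euler decomposition described above can be sketched with NumPy. This is a minimal illustration, not the UmRSA or volume-fusion implementation: the Kabsch/SVD solution is a standard formulation, the Euler convention follows the x-, then y-, then z-order described in the text but may differ from the software's sign conventions, and all names are illustrative.

```python
import numpy as np

def rigid_body_fit(reference, target):
    """Least-squares rigid-body registration (Kabsch/SVD) of two (n, 3)
    arrays of cohomologous landmarks. Returns rotation R and translation t
    such that R @ target_point + t approximates the reference point."""
    ref = np.asarray(reference, dtype=float)
    tgt = np.asarray(target, dtype=float)
    ref_c, tgt_c = ref.mean(axis=0), tgt.mean(axis=0)
    H = (tgt - tgt_c).T @ (ref - ref_c)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = ref_c - R @ tgt_c
    return R, t

def euler_xyz_degrees(R):
    """Decompose R into x-, y-, z-rotations (degrees), assuming R = Rz @ Ry @ Rx;
    the actual RSA software may use a different sign or order convention."""
    ry = np.arcsin(-np.clip(R[2, 0], -1.0, 1.0))
    rx = np.arctan2(R[2, 1], R[2, 2])
    rz = np.arctan2(R[1, 0], R[0, 0])
    return np.degrees([rx, ry, rz])

def landmark_errors(reference, target, R, t):
    """Per-landmark residual distances after registration (their mean plays
    the role of the mean error of rigid-body fitting)."""
    moved = (R @ np.asarray(target, dtype=float).T).T + t
    return np.linalg.norm(moved - np.asarray(reference, dtype=float), axis=1)
```

In this sketch, applying the fit first to the bone landmark sets and then to the prosthetic landmark sets corresponds to the six-degrees-of-freedom output described in the text, with `landmark_errors` standing in for the automatically generated per-landmark distances.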
Numerical information, based on the distance between the original landmarks of the stationary body and the new (transformed) locations of the moved body for each landmark, for both the bone and the prosthetic landmarks is automatically generated. These differences are computed for each orthogonal direction, for the axial (xy)-plane and for the volume. If the registrations were perfect, these differences would be zero. For a more detailed description of this method, see the previous study [30].\n\nBODY.2. MATERIALS AND METHODS.2.7. EVALUATION OF ERRORS:\nThe error in the bone and prosthetic landmark sets was expressed as the differences between the RSA and the CT-based coordinates of the markers. In the 3D view of the CT volume, it is easy to determine if the marking of the tantalum beads from RSA is correct. Hence, in contrast to the analysis of the RSA double examinations, we did not set a maximum value for the mean error but rather simply excluded those markers that, on visual inspection, could be seen to be incorrectly marked or not marked at all in the RSA setup. The relative movement of the prosthetic landmark group after the final transformation was expressed in 6 degrees of freedom (Euler angles and translation distances). All of the prostheses were considered to be stable in the bone [21]; therefore, the movement of the prosthesis landmarks between RSA and CT should ideally be zero. The limit of agreement was calculated as mean and median of the differences for both rotation and translation plus and minus two standard deviations [32]. This was acceptable as the data was determined to nearly follow a normal (Gaussian) distribution using histograms, box, density, and quantile-quantile plots [33]. The mean differences ± 1.96 ∗ standard deviation is the 95% confidence interval [34]. Additionally, we calculate the maximum total point motion (MTPM), that is, the length of the 3D vector of the marker that moved the most. Since calculation of the limit of agreement has been performed in several different ways in other studies, we also calculated the results as mean ± 2.01 (t statistic for 46 degrees of freedom) ∗ standard deviation giving the 95% quantile for 46 double examinations [35]. A third way of presenting this data has been frequently used in earlier RSA studies [5, 23, 36, 37] with the 95% confidence limit expressed as the mean of the absolute differences plus 1.96 ∗ standard deviation of the differences (and plus 2.575 ∗ standard deviation for the 99% confidence limit). The density plots shown, which express the frequency distribution between the first and third quantiles of the date, were calculated using Gaussian filter. The statistical program R version 3.1 was used [33].\n\nBODY.3. RESULTS:\nAll markers designated on the radiographs could be visualized in the CT volumes. Further, a consistent finding was that additional markers were clearly identified in the CT volumes, which had not been designated on the stereo radiographs. Only markers that could be clearly identified on both focuses 1 and 2 of the stereo radiographs were used in the analysis. Of the excluded markers, one prosthetic marker that had been correctly designated on radiographs was excluded due to bad marking on CT. This marker was located less than one mm from the cup shell, and information from the cup interacted in the automated process, resulting in off-centering of the landmark. The remaining excluded markers were excluded due to being incorrectly marked on the radiographs. 
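Returning to the error summaries defined in the Evaluation-of-errors subsection above, the different presentations of agreement can be written compactly. The constants (2, 1.96, 2.01, 2.575) are those quoted in the text; the functions are an illustrative sketch, not the study's actual R code.

```python
import numpy as np

def agreement_summaries(differences):
    """Summaries of double-examination differences for one axis
    (e.g., x-rotation), following the presentations described in the text."""
    d = np.asarray(differences, dtype=float)
    mean, sd = d.mean(), d.std(ddof=1)
    abs_mean = np.abs(d).mean()
    return {
        "limit_of_agreement": (mean - 2.0 * sd, mean + 2.0 * sd),
        "ci95_normal": (mean - 1.96 * sd, mean + 1.96 * sd),
        "ci95_t_46df": (mean - 2.01 * sd, mean + 2.01 * sd),
        # precision expressed from the mean absolute difference
        "precision_95": abs_mean + 1.96 * sd,
        "precision_99": abs_mean + 2.575 * sd,
    }

def maximum_total_point_motion(marker_displacements_xyz):
    """MTPM: length of the 3-D displacement vector of the marker that moved most."""
    disp = np.asarray(marker_displacements_xyz, dtype=float)
    return np.linalg.norm(disp, axis=1).max()
```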
All 48 hips could be registered with respect to the tantalum beads in the acetabulum with a median (range) of eight (4–9) landmarks. After registration, a majority of the landmarks showed an overlapping pattern, with the center of all RSA derived landmarks placed inside the volume of the tantalum bead representation in the 3D CT volumes (Figure 1). In two patients, there were only two valid landmarks from the stereographs in the cup, leaving 46 hips for registering the cups and computing relative movement. The remaining cup landmark sets had a median (range) of five (3–9) landmarks. Figures 2(a)–2(c) show the process for identifying markers that had been inappropriately designated as being the same. The errors of individual markers after rigid body registration for both bone and prosthetic landmarks are given in Table 1 both for the RSA double examinations and between CT and RSA. For the RSA double examinations all markers with a mean error above 0.35 mm were excluded in the UmRSA software. Density plots for the landmark errors are given in Figure 3. The data were close to normal. Note that there are some outliers. A subanalysis of landmarks with large errors showed that several of these were \"double balls,\" that is, two tantalum beads simultaneously in close proximity. The RSA images separated these quite effectively, but our automated routine tended to push the two CT designated landmarks together due to interference. This combination gives a pair of landmarks with somewhat higher error after rigid body registration, but the combined effect on the evaluation of relative movement is theoretically negligible, since these errors tend to eliminate each other. Analysis of relative movement, presenting the precision of our setup, is given in Tables 2(a)–2(c). Figure 4 shows the density plots for errors in the rotations and translations.\n\nBODY.4. DISCUSSION:\nFrom the user's standpoint, marking the volumes was easier and faster when using CT than RSA. In addition, in contrast to radiographs, where markers are often shadowed by the prosthesis, we could consistently find all markers in the CT volumes. For this study, we performed the analysis of component movement directly in the program developed for the CT volumes. Theoretically, it would also be possible to reverse the registration process between CT and RSA derived dataset, then importing the CT coordinate dataset into the existing RSA analysis software and performing the subsequent analysis using that software. A key feature of the UmRSA software is its calculation of mean error of rigid body fitting (ME). This reflects the relative motion of individual markers in each segment and the error of digitization between two different examinations and thus is a measure of the stability of the rigid body. In clinical RSA trials, ME values of 0.10–0.25 mm are typical [5] when using commercially available software (e.g., UmRSA 6.0, RSA Biomedical, Umeå, Sweden). Increasing ME decreased the precision of RSA in a laboratory study [38]. At the start of a normal RSA analysis, the user defines an upper limit for acceptable individual landmark error; then points outside of this limit are automatically excluded during subsequent calculations. In this study, we transformed the CT volumes into the RSA derived landmarks and then manually excluded points that were obviously misplaced during the marking of the radiographs (Figure 2). 
Therefore, the mean errors found in this study differ from the standard ME values, because all points that are accepted as being cohomologous are included. In addition, the automated routine in the program that finds the centroid of the tantalum beads in the CT dataset is not optimized for beads in close vicinity to another metal object, as was the case for double markers and markers less than a millimeter from the cup shell. For bone markers or markers placed in the liner this is rarely a problem. When markers are attached directly to the metal of the prosthesis it might be an issue. This can be addressed by changing parameters in the program or by manual designation of the marker's centroid [9]. However, in this study the aim was to evaluate how similar a RSA dataset and a dataset derived from routine CT scans of patients would be, when essentially everything but designating the marker sequence was calculated by the program. Slightly higher MEs were expected and also that these errors would be generated from both input modalities. The MEs of rigid body registration between the RSA and CT volumes in this study are of the same order of magnitude as ME for clinical RSA trials, indicating that, from a standpoint of quality of input data, it would be feasible to continue monitoring patients with tantalum beads. It might be possible to start new migration studies using CT if one lacks a facility for RSA radiographs [16, 29]. As stated in the introduction, the effective radiation dose of CT scans is constantly being lowered and the quality of the scans with regard to metal artifacts is improving [11]. The precision of RSA in a clinical setting, based on double examinations, has been shown to vary between 0.15 and 0.60 mm for translations and between 0.3 and 2° for rotations at a 99% confidence limit [5]. The agreement between the current CT and RSA datasets presented in this study as determined by multiple previously published methods is shown in Tables 2(a)–2(c). From the standpoint of precision, treating the comparison between RSA and CT as a double examination, we find that this is comparable with RSA double examinations (Table 2(a)). Many factors influence the ability to perform RSA analyses in a negative way. However, with CT several of these factors can be avoided, such as the loss of markers because of shadowing by the implant, the varying position of the patient in relation to the cage, and the mismatching of markers in the different focal planes. In the CT volume it was possible to identify more markers than were found in the pairs of RSA examination. Unfortunately, for this study we could not exploit this advantage because all markers not found via RSA were excluded. However, using all the available markers might give a more stable rigid body in the segments (bone or prosthesis) when following patients over time with CT. To achieve the results reported here, using a standard bone reconstruction algorithm for the CT scan was sufficient. This opens the possibility of measuring migration in a precise way when CT examinations are performed because of other reasons on patients with implanted tantalum beads, especially if it is impossible to perform RSA because the patient has moved to a place where RSA acquisition is unavailable. 
This CT derived data could also potentially be used to augment the data available for long term studies of patients by providing datasets at different times between the scheduled n-year follow-up periods, thus providing data for patients who perhaps do not complete the full study. From a standpoint of radiation exposure, the CT examinations used in this study did expose the patient to higher radiation levels than the RSA examinations. However, these CT volumes were primarily obtained to analyze osteolysis around the cup and were not optimized in any way for simply locating the tantalum beads. Several studies on the extremities and of the glenohumeral joint show that large acceptable effective radiation dose reductions can be obtained when imaging tantalum beads using CT without losing image quality [17, 19, 20]. A decade ago, Gurung et al. used a 16-row CT to study effective dose reduction in pelvic scans and showed that they could acquire scans at an effective radiation dose of 2.2 mSv which allowed valuation of the acetabulum and the iliosacral joints according to specific detailed criteria [39]. There is, to our knowledge, no study on dose reduction in patients when imaging tantalum beads in the hip. Theoretically, reasonable imaging of the tantalum beads in patients should be attainable at significant dose reductions, as has been shown in the shoulder and extremities. In a phantom study, where the precision and accuracy of the CT method is evaluated using double RSA examinations and double CT examinations, we have performed scans that, if applied to a patient, would give an effective dose of 0.4 mSv. In another study, this dose is further reduced in a porcine model [11]. Results from these two studies indicate that, under laboratory conditions, the effective radiation dose can be significantly reduced without significant loss of precision and CT's precision is comparable to RSA. In previous publications, we showed that RSA data could be retrospectively registered to and visualized in CT volumes [9, 40] and can be used to follow patients with implanted tantalum beads [10]. In this study we demonstrate that when RSA data of clinical patients is registered to CT data, the agreement is sufficient that longitudinal or double CT exams could be used for evaluation of prosthetic movement.\n\nBODY.5. CONCLUSIONS:\nCT scans can be used for migration measurements in longitudinal evaluations of patients with RSA markers without losing clinically significant precision. The use of CT scans enables following RSA studies started earlier using a normal CT scan, thus avoiding the need for special RSA radiology facilities.\n\n**Question:** Compared to 2–4 stereo radiograph pair examinations what was the result of CT scan on datasets of 3D coordinates?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference |
585 |
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Behavioural recovery after treatment for varicose veins\n\n ABSTRACT.BACKGROUND:\nThe aim of this study was to assess behavioural recovery from the patient's perspective as a prespecified secondary outcome in a multicentre parallel‐group randomized clinical trial comparing ultrasound‐guided foam sclerotherapy (UGFS), endovenous laser ablation (EVLA) and surgery for the treatment of primary varicose veins.\n\nABSTRACT.METHODS:\nParticipants were recruited from 11 UK sites as part of the CLASS trial, a randomized trial of UGFS, EVLA or surgery for varicose veins. Patients were followed up 6 weeks after treatment and asked to complete the Behavioural Recovery After treatment for Varicose Veins (BRAVVO) questionnaire. This is a 15‐item instrument that covers eight activity behaviours (tasks or actions an individual is capable of doing in an idealized situation) and seven participation behaviours (what the individual does in an everyday, real‐world situation) that were identified to be important from the patient's perspective.\n\nABSTRACT.RESULTS:\nA total of 798 participants were recruited. Both UGFS and EVLA resulted in a significantly quicker recovery compared with surgery for 13 of the 15 behaviours assessed. UGFS was superior to EVLA in terms of return to full‐time work (hazard ratio 1·43, 95 per cent c.i. 1·11 to 1·85), looking after children (1·45, 1·04 to 2·02) and walks of short (1·48, 1·19 to 1·84) and longer (1·32, 1·05 to 1·66) duration.\n\nABSTRACT.CONCLUSION:\nBoth UGFS and EVLA resulted in more rapid recovery than surgery, and UGFS was superior to EVLA for one‐quarter of the behaviours assessed. The BRAVVO questionnaire has the potential to provide important meaningful information to patients about their early recovery and what they may expect to be able to achieve after treatment.\n\nBODY.INTRODUCTION:\nMinimally invasive treatments for varicose veins such as ultrasound‐guided foam sclerotherapy (UGFS) and thermal ablation techniques have become widely used alternatives to surgery for the treatment of varicose veins. One of the advantages of these techniques is the reported quicker return to normal activities, particularly following UGFS1, 2, 3. However, it is unclear whether thermal ablation, in particular endovenous laser ablation (EVLA), is also associated with a clinically significant quicker return to normal activities compared with surgery; some studies4, 5 have shown an earlier return and others2, 6, 7, 8 no difference. Until recently, there was no standard means of assessing recovery from the patient's perspective. This led to the use of varying definitions such as return to 'normal activities', 'full activity', 'daily activity' or 'basic physical activities' and/or 'return to work' in previous studies. 
This lack of standardization led the authors to develop a 15‐item questionnaire to assess distinct aspects of normal activities that were identified as important by patients9 – the Behavioural Recovery After treatment for Varicose Veins (BRAVVO) questionnaire. This paper reports behavioural recovery results from a multicentre parallel‐group randomized clinical trial (CLASS, Comparison of LAser, Surgery and foam Sclerotherapy) that compared the clinical efficacy and cost‐effectiveness of three treatment modalities: UGFS, EVLA with delayed foam sclerotherapy to residual varicosities if required, and surgery. Behavioural recovery was one of the prespecified secondary outcomes of the CLASS trial. The clinical and cost‐effectiveness results have been reported elsewhere10, 11.\n\nBODY.METHODS:\nPatients were recruited from 11 centres in the UK between November 2008 and October 2012. This study (ISRCTN51995477) had research ethics committee and Medicines and Healthcare products Regulatory Authority approval. Eight centres randomized participants to one of three treatment options, and three centres offered only UGFS and surgery. Participants were randomized between the treatments with even allocation, using a minimization algorithm that included centre, age (less than 50 years, 50 years or more), sex, great saphenous vein (GSV) or small saphenous vein (SSV) reflux, and unilateral or bilateral disease. Inclusion criteria were: age over 18 years; primary unilateral or bilateral symptomatic varicose veins (Clinical Etiologic Anatomic Pathophysiological (CEAP) grade C2 or above); GSV and/or SSV involvement; and reflux exceeding 1 s on duplex ultrasonography. Exclusion criteria were: current deep vein thrombosis; acute superficial vein thrombosis; a GSV or SSV diameter smaller than 3 mm or larger than 15 mm; tortuous veins considered unsuitable for EVLA or stripping; and contraindications to UGFS or to general/regional anaesthesia that would be required for surgery.\n\nBODY.METHODS.TREATMENTS:\nThe treatments have been described in detail elsewhere9, 10. For UGFS, foam was produced using the Tessari technique12 using a ratio of 0·5 ml sodium tetradecyl sulphate to 1·5 ml air (3 per cent for GSV/SSV truncal veins, 1 per cent for varicosities; maximum 12 ml foam per session). EVLA of GSVs/SSVs was performed under local anaesthetic, and patients were offered UGFS to any residual varicosities at 6‐week follow‐up if required, with the exception of one centre that performed concurrent phlebectomies. Surgery in the form of proximal GSV/SSV ligation and stripping (all GSV) and concurrent phlebectomies was performed under general or regional anaesthetic as a day‐case procedure. Compression stockings were applied after all three treatments.\n\nBODY.METHODS.POST‐TREATMENT ACTIVITY:\nAll participants were given a study patient information leaflet (PIL), which recommended a return to all normal activities as soon as they were able, but that strenuous activity/contact sport should be avoided for 1–2 weeks. The PIL specifically stated that following EVLA or UGFS 'most people are able to return to work within 2–3 days of treatment, but some people go back the following day or even the same day', and that following surgery 'people can return to office or sedentary work after 2–3 days; and that most people will be back at work within a week after surgery to one leg and 2 weeks after surgery to both legs; but there is no reason to remain off work as long if it can be managed with reasonable comfort'. 
Participants undergoing UGFS or EVLA were advised to wear compression stocking for 10 days constantly (day and night). Those in the surgery group were advised that bandages would be removed the day after operation, following which they should wear a stocking for 10 days, but that it was reasonable to remove the stocking after 4 or 5 days, providing that they were active.\n\nBODY.METHODS.DATA COLLECTION:\nThe participants were asked to complete the BRAVVO questionnaire along with other study questionnaires (Aberdeen Varicose Vein Questionnaire, EQ‐5DTM (EuroQoL, Rotterdam, The Netherlands) and Short Form 36 (QualityMetric, Lincoln, Rhode Island, USA)) at the 6‐week follow‐up appointment. Participants who failed to attend the 6‐week appointment were sent the questionnaire to complete at home. The BRAVVO questionnaire was developed as an instrument to assess the activity and participation components of the World Health Organization International Classification of Disability and Function model13. Variation in activity and participation is not fully explained by impairment and so these constructs are important additional indicators of health outcome. An interview study involving 17 patients who had recently undergone varicose vein treatment was carried out to identify normal activities and 'milestone' behaviours to incorporate into the questionnaire. In addition to sampling from the three treatment options, diversity sampling was used in an attempt to gain a mix of sex, age and rural–urban location. Seventeen interview transcripts were content‐analysed in four stages to identify appropriate items to include in a questionnaire. Full details of this process have been published previously9. The BRAVVO questionnaire assesses the time taken for patients to return to performing 15 behaviours: eight 'activity' behaviours (tasks or actions an individual is capable of doing in an idealized situation) and seven 'participation' behaviours (what the individual does in an everyday, real‐world situation) that were identified to be important from the patient's perspective9. Fig. 1 shows the question layout. Figure 1Question layoutBJS-10081-FIG-0001-c\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nData from the BRAVVO questionnaire were analysed within an interval‐censored time‐to‐event framework using flexible parametric survival models14. For each behaviour item, each participant's response was converted into the number of days to return to that behaviour. If a participant indicated that return to the behaviour was on the day of the procedure, this was assumed to be interval‐censored between day 0 and day 1. If a participant indicated return to the behaviour was after a number of weeks, not days, this was assumed to be interval‐censored between the previous week and the week indicated. For example, if a participant reported 5 weeks, it was assumed that the return to the behaviour took place between 28 and 35 days. A participant who indicated that they had not returned to a behaviour that they usually performed was right‐censored at 42 days. Participants who indicated that they did not normally perform a specific behaviour were not included in analysis of that behaviour. No missing data were imputed. Data are reported as the number of days for 50 and 90 per cent of participants to return to each behaviour, estimated from the parametric survival models (the 50 per cent value represents the median time to return to this behaviour). 
Extrapolation beyond the 42‐day cut‐off was performed for behaviours where 90 per cent of participants had not returned to the behaviour by 42 days. Treatment effects are presented as hazard ratios with associated 95 per cent c.i. All analyses were carried out in Stata® 1215. Flexible parametric survival models were fitted using the stpm package16.\n\nBODY.RESULTS:\nSeven hundred and ninety‐eight participants were recruited, of whom 13 were ineligible (for example because they had recurrent veins or veins larger than 15 mm in diameter) after randomization and were considered postrandomization exclusions (Fig. \n2). The groups were well balanced in terms of demographic characteristics at baseline, but there was an increased incidence of deep venous reflux in the foam group compared with the surgery group (P = 0·005) (Table \n1). Of the 670 participants who completed the 6‐week questionnaire, 655 completed at least one of the BRAVVO questions. Completion rates were slightly lower for the questions about going out socially (74·8 per cent) and sporting activity (66·0 per cent), which may not have been relevant to all participants. For all behaviours, except wearing clothes that show the leg, going out socially and sporting activities, over 95 per cent of participants had returned to normal behaviour within 6 weeks of intervention. Figure 2CONSORT diagram for the trial. Reasons for postrandomization exclusion included: recurrent varicose veins and veins larger than 15 mm. Reasons for withdrawal from follow‐up included: patient decided not to proceed with treatment (and also declined follow‐up), declined follow‐up after treatment or did not wish to complete questionnaires. UGFS, ultrasound‐guided foam sclerotherapy; EVLA, endovenous laser ablationBJS-10081-FIG-0002-c Table 1 Demographic details at recruitment EVLA UGFS Surgery ( n = 210) ( n = 286) ( n = 289) Age (years) * \n 49·7 (18–80) 49·0 (19–78) 49·2 (22–85) Sex ratio (F : M) 120 : 90 162 : 124 163 : 126 Body mass index (kg/m 2 ) * \n 27·0 (17–42) 27·1 (17–44) 27·7 (17–44) Unilateral disease 153 (72·9) 215 (75·2) 196 (67·8) Employment status Self‐employed 21 (10·2) 37 (13·0) 29 (10·3) Employed 120 (58·3) 169 (59·3) 179 (63·5) Other 65 (31·6) 79 (27·7) 74 (26·2) Unknown 4 1 7 Saphenous vein involvement Great saphenous 182 (86·7) 232 (81·1) 239 (82·7) Small saphenous 14 (6·7) 21 (7·3) 21 (7·3) Great and small saphenous 14 (6·7) 33 (11·5) 29 (10·0) Deep vein reflux 28 of 205 (13·7) 47 of 280 (16·8) 25 of 282 (8·9) CEAP classification C2, varicose veins over 3 mm 113 (54·1) 169 (59·1) 147 (51·2) C3, oedema 28 (13·4) 35 (12·2) 39 (13·6) C4, skin/subcutaneous changes 56 (26·8) 74 (25·9) 90 (31·4) C5/C6, healed/active venous ulcer 12 (5·7) 8 (2·8) 11 (3·8) Unknown 1 0 2 Values in parentheses are percentages unless indicated otherwise; * values are mean (range). EVLA, endovenous laser ablation; UGFS, ultrasound‐guided foam sclerotherapy; CEAP, Clinical Etiologic Anatomic Pathophysiologic. \n\nBODY.RESULTS.ULTRASOUND‐GUIDED FOAM SCLEROTHERAPY :\nParticipants randomized to UGFS recalled being able to carry out 13 of the 15 behaviours significantly more quickly than those randomized to surgery (Table \n2\n). The two behaviours for which there was no evidence of a difference in the time to recover between the trial arms were 'having a bath or shower' and 'wearing clothes that show the legs'. In general, the median time to return to the activity behaviours was 5 days or less for those randomized to UGFS and up to 9 days for those randomized to surgery. 
In both groups, there was greater variation in the median time to return to the participation behaviours than the activity behaviours. Table 2 Behavioural recovery: ultrasound‐guided foam sclerotherapy versus surgery Proportion carrying out behaviour (%) Time until specified proportion of participants could carry out behaviour (days) * \n Hazard ratio † \n UGFS Surgery Activity items Bending the legs without discomfort 50 3·0 4·6 1·38 (1·14, 1·67) 90 14·1 21·3 Lifting heavy objects without discomfort 50 4·8 9·8 1·97 (1·59, 2·44) 90 16·9 34·5 Moving from standing to sitting without discomfort 50 1·9 3·7 1·63 (1·35, 1·97) 90 9·3 17·5 Standing still for a long time (> 15 min ) without discomfort 50 3·9 7·1 1·67 (1·36, 2·05) 90 15·8 28·7 Walking short distances (< 20 min ) without discomfort 50 1·9 4·4 2·00 (1·65, 2·42) 90 8·2 19·1 Walking long distances (> 20 min) 50 4·5 8·0 1·76 (1·45, 2·14) 90 15·2 27·1 Having a bath or shower 50 5·4 4·9 0·85 (0·70, 1·03) 90 11·4 10·3 Driving a car 50 4·1 7·0 1·78 (1·45, 2·19) 90 12·4 21·1 Participation items Doing housework 50 2·1 4·5 2·10 (1·72, 2·56) 90 7·3 15·7 Looking after children 50 1·2 3·5 2·20 (1·61, 3·00) 90 6·2 17·9 Wearing clothes that show the legs 50 12·4 12·8 1·03 (0·78, 1·35) 90 56·6 58·7 Partial return to normal work/employment 50 4·4 9·9 2·16 (1·72, 2·72) 90 15·4 34·2 Full return to normal work/employment 50 4·8 11·7 2·56 (2·05, 3·21) 90 14·9 36·2 Going out socially 50 7·1 9·3 1·29 (1·06, 1·57) 90 25·8 34·0 Sporting activity or exercise 50 15·7 21·8 1·33 (1·05, 1·68) 90 62·6 86·7 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the surgery arm. UGFS, ultrasound‐guided foam sclerotherapy. \n\nBODY.RESULTS.ENDOVENOUS LASER ABLATION :\nParticipants randomized to EVLA recalled being able to carry out 13 of the 15 behaviours significantly more quickly than those randomized to surgery (Table \n3). Return to 'having a bath or shower' was quicker after surgery than after EVLA. There was no difference in time to return to the participation behaviour of 'wearing clothes that show the legs'. 
Table 3 Behavioural recovery: endovenous laser ablation versus surgery Proportion carrying out behaviour (%) Time until specified proportion of participants could carry out behaviour (days) * \n Hazard ratio † \n EVLA Surgery Activity items Bending the legs without discomfort 50 2·7 4·6 1·49 (1·19, 1·75) 90 12·6 21·3 Lifting heavy objects without discomfort 50 5·9 9·8 1·79 (1·39, 2·27) 90 20·5 34·5 Moving from standing to sitting without discomfort 50 2·2 3·7 1·56 (1·27, 1·96) 90 10·4 17·5 Standing still for a long time (> 15 min) without discomfort 50 4·8 7·1 1·41 (1·11, 1·79) 90 20·0 28·7 Walking short distances (< 20 min) without discomfort 50 3·0 4·4 1·30 (1·04, 1·61) 90 13·2 19·1 Walking long distances (> 20 min) 50 5·6 8·0 1·53 (1·06, 1·67) 90 19·8 27·1 Having a bath or shower 50 5·5 4·9 0·74 (0·59, 0·93) 90 12·8 10·3 Driving a car 50 4·4 7·0 1·82 (1·43, 2·33) 90 12·7 21·1 Participation items Doing housework 50 2·5 4·5 1·89 (1·49, 2·38) 90 8·4 15·7 Looking after children 50 1·9 3·5 1·61 (1·15, 2·27) 90 8·8 17·9 Wearing clothes that show the legs 50 14·6 12·8 0·97 (0·69, 1·35) 90 75·1 58·7 Partial return to normal work/employment 50 6·3 9·9 1·75 (1·33, 2·27) 90 21·1 34·2 Full return to normal work/employment 50 7·7 11·7 1·79 (1·37, 2·27) 90 23·5 36·2 Going out socially 50 6·9 9·3 1·41 (1·12, 1·75) 90 23·9 34·0 Sporting activity or exercise 50 14·2 21·8 1·47 (1·12, 1·92) 90 55·5 86·7 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the surgery arm. EVLA, endovenous laser ablation. There were no differences in the time taken to return to 11 of the 15 behaviours between participants randomized to EVLA and those randomized to UGFS (Table \n4). Return to 'walking short distances without discomfort', 'walking long distances', 'looking after children' and 'full return to normal work/employment' took longer for the EVLA group than the UGFS group. Following UGFS or EVLA only one‐third of the specific behaviours could be carried out by 50 per cent of participants by 3 days after treatment. 
Table 4 Behavioural recovery: endovenous laser ablation versus ultrasound‐guided foam sclerotherapy Proportion carrying out behaviour (%) Time until specified proportion of participants can carry out behaviour (days) * \n Hazard ratio † \n EVLA UGFS Activity items Bending the legs without discomfort 50 2·7 3·0 0·94 (0·75, 1·17) 90 12·6 14·1 Lifting heavy objects without discomfort 50 5·9 4·8 1·11 (0·87, 1·42) 90 20·5 16·9 Moving from standing to sitting without discomfort 50 2·2 1·9 1·12 (0·90, 1·40) 90 10·4 9·3 Standing still for a long time (> 15 min) without discomfort 50 4·8 3·9 1·14 (0·90, 1·44) 90 20·0 15·8 Walking short distances (< 20 min) without discomfort 50 3·0 1·9 1·48 (1·19, 1·84) 90 13·2 8·2 Walking long distances (> 20 min) 50 5·6 4·5 1·32 (1·05, 1·66) 90 19·8 15·2 Having a bath or shower 50 5·5 5·4 1·19 (0·96, 1·48) 90 12·8 11·4 Driving a car 50 4·4 4·1 0·95 (0·74, 1·21) 90 12·7 12·4 Participation items Doing housework 50 2·5 2·1 1·03 (0·82, 1·29) 90 8·4 7·3 Looking after children 50 1·9 1·2 1·45 (1·04, 2·02) 90 8·8 6·2 Wearing clothes that show the legs 50 14·6 12·4 1·17 (0·83, 1·64) 90 75·1 56·6 Partial return to normal work/employment 50 6·3 4·4 1·17 (0·89, 1·52) 90 21·1 15·4 Full return to normal work/employment 50 7·7 4·8 1·43 (1·11, 1·85) 90 23·5 14·9 Going out socially 50 6·9 7·1 0·88 (0·70, 1·10) 90 23·9 25·8 Sporting activity or exercise 50 14·2 15·7 0·80 (0·61, 1·04) 90 55·5 62·6 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the endovenous laser ablation (EVLA) arm. UGFS, ultrasound‐guided foam sclerotherapy. \n\nBODY.DISCUSSION:\nThis study showed that both UGFS and EVLA resulted in a more rapid recovery compared with surgery for 13 of the 15 behaviours. UGFS was superior to EVLA in terms of return to full time work, looking after children and walking (both short and long distances). Importantly, the specific behaviours assessed were shown to have a range of different recovery trajectories. Previous randomized clinical trials showed behavioural recovery to be more rapid following UGFS compared with surgery1, 2, but the benefit of EVLA over surgery was less clear2, 4, 8 . In this study, for all but two behaviours (wearing clothes that showed the legs and showering/bathing) the recovery was quicker following UGFS or EVLA compared with surgery. These findings may have arisen as a result of information contained in the study PIL, which recommended that compression hosiery was worn continuously for 10 days following UGFS or EVLA but for 4–5 days routinely after surgery. In the comparison between UGFS and EVLA, behavioural recovery was faster following UGFS for four of the 15 behaviours; there was no difference between the groups for the other behaviours. Two previous randomized trials2, 3 showed earlier return to 'normal activities' in patients undergoing UGFS compared with EVLA. Specifically, the present study showed a quicker return to full‐time work following UGFS, similar to the findings of Rasmussen and colleagues2. The median time taken to return to work following EVLA (7·7 days) was within the ranges reported2, 4, 5, 6, 7, 8. However, Rasmussen and colleagues2 reported earlier return to work after UGFS compared with the present study (median 2·9 versus 4·8 days respectively). 
A partial explanation of the difference between the two studies may be that, unlike the previous study, the present analysis did not correct for weekends. For other behaviours, the recalled recovery times following both UGFS and EVLA were longer than might be expected from the literature2, 3, 4, 5, 8. This may be explained by the timing of the questionnaire at 6 weeks, and thus it is the nature of the differences between treatment groups rather than the absolute timings taken to return to these activities that the authors wish to highlight in this paper. The extent of this overall delay in recovery is hard to justify, particularly in light of the standard information and advice given in the study PIL. There may have been a number of external influences affecting participants' recollection of their recovery, including misinformation and fear. Although attitudes to recovery and return to normal behaviours have changed in secondary care, this may not have filtered into primary care or 'public knowledge'. Fear of activity or fear of pain caused by activity has been documented following surgery for other conditions17, 18. It is possible that some people undergoing treatment for varicose veins experience similar fears, and this may limit or restrict their activity following treatment. With regard to return to work, there are clearly a number of additional factors that might play a role, such as a person's employment status (employed or self‐employed), the sickness benefits they are entitled to, the type of work they are employed to do, how long they are 'signed off' by the doctor, and the views of their employer on return to work after an operation. It should be noted that this study distinguished between partial and full return to work, and that no difference was noted in partial return to work following UGFS and EVLA. This finding may be of substantial importance to patients, their employers and the economy as a whole. The main strength of this study is that the behaviours investigated were based on systematic investigation of the recovery milestones that are important to patients following treatment for varicose veins. Hence, the findings are of personal importance from a patient perspective. Distinguishing between the behaviours that contribute to 'normal activity' helps build a profile of recovery that may be particularly useful for patients preparing for, or recovering from, treatment. Furthermore, the methodology used to develop the BRAVVO questionnaire could be used in other conditions to provide normative information about behavioural recovery that is relevant to patients. The BRAVVO questionnaire was pilot tested and found to be acceptable to patients, comprehensible and appropriate for self‐completion. Despite this, a potential weakness of the study is that the level of missing data in the BRAVVO questionnaire was higher for two of the questions. Further work to reformat or rephrase the questions or response options may help minimize levels of missing data. A further potential weakness is the choice of assessment time point (6 weeks after treatment). This may have compromised recall, particularly for behaviours that participants were able to return to a short time after treatment; however, any compromise in recall is likely to have affected the three treatment groups equally. Other study outcomes were assessed at 6 weeks, and behavioural recovery was assessed at the same time point to minimize participant burden. 
Further work is required to determine the optimal timing(s) of this questionnaire. Given that the median time to return to the behaviour was less than 14 days for 13 of the behaviours, and up to 22 days for the other two (wearing clothes that show the legs, sporting activity or exercise), the use of the questionnaire at approximately 2–3 weeks would seem appropriate.\n\n**Question:** Compared to surgery what was the result of Ultrasound‐guided foam sclerotherapy on Moving from standing to sitting without discomfort?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased |
590 |
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Behavioural recovery after treatment for varicose veins\n\n ABSTRACT.BACKGROUND:\nThe aim of this study was to assess behavioural recovery from the patient's perspective as a prespecified secondary outcome in a multicentre parallel‐group randomized clinical trial comparing ultrasound‐guided foam sclerotherapy (UGFS), endovenous laser ablation (EVLA) and surgery for the treatment of primary varicose veins.\n\nABSTRACT.METHODS:\nParticipants were recruited from 11 UK sites as part of the CLASS trial, a randomized trial of UGFS, EVLA or surgery for varicose veins. Patients were followed up 6 weeks after treatment and asked to complete the Behavioural Recovery After treatment for Varicose Veins (BRAVVO) questionnaire. This is a 15‐item instrument that covers eight activity behaviours (tasks or actions an individual is capable of doing in an idealized situation) and seven participation behaviours (what the individual does in an everyday, real‐world situation) that were identified to be important from the patient's perspective.\n\nABSTRACT.RESULTS:\nA total of 798 participants were recruited. Both UGFS and EVLA resulted in a significantly quicker recovery compared with surgery for 13 of the 15 behaviours assessed. UGFS was superior to EVLA in terms of return to full‐time work (hazard ratio 1·43, 95 per cent c.i. 1·11 to 1·85), looking after children (1·45, 1·04 to 2·02) and walks of short (1·48, 1·19 to 1·84) and longer (1·32, 1·05 to 1·66) duration.\n\nABSTRACT.CONCLUSION:\nBoth UGFS and EVLA resulted in more rapid recovery than surgery, and UGFS was superior to EVLA for one‐quarter of the behaviours assessed. The BRAVVO questionnaire has the potential to provide important meaningful information to patients about their early recovery and what they may expect to be able to achieve after treatment.\n\nBODY.INTRODUCTION:\nMinimally invasive treatments for varicose veins such as ultrasound‐guided foam sclerotherapy (UGFS) and thermal ablation techniques have become widely used alternatives to surgery for the treatment of varicose veins. One of the advantages of these techniques is the reported quicker return to normal activities, particularly following UGFS1, 2, 3. However, it is unclear whether thermal ablation, in particular endovenous laser ablation (EVLA), is also associated with a clinically significant quicker return to normal activities compared with surgery; some studies4, 5 have shown an earlier return and others2, 6, 7, 8 no difference. Until recently, there was no standard means of assessing recovery from the patient's perspective. This led to the use of varying definitions such as return to 'normal activities', 'full activity', 'daily activity' or 'basic physical activities' and/or 'return to work' in previous studies. 
This lack of standardization led the authors to develop a 15‐item questionnaire to assess distinct aspects of normal activities that were identified as important by patients9 – the Behavioural Recovery After treatment for Varicose Veins (BRAVVO) questionnaire. This paper reports behavioural recovery results from a multicentre parallel‐group randomized clinical trial (CLASS, Comparison of LAser, Surgery and foam Sclerotherapy) that compared the clinical efficacy and cost‐effectiveness of three treatment modalities: UGFS, EVLA with delayed foam sclerotherapy to residual varicosities if required, and surgery. Behavioural recovery was one of the prespecified secondary outcomes of the CLASS trial. The clinical and cost‐effectiveness results have been reported elsewhere10, 11.\n\nBODY.METHODS:\nPatients were recruited from 11 centres in the UK between November 2008 and October 2012. This study (ISRCTN51995477) had research ethics committee and Medicines and Healthcare products Regulatory Authority approval. Eight centres randomized participants to one of three treatment options, and three centres offered only UGFS and surgery. Participants were randomized between the treatments with even allocation, using a minimization algorithm that included centre, age (less than 50 years, 50 years or more), sex, great saphenous vein (GSV) or small saphenous vein (SSV) reflux, and unilateral or bilateral disease. Inclusion criteria were: age over 18 years; primary unilateral or bilateral symptomatic varicose veins (Clinical Etiologic Anatomic Pathophysiological (CEAP) grade C2 or above); GSV and/or SSV involvement; and reflux exceeding 1 s on duplex ultrasonography. Exclusion criteria were: current deep vein thrombosis; acute superficial vein thrombosis; a GSV or SSV diameter smaller than 3 mm or larger than 15 mm; tortuous veins considered unsuitable for EVLA or stripping; and contraindications to UGFS or to general/regional anaesthesia that would be required for surgery.\n\nBODY.METHODS.TREATMENTS:\nThe treatments have been described in detail elsewhere9, 10. For UGFS, foam was produced using the Tessari technique12 using a ratio of 0·5 ml sodium tetradecyl sulphate to 1·5 ml air (3 per cent for GSV/SSV truncal veins, 1 per cent for varicosities; maximum 12 ml foam per session). EVLA of GSVs/SSVs was performed under local anaesthetic, and patients were offered UGFS to any residual varicosities at 6‐week follow‐up if required, with the exception of one centre that performed concurrent phlebectomies. Surgery in the form of proximal GSV/SSV ligation and stripping (all GSV) and concurrent phlebectomies was performed under general or regional anaesthetic as a day‐case procedure. Compression stockings were applied after all three treatments.\n\nBODY.METHODS.POST‐TREATMENT ACTIVITY:\nAll participants were given a study patient information leaflet (PIL), which recommended a return to all normal activities as soon as they were able, but that strenuous activity/contact sport should be avoided for 1–2 weeks. The PIL specifically stated that following EVLA or UGFS 'most people are able to return to work within 2–3 days of treatment, but some people go back the following day or even the same day', and that following surgery 'people can return to office or sedentary work after 2–3 days; and that most people will be back at work within a week after surgery to one leg and 2 weeks after surgery to both legs; but there is no reason to remain off work as long if it can be managed with reasonable comfort'. 
Participants undergoing UGFS or EVLA were advised to wear compression stocking for 10 days constantly (day and night). Those in the surgery group were advised that bandages would be removed the day after operation, following which they should wear a stocking for 10 days, but that it was reasonable to remove the stocking after 4 or 5 days, providing that they were active.\n\nBODY.METHODS.DATA COLLECTION:\nThe participants were asked to complete the BRAVVO questionnaire along with other study questionnaires (Aberdeen Varicose Vein Questionnaire, EQ‐5DTM (EuroQoL, Rotterdam, The Netherlands) and Short Form 36 (QualityMetric, Lincoln, Rhode Island, USA)) at the 6‐week follow‐up appointment. Participants who failed to attend the 6‐week appointment were sent the questionnaire to complete at home. The BRAVVO questionnaire was developed as an instrument to assess the activity and participation components of the World Health Organization International Classification of Disability and Function model13. Variation in activity and participation is not fully explained by impairment and so these constructs are important additional indicators of health outcome. An interview study involving 17 patients who had recently undergone varicose vein treatment was carried out to identify normal activities and 'milestone' behaviours to incorporate into the questionnaire. In addition to sampling from the three treatment options, diversity sampling was used in an attempt to gain a mix of sex, age and rural–urban location. Seventeen interview transcripts were content‐analysed in four stages to identify appropriate items to include in a questionnaire. Full details of this process have been published previously9. The BRAVVO questionnaire assesses the time taken for patients to return to performing 15 behaviours: eight 'activity' behaviours (tasks or actions an individual is capable of doing in an idealized situation) and seven 'participation' behaviours (what the individual does in an everyday, real‐world situation) that were identified to be important from the patient's perspective9. Fig. 1 shows the question layout. Figure 1Question layoutBJS-10081-FIG-0001-c\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nData from the BRAVVO questionnaire were analysed within an interval‐censored time‐to‐event framework using flexible parametric survival models14. For each behaviour item, each participant's response was converted into the number of days to return to that behaviour. If a participant indicated that return to the behaviour was on the day of the procedure, this was assumed to be interval‐censored between day 0 and day 1. If a participant indicated return to the behaviour was after a number of weeks, not days, this was assumed to be interval‐censored between the previous week and the week indicated. For example, if a participant reported 5 weeks, it was assumed that the return to the behaviour took place between 28 and 35 days. A participant who indicated that they had not returned to a behaviour that they usually performed was right‐censored at 42 days. Participants who indicated that they did not normally perform a specific behaviour were not included in analysis of that behaviour. No missing data were imputed. Data are reported as the number of days for 50 and 90 per cent of participants to return to each behaviour, estimated from the parametric survival models (the 50 per cent value represents the median time to return to this behaviour). 
Extrapolation beyond the 42‐day cut‐off was performed for behaviours where 90 per cent of participants had not returned to the behaviour by 42 days. Treatment effects are presented as hazard ratios with associated 95 per cent c.i. All analyses were carried out in Stata® 1215. Flexible parametric survival models were fitted using the stpm package16.\n\nBODY.RESULTS:\nSeven hundred and ninety‐eight participants were recruited, of whom 13 were ineligible (for example because they had recurrent veins or veins larger than 15 mm in diameter) after randomization and were considered postrandomization exclusions (Fig. \n2). The groups were well balanced in terms of demographic characteristics at baseline, but there was an increased incidence of deep venous reflux in the foam group compared with the surgery group (P = 0·005) (Table \n1). Of the 670 participants who completed the 6‐week questionnaire, 655 completed at least one of the BRAVVO questions. Completion rates were slightly lower for the questions about going out socially (74·8 per cent) and sporting activity (66·0 per cent), which may not have been relevant to all participants. For all behaviours, except wearing clothes that show the leg, going out socially and sporting activities, over 95 per cent of participants had returned to normal behaviour within 6 weeks of intervention. Figure 2CONSORT diagram for the trial. Reasons for postrandomization exclusion included: recurrent varicose veins and veins larger than 15 mm. Reasons for withdrawal from follow‐up included: patient decided not to proceed with treatment (and also declined follow‐up), declined follow‐up after treatment or did not wish to complete questionnaires. UGFS, ultrasound‐guided foam sclerotherapy; EVLA, endovenous laser ablationBJS-10081-FIG-0002-c Table 1 Demographic details at recruitment EVLA UGFS Surgery ( n = 210) ( n = 286) ( n = 289) Age (years) * \n 49·7 (18–80) 49·0 (19–78) 49·2 (22–85) Sex ratio (F : M) 120 : 90 162 : 124 163 : 126 Body mass index (kg/m 2 ) * \n 27·0 (17–42) 27·1 (17–44) 27·7 (17–44) Unilateral disease 153 (72·9) 215 (75·2) 196 (67·8) Employment status Self‐employed 21 (10·2) 37 (13·0) 29 (10·3) Employed 120 (58·3) 169 (59·3) 179 (63·5) Other 65 (31·6) 79 (27·7) 74 (26·2) Unknown 4 1 7 Saphenous vein involvement Great saphenous 182 (86·7) 232 (81·1) 239 (82·7) Small saphenous 14 (6·7) 21 (7·3) 21 (7·3) Great and small saphenous 14 (6·7) 33 (11·5) 29 (10·0) Deep vein reflux 28 of 205 (13·7) 47 of 280 (16·8) 25 of 282 (8·9) CEAP classification C2, varicose veins over 3 mm 113 (54·1) 169 (59·1) 147 (51·2) C3, oedema 28 (13·4) 35 (12·2) 39 (13·6) C4, skin/subcutaneous changes 56 (26·8) 74 (25·9) 90 (31·4) C5/C6, healed/active venous ulcer 12 (5·7) 8 (2·8) 11 (3·8) Unknown 1 0 2 Values in parentheses are percentages unless indicated otherwise; * values are mean (range). EVLA, endovenous laser ablation; UGFS, ultrasound‐guided foam sclerotherapy; CEAP, Clinical Etiologic Anatomic Pathophysiologic. \n\nBODY.RESULTS.ULTRASOUND‐GUIDED FOAM SCLEROTHERAPY :\nParticipants randomized to UGFS recalled being able to carry out 13 of the 15 behaviours significantly more quickly than those randomized to surgery (Table \n2\n). The two behaviours for which there was no evidence of a difference in the time to recover between the trial arms were 'having a bath or shower' and 'wearing clothes that show the legs'. In general, the median time to return to the activity behaviours was 5 days or less for those randomized to UGFS and up to 9 days for those randomized to surgery. 
In both groups, there was greater variation in the median time to return to the participation behaviours than the activity behaviours. Table 2 Behavioural recovery: ultrasound‐guided foam sclerotherapy versus surgery Proportion carrying out behaviour (%) Time until specified proportion of participants could carry out behaviour (days) * \n Hazard ratio † \n UGFS Surgery Activity items Bending the legs without discomfort 50 3·0 4·6 1·38 (1·14, 1·67) 90 14·1 21·3 Lifting heavy objects without discomfort 50 4·8 9·8 1·97 (1·59, 2·44) 90 16·9 34·5 Moving from standing to sitting without discomfort 50 1·9 3·7 1·63 (1·35, 1·97) 90 9·3 17·5 Standing still for a long time (> 15 min ) without discomfort 50 3·9 7·1 1·67 (1·36, 2·05) 90 15·8 28·7 Walking short distances (< 20 min ) without discomfort 50 1·9 4·4 2·00 (1·65, 2·42) 90 8·2 19·1 Walking long distances (> 20 min) 50 4·5 8·0 1·76 (1·45, 2·14) 90 15·2 27·1 Having a bath or shower 50 5·4 4·9 0·85 (0·70, 1·03) 90 11·4 10·3 Driving a car 50 4·1 7·0 1·78 (1·45, 2·19) 90 12·4 21·1 Participation items Doing housework 50 2·1 4·5 2·10 (1·72, 2·56) 90 7·3 15·7 Looking after children 50 1·2 3·5 2·20 (1·61, 3·00) 90 6·2 17·9 Wearing clothes that show the legs 50 12·4 12·8 1·03 (0·78, 1·35) 90 56·6 58·7 Partial return to normal work/employment 50 4·4 9·9 2·16 (1·72, 2·72) 90 15·4 34·2 Full return to normal work/employment 50 4·8 11·7 2·56 (2·05, 3·21) 90 14·9 36·2 Going out socially 50 7·1 9·3 1·29 (1·06, 1·57) 90 25·8 34·0 Sporting activity or exercise 50 15·7 21·8 1·33 (1·05, 1·68) 90 62·6 86·7 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the surgery arm. UGFS, ultrasound‐guided foam sclerotherapy. \n\nBODY.RESULTS.ENDOVENOUS LASER ABLATION :\nParticipants randomized to EVLA recalled being able to carry out 13 of the 15 behaviours significantly more quickly than those randomized to surgery (Table \n3). Return to 'having a bath or shower' was quicker after surgery than after EVLA. There was no difference in time to return to the participation behaviour of 'wearing clothes that show the legs'. 
Table 3 Behavioural recovery: endovenous laser ablation versus surgery Proportion carrying out behaviour (%) Time until specified proportion of participants could carry out behaviour (days) * \n Hazard ratio † \n EVLA Surgery Activity items Bending the legs without discomfort 50 2·7 4·6 1·49 (1·19, 1·75) 90 12·6 21·3 Lifting heavy objects without discomfort 50 5·9 9·8 1·79 (1·39, 2·27) 90 20·5 34·5 Moving from standing to sitting without discomfort 50 2·2 3·7 1·56 (1·27, 1·96) 90 10·4 17·5 Standing still for a long time (> 15 min) without discomfort 50 4·8 7·1 1·41 (1·11, 1·79) 90 20·0 28·7 Walking short distances (< 20 min) without discomfort 50 3·0 4·4 1·30 (1·04, 1·61) 90 13·2 19·1 Walking long distances (> 20 min) 50 5·6 8·0 1·53 (1·06, 1·67) 90 19·8 27·1 Having a bath or shower 50 5·5 4·9 0·74 (0·59, 0·93) 90 12·8 10·3 Driving a car 50 4·4 7·0 1·82 (1·43, 2·33) 90 12·7 21·1 Participation items Doing housework 50 2·5 4·5 1·89 (1·49, 2·38) 90 8·4 15·7 Looking after children 50 1·9 3·5 1·61 (1·15, 2·27) 90 8·8 17·9 Wearing clothes that show the legs 50 14·6 12·8 0·97 (0·69, 1·35) 90 75·1 58·7 Partial return to normal work/employment 50 6·3 9·9 1·75 (1·33, 2·27) 90 21·1 34·2 Full return to normal work/employment 50 7·7 11·7 1·79 (1·37, 2·27) 90 23·5 36·2 Going out socially 50 6·9 9·3 1·41 (1·12, 1·75) 90 23·9 34·0 Sporting activity or exercise 50 14·2 21·8 1·47 (1·12, 1·92) 90 55·5 86·7 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the surgery arm. EVLA, endovenous laser ablation. There were no differences in the time taken to return to 11 of the 15 behaviours between participants randomized to EVLA and those randomized to UGFS (Table \n4). Return to 'walking short distances without discomfort', 'walking long distances', 'looking after children' and 'full return to normal work/employment' took longer for the EVLA group than the UGFS group. Following UGFS or EVLA only one‐third of the specific behaviours could be carried out by 50 per cent of participants by 3 days after treatment. 
Table 4 Behavioural recovery: endovenous laser ablation versus ultrasound‐guided foam sclerotherapy Proportion carrying out behaviour (%) Time until specified proportion of participants can carry out behaviour (days) * \n Hazard ratio † \n EVLA UGFS Activity items Bending the legs without discomfort 50 2·7 3·0 0·94 (0·75, 1·17) 90 12·6 14·1 Lifting heavy objects without discomfort 50 5·9 4·8 1·11 (0·87, 1·42) 90 20·5 16·9 Moving from standing to sitting without discomfort 50 2·2 1·9 1·12 (0·90, 1·40) 90 10·4 9·3 Standing still for a long time (> 15 min) without discomfort 50 4·8 3·9 1·14 (0·90, 1·44) 90 20·0 15·8 Walking short distances (< 20 min) without discomfort 50 3·0 1·9 1·48 (1·19, 1·84) 90 13·2 8·2 Walking long distances (> 20 min) 50 5·6 4·5 1·32 (1·05, 1·66) 90 19·8 15·2 Having a bath or shower 50 5·5 5·4 1·19 (0·96, 1·48) 90 12·8 11·4 Driving a car 50 4·4 4·1 0·95 (0·74, 1·21) 90 12·7 12·4 Participation items Doing housework 50 2·5 2·1 1·03 (0·82, 1·29) 90 8·4 7·3 Looking after children 50 1·9 1·2 1·45 (1·04, 2·02) 90 8·8 6·2 Wearing clothes that show the legs 50 14·6 12·4 1·17 (0·83, 1·64) 90 75·1 56·6 Partial return to normal work/employment 50 6·3 4·4 1·17 (0·89, 1·52) 90 21·1 15·4 Full return to normal work/employment 50 7·7 4·8 1·43 (1·11, 1·85) 90 23·5 14·9 Going out socially 50 6·9 7·1 0·88 (0·70, 1·10) 90 23·9 25·8 Sporting activity or exercise 50 14·2 15·7 0·80 (0·61, 1·04) 90 55·5 62·6 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the endovenous laser ablation (EVLA) arm. UGFS, ultrasound‐guided foam sclerotherapy. \n\nBODY.DISCUSSION:\nThis study showed that both UGFS and EVLA resulted in a more rapid recovery compared with surgery for 13 of the 15 behaviours. UGFS was superior to EVLA in terms of return to full time work, looking after children and walking (both short and long distances). Importantly, the specific behaviours assessed were shown to have a range of different recovery trajectories. Previous randomized clinical trials showed behavioural recovery to be more rapid following UGFS compared with surgery1, 2, but the benefit of EVLA over surgery was less clear2, 4, 8 . In this study, for all but two behaviours (wearing clothes that showed the legs and showering/bathing) the recovery was quicker following UGFS or EVLA compared with surgery. These findings may have arisen as a result of information contained in the study PIL, which recommended that compression hosiery was worn continuously for 10 days following UGFS or EVLA but for 4–5 days routinely after surgery. In the comparison between UGFS and EVLA, behavioural recovery was faster following UGFS for four of the 15 behaviours; there was no difference between the groups for the other behaviours. Two previous randomized trials2, 3 showed earlier return to 'normal activities' in patients undergoing UGFS compared with EVLA. Specifically, the present study showed a quicker return to full‐time work following UGFS, similar to the findings of Rasmussen and colleagues2. The median time taken to return to work following EVLA (7·7 days) was within the ranges reported2, 4, 5, 6, 7, 8. However, Rasmussen and colleagues2 reported earlier return to work after UGFS compared with the present study (median 2·9 versus 4·8 days respectively). 
A partial explanation of the difference between the two studies may be that, unlike the previous study, the present analysis did not correct for weekends. For other behaviours, the recalled recovery times following both UGFS and EVLA were longer than might be expected from the literature2, 3, 4, 5, 8. This may be explained by the timing of the questionnaire at 6 weeks, and thus it is the nature of the differences between treatment groups rather than the absolute timings taken to return to these activities that the authors wish to highlight in this paper. The extent of this overall delay in recovery is hard to justify, particularly in light of the standard information and advice given in the study PIL. There may have been a number of external influences affecting participants' recollection of their recovery, including misinformation and fear. Although attitudes to recovery and return to normal behaviours have changed in secondary care, this may not have filtered into primary care or 'public knowledge'. Fear of activity or fear of pain caused by activity has been documented following surgery for other conditions17, 18. It is possible that some people undergoing treatment for varicose veins experience similar fears, and this may limit or restrict their activity following treatment. With regard to return to work, there are clearly a number of additional factors that might play a role, such as a person's employment status (employed or self‐employed), the sickness benefits they are entitled to, the type of work they are employed to do, how long they are 'signed off' by the doctor, and the views of their employer on return to work after an operation. It should be noted that this study distinguished between partial and full return to work, and that no difference was noted in partial return to work following UGFS and EVLA. This finding may be of substantial importance to patients, their employers and the economy as a whole. The main strength of this study is that the behaviours investigated were based on systematic investigation of the recovery milestones that are important to patients following treatment for varicose veins. Hence, the findings are of personal importance from a patient perspective. Distinguishing between the behaviours that contribute to 'normal activity' helps build a profile of recovery that may be particularly useful for patients preparing for, or recovering from, treatment. Furthermore, the methodology used to develop the BRAVVO questionnaire could be used in other conditions to provide normative information about behavioural recovery that is relevant to patients. The BRAVVO questionnaire was pilot tested and found to be acceptable to patients, comprehensible and appropriate for self‐completion. Despite this, a potential weakness of the study is that the level of missing data in the BRAVVO questionnaire was higher for two of the questions. Further work to reformat or rephrase the questions or response options may help minimize levels of missing data. A further potential weakness is the choice of assessment time point (6 weeks after treatment). This may have compromised recall, particularly for behaviours that participants were able to return to a short time after treatment; however, any compromise in recall is likely to have affected the three treatment groups equally. Other study outcomes were assessed at 6 weeks, and behavioural recovery was assessed at the same time point to minimize participant burden. 
Further work is required to determine the optimal timing(s) of this questionnaire. Given that the median time to return to the behaviour was less than 14 days for 13 of the behaviours, and up to 22 days for the other two (wearing clothes that show the legs, sporting activity or exercise), the use of the questionnaire at approximately 2–3 weeks would seem appropriate.\n\n**Question:** Compared to surgery what was the result of Ultrasound‐guided foam sclerotherapy on Driving a car?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
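The completion and Label cells above resolve each ICO question by reading a hazard ratio and its 95 per cent confidence interval from the quoted trial tables. Below is a minimal sketch of that decision rule, assuming the label depends only on whether the interval excludes 1·00 and on the direction of the hazard ratio; the function name and the returned strings (which follow the options listed in the question rather than the past-tense gold label) are illustrative, not taken from the dataset.

```python
def classify_effect(hr: float, ci_low: float, ci_high: float) -> str:
    """Map a hazard ratio and its 95% CI to one of the three answer options.

    Assumed rule: an effect is "significant" only when the CI excludes 1.00;
    per the table footnotes, HR > 1.00 means the comparator arm took longer
    to return to the behaviour, i.e. recovery was faster with the intervention.
    """
    if ci_low <= 1.0 <= ci_high:           # CI crosses 1.00 -> not significant
        return "no significant difference"
    return "significantly increase" if hr > 1.0 else "significantly decrease"


if __name__ == "__main__":
    # Driving a car, UGFS versus surgery (Table 2): HR 1.78 (1.45, 2.19)
    print(classify_effect(1.78, 1.45, 2.19))   # -> significantly increase
    # Driving a car, EVLA versus UGFS (Table 4): HR 0.95 (0.74, 1.21)
    print(classify_effect(0.95, 0.74, 1.21))   # -> no significant difference
```

Applied to the Table 4 items queried in a later row (bending the legs, lifting heavy objects, driving a car, and so on, EVLA versus UGFS), the same rule yields "no significant difference", consistent with that row's label.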
592
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Behavioural recovery after treatment for varicose veins\n\n ABSTRACT.BACKGROUND:\nThe aim of this study was to assess behavioural recovery from the patient's perspective as a prespecified secondary outcome in a multicentre parallel‐group randomized clinical trial comparing ultrasound‐guided foam sclerotherapy (UGFS), endovenous laser ablation (EVLA) and surgery for the treatment of primary varicose veins.\n\nABSTRACT.METHODS:\nParticipants were recruited from 11 UK sites as part of the CLASS trial, a randomized trial of UGFS, EVLA or surgery for varicose veins. Patients were followed up 6 weeks after treatment and asked to complete the Behavioural Recovery After treatment for Varicose Veins (BRAVVO) questionnaire. This is a 15‐item instrument that covers eight activity behaviours (tasks or actions an individual is capable of doing in an idealized situation) and seven participation behaviours (what the individual does in an everyday, real‐world situation) that were identified to be important from the patient's perspective.\n\nABSTRACT.RESULTS:\nA total of 798 participants were recruited. Both UGFS and EVLA resulted in a significantly quicker recovery compared with surgery for 13 of the 15 behaviours assessed. UGFS was superior to EVLA in terms of return to full‐time work (hazard ratio 1·43, 95 per cent c.i. 1·11 to 1·85), looking after children (1·45, 1·04 to 2·02) and walks of short (1·48, 1·19 to 1·84) and longer (1·32, 1·05 to 1·66) duration.\n\nABSTRACT.CONCLUSION:\nBoth UGFS and EVLA resulted in more rapid recovery than surgery, and UGFS was superior to EVLA for one‐quarter of the behaviours assessed. The BRAVVO questionnaire has the potential to provide important meaningful information to patients about their early recovery and what they may expect to be able to achieve after treatment.\n\nBODY.INTRODUCTION:\nMinimally invasive treatments for varicose veins such as ultrasound‐guided foam sclerotherapy (UGFS) and thermal ablation techniques have become widely used alternatives to surgery for the treatment of varicose veins. One of the advantages of these techniques is the reported quicker return to normal activities, particularly following UGFS1, 2, 3. However, it is unclear whether thermal ablation, in particular endovenous laser ablation (EVLA), is also associated with a clinically significant quicker return to normal activities compared with surgery; some studies4, 5 have shown an earlier return and others2, 6, 7, 8 no difference. Until recently, there was no standard means of assessing recovery from the patient's perspective. This led to the use of varying definitions such as return to 'normal activities', 'full activity', 'daily activity' or 'basic physical activities' and/or 'return to work' in previous studies. 
This lack of standardization led the authors to develop a 15‐item questionnaire to assess distinct aspects of normal activities that were identified as important by patients9 – the Behavioural Recovery After treatment for Varicose Veins (BRAVVO) questionnaire. This paper reports behavioural recovery results from a multicentre parallel‐group randomized clinical trial (CLASS, Comparison of LAser, Surgery and foam Sclerotherapy) that compared the clinical efficacy and cost‐effectiveness of three treatment modalities: UGFS, EVLA with delayed foam sclerotherapy to residual varicosities if required, and surgery. Behavioural recovery was one of the prespecified secondary outcomes of the CLASS trial. The clinical and cost‐effectiveness results have been reported elsewhere10, 11.\n\nBODY.METHODS:\nPatients were recruited from 11 centres in the UK between November 2008 and October 2012. This study (ISRCTN51995477) had research ethics committee and Medicines and Healthcare products Regulatory Authority approval. Eight centres randomized participants to one of three treatment options, and three centres offered only UGFS and surgery. Participants were randomized between the treatments with even allocation, using a minimization algorithm that included centre, age (less than 50 years, 50 years or more), sex, great saphenous vein (GSV) or small saphenous vein (SSV) reflux, and unilateral or bilateral disease. Inclusion criteria were: age over 18 years; primary unilateral or bilateral symptomatic varicose veins (Clinical Etiologic Anatomic Pathophysiological (CEAP) grade C2 or above); GSV and/or SSV involvement; and reflux exceeding 1 s on duplex ultrasonography. Exclusion criteria were: current deep vein thrombosis; acute superficial vein thrombosis; a GSV or SSV diameter smaller than 3 mm or larger than 15 mm; tortuous veins considered unsuitable for EVLA or stripping; and contraindications to UGFS or to general/regional anaesthesia that would be required for surgery.\n\nBODY.METHODS.TREATMENTS:\nThe treatments have been described in detail elsewhere9, 10. For UGFS, foam was produced using the Tessari technique12 using a ratio of 0·5 ml sodium tetradecyl sulphate to 1·5 ml air (3 per cent for GSV/SSV truncal veins, 1 per cent for varicosities; maximum 12 ml foam per session). EVLA of GSVs/SSVs was performed under local anaesthetic, and patients were offered UGFS to any residual varicosities at 6‐week follow‐up if required, with the exception of one centre that performed concurrent phlebectomies. Surgery in the form of proximal GSV/SSV ligation and stripping (all GSV) and concurrent phlebectomies was performed under general or regional anaesthetic as a day‐case procedure. Compression stockings were applied after all three treatments.\n\nBODY.METHODS.POST‐TREATMENT ACTIVITY:\nAll participants were given a study patient information leaflet (PIL), which recommended a return to all normal activities as soon as they were able, but that strenuous activity/contact sport should be avoided for 1–2 weeks. The PIL specifically stated that following EVLA or UGFS 'most people are able to return to work within 2–3 days of treatment, but some people go back the following day or even the same day', and that following surgery 'people can return to office or sedentary work after 2–3 days; and that most people will be back at work within a week after surgery to one leg and 2 weeks after surgery to both legs; but there is no reason to remain off work as long if it can be managed with reasonable comfort'. 
Participants undergoing UGFS or EVLA were advised to wear compression stocking for 10 days constantly (day and night). Those in the surgery group were advised that bandages would be removed the day after operation, following which they should wear a stocking for 10 days, but that it was reasonable to remove the stocking after 4 or 5 days, providing that they were active.\n\nBODY.METHODS.DATA COLLECTION:\nThe participants were asked to complete the BRAVVO questionnaire along with other study questionnaires (Aberdeen Varicose Vein Questionnaire, EQ‐5DTM (EuroQoL, Rotterdam, The Netherlands) and Short Form 36 (QualityMetric, Lincoln, Rhode Island, USA)) at the 6‐week follow‐up appointment. Participants who failed to attend the 6‐week appointment were sent the questionnaire to complete at home. The BRAVVO questionnaire was developed as an instrument to assess the activity and participation components of the World Health Organization International Classification of Disability and Function model13. Variation in activity and participation is not fully explained by impairment and so these constructs are important additional indicators of health outcome. An interview study involving 17 patients who had recently undergone varicose vein treatment was carried out to identify normal activities and 'milestone' behaviours to incorporate into the questionnaire. In addition to sampling from the three treatment options, diversity sampling was used in an attempt to gain a mix of sex, age and rural–urban location. Seventeen interview transcripts were content‐analysed in four stages to identify appropriate items to include in a questionnaire. Full details of this process have been published previously9. The BRAVVO questionnaire assesses the time taken for patients to return to performing 15 behaviours: eight 'activity' behaviours (tasks or actions an individual is capable of doing in an idealized situation) and seven 'participation' behaviours (what the individual does in an everyday, real‐world situation) that were identified to be important from the patient's perspective9. Fig. 1 shows the question layout. Figure 1Question layoutBJS-10081-FIG-0001-c\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nData from the BRAVVO questionnaire were analysed within an interval‐censored time‐to‐event framework using flexible parametric survival models14. For each behaviour item, each participant's response was converted into the number of days to return to that behaviour. If a participant indicated that return to the behaviour was on the day of the procedure, this was assumed to be interval‐censored between day 0 and day 1. If a participant indicated return to the behaviour was after a number of weeks, not days, this was assumed to be interval‐censored between the previous week and the week indicated. For example, if a participant reported 5 weeks, it was assumed that the return to the behaviour took place between 28 and 35 days. A participant who indicated that they had not returned to a behaviour that they usually performed was right‐censored at 42 days. Participants who indicated that they did not normally perform a specific behaviour were not included in analysis of that behaviour. No missing data were imputed. Data are reported as the number of days for 50 and 90 per cent of participants to return to each behaviour, estimated from the parametric survival models (the 50 per cent value represents the median time to return to this behaviour). 
Extrapolation beyond the 42‐day cut‐off was performed for behaviours where 90 per cent of participants had not returned to the behaviour by 42 days. Treatment effects are presented as hazard ratios with associated 95 per cent c.i. All analyses were carried out in Stata® 1215. Flexible parametric survival models were fitted using the stpm package16.\n\nBODY.RESULTS:\nSeven hundred and ninety‐eight participants were recruited, of whom 13 were ineligible (for example because they had recurrent veins or veins larger than 15 mm in diameter) after randomization and were considered postrandomization exclusions (Fig. \n2). The groups were well balanced in terms of demographic characteristics at baseline, but there was an increased incidence of deep venous reflux in the foam group compared with the surgery group (P = 0·005) (Table \n1). Of the 670 participants who completed the 6‐week questionnaire, 655 completed at least one of the BRAVVO questions. Completion rates were slightly lower for the questions about going out socially (74·8 per cent) and sporting activity (66·0 per cent), which may not have been relevant to all participants. For all behaviours, except wearing clothes that show the leg, going out socially and sporting activities, over 95 per cent of participants had returned to normal behaviour within 6 weeks of intervention. Figure 2CONSORT diagram for the trial. Reasons for postrandomization exclusion included: recurrent varicose veins and veins larger than 15 mm. Reasons for withdrawal from follow‐up included: patient decided not to proceed with treatment (and also declined follow‐up), declined follow‐up after treatment or did not wish to complete questionnaires. UGFS, ultrasound‐guided foam sclerotherapy; EVLA, endovenous laser ablationBJS-10081-FIG-0002-c Table 1 Demographic details at recruitment EVLA UGFS Surgery ( n = 210) ( n = 286) ( n = 289) Age (years) * \n 49·7 (18–80) 49·0 (19–78) 49·2 (22–85) Sex ratio (F : M) 120 : 90 162 : 124 163 : 126 Body mass index (kg/m 2 ) * \n 27·0 (17–42) 27·1 (17–44) 27·7 (17–44) Unilateral disease 153 (72·9) 215 (75·2) 196 (67·8) Employment status Self‐employed 21 (10·2) 37 (13·0) 29 (10·3) Employed 120 (58·3) 169 (59·3) 179 (63·5) Other 65 (31·6) 79 (27·7) 74 (26·2) Unknown 4 1 7 Saphenous vein involvement Great saphenous 182 (86·7) 232 (81·1) 239 (82·7) Small saphenous 14 (6·7) 21 (7·3) 21 (7·3) Great and small saphenous 14 (6·7) 33 (11·5) 29 (10·0) Deep vein reflux 28 of 205 (13·7) 47 of 280 (16·8) 25 of 282 (8·9) CEAP classification C2, varicose veins over 3 mm 113 (54·1) 169 (59·1) 147 (51·2) C3, oedema 28 (13·4) 35 (12·2) 39 (13·6) C4, skin/subcutaneous changes 56 (26·8) 74 (25·9) 90 (31·4) C5/C6, healed/active venous ulcer 12 (5·7) 8 (2·8) 11 (3·8) Unknown 1 0 2 Values in parentheses are percentages unless indicated otherwise; * values are mean (range). EVLA, endovenous laser ablation; UGFS, ultrasound‐guided foam sclerotherapy; CEAP, Clinical Etiologic Anatomic Pathophysiologic. \n\nBODY.RESULTS.ULTRASOUND‐GUIDED FOAM SCLEROTHERAPY :\nParticipants randomized to UGFS recalled being able to carry out 13 of the 15 behaviours significantly more quickly than those randomized to surgery (Table \n2\n). The two behaviours for which there was no evidence of a difference in the time to recover between the trial arms were 'having a bath or shower' and 'wearing clothes that show the legs'. In general, the median time to return to the activity behaviours was 5 days or less for those randomized to UGFS and up to 9 days for those randomized to surgery. 
In both groups, there was greater variation in the median time to return to the participation behaviours than the activity behaviours. Table 2 Behavioural recovery: ultrasound‐guided foam sclerotherapy versus surgery Proportion carrying out behaviour (%) Time until specified proportion of participants could carry out behaviour (days) * \n Hazard ratio † \n UGFS Surgery Activity items Bending the legs without discomfort 50 3·0 4·6 1·38 (1·14, 1·67) 90 14·1 21·3 Lifting heavy objects without discomfort 50 4·8 9·8 1·97 (1·59, 2·44) 90 16·9 34·5 Moving from standing to sitting without discomfort 50 1·9 3·7 1·63 (1·35, 1·97) 90 9·3 17·5 Standing still for a long time (> 15 min ) without discomfort 50 3·9 7·1 1·67 (1·36, 2·05) 90 15·8 28·7 Walking short distances (< 20 min ) without discomfort 50 1·9 4·4 2·00 (1·65, 2·42) 90 8·2 19·1 Walking long distances (> 20 min) 50 4·5 8·0 1·76 (1·45, 2·14) 90 15·2 27·1 Having a bath or shower 50 5·4 4·9 0·85 (0·70, 1·03) 90 11·4 10·3 Driving a car 50 4·1 7·0 1·78 (1·45, 2·19) 90 12·4 21·1 Participation items Doing housework 50 2·1 4·5 2·10 (1·72, 2·56) 90 7·3 15·7 Looking after children 50 1·2 3·5 2·20 (1·61, 3·00) 90 6·2 17·9 Wearing clothes that show the legs 50 12·4 12·8 1·03 (0·78, 1·35) 90 56·6 58·7 Partial return to normal work/employment 50 4·4 9·9 2·16 (1·72, 2·72) 90 15·4 34·2 Full return to normal work/employment 50 4·8 11·7 2·56 (2·05, 3·21) 90 14·9 36·2 Going out socially 50 7·1 9·3 1·29 (1·06, 1·57) 90 25·8 34·0 Sporting activity or exercise 50 15·7 21·8 1·33 (1·05, 1·68) 90 62·6 86·7 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the surgery arm. UGFS, ultrasound‐guided foam sclerotherapy. \n\nBODY.RESULTS.ENDOVENOUS LASER ABLATION :\nParticipants randomized to EVLA recalled being able to carry out 13 of the 15 behaviours significantly more quickly than those randomized to surgery (Table \n3). Return to 'having a bath or shower' was quicker after surgery than after EVLA. There was no difference in time to return to the participation behaviour of 'wearing clothes that show the legs'. 
Table 3 Behavioural recovery: endovenous laser ablation versus surgery Proportion carrying out behaviour (%) Time until specified proportion of participants could carry out behaviour (days) * \n Hazard ratio † \n EVLA Surgery Activity items Bending the legs without discomfort 50 2·7 4·6 1·49 (1·19, 1·75) 90 12·6 21·3 Lifting heavy objects without discomfort 50 5·9 9·8 1·79 (1·39, 2·27) 90 20·5 34·5 Moving from standing to sitting without discomfort 50 2·2 3·7 1·56 (1·27, 1·96) 90 10·4 17·5 Standing still for a long time (> 15 min) without discomfort 50 4·8 7·1 1·41 (1·11, 1·79) 90 20·0 28·7 Walking short distances (< 20 min) without discomfort 50 3·0 4·4 1·30 (1·04, 1·61) 90 13·2 19·1 Walking long distances (> 20 min) 50 5·6 8·0 1·53 (1·06, 1·67) 90 19·8 27·1 Having a bath or shower 50 5·5 4·9 0·74 (0·59, 0·93) 90 12·8 10·3 Driving a car 50 4·4 7·0 1·82 (1·43, 2·33) 90 12·7 21·1 Participation items Doing housework 50 2·5 4·5 1·89 (1·49, 2·38) 90 8·4 15·7 Looking after children 50 1·9 3·5 1·61 (1·15, 2·27) 90 8·8 17·9 Wearing clothes that show the legs 50 14·6 12·8 0·97 (0·69, 1·35) 90 75·1 58·7 Partial return to normal work/employment 50 6·3 9·9 1·75 (1·33, 2·27) 90 21·1 34·2 Full return to normal work/employment 50 7·7 11·7 1·79 (1·37, 2·27) 90 23·5 36·2 Going out socially 50 6·9 9·3 1·41 (1·12, 1·75) 90 23·9 34·0 Sporting activity or exercise 50 14·2 21·8 1·47 (1·12, 1·92) 90 55·5 86·7 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the surgery arm. EVLA, endovenous laser ablation. There were no differences in the time taken to return to 11 of the 15 behaviours between participants randomized to EVLA and those randomized to UGFS (Table \n4). Return to 'walking short distances without discomfort', 'walking long distances', 'looking after children' and 'full return to normal work/employment' took longer for the EVLA group than the UGFS group. Following UGFS or EVLA only one‐third of the specific behaviours could be carried out by 50 per cent of participants by 3 days after treatment. 
Table 4 Behavioural recovery: endovenous laser ablation versus ultrasound‐guided foam sclerotherapy Proportion carrying out behaviour (%) Time until specified proportion of participants can carry out behaviour (days) * \n Hazard ratio † \n EVLA UGFS Activity items Bending the legs without discomfort 50 2·7 3·0 0·94 (0·75, 1·17) 90 12·6 14·1 Lifting heavy objects without discomfort 50 5·9 4·8 1·11 (0·87, 1·42) 90 20·5 16·9 Moving from standing to sitting without discomfort 50 2·2 1·9 1·12 (0·90, 1·40) 90 10·4 9·3 Standing still for a long time (> 15 min) without discomfort 50 4·8 3·9 1·14 (0·90, 1·44) 90 20·0 15·8 Walking short distances (< 20 min) without discomfort 50 3·0 1·9 1·48 (1·19, 1·84) 90 13·2 8·2 Walking long distances (> 20 min) 50 5·6 4·5 1·32 (1·05, 1·66) 90 19·8 15·2 Having a bath or shower 50 5·5 5·4 1·19 (0·96, 1·48) 90 12·8 11·4 Driving a car 50 4·4 4·1 0·95 (0·74, 1·21) 90 12·7 12·4 Participation items Doing housework 50 2·5 2·1 1·03 (0·82, 1·29) 90 8·4 7·3 Looking after children 50 1·9 1·2 1·45 (1·04, 2·02) 90 8·8 6·2 Wearing clothes that show the legs 50 14·6 12·4 1·17 (0·83, 1·64) 90 75·1 56·6 Partial return to normal work/employment 50 6·3 4·4 1·17 (0·89, 1·52) 90 21·1 15·4 Full return to normal work/employment 50 7·7 4·8 1·43 (1·11, 1·85) 90 23·5 14·9 Going out socially 50 6·9 7·1 0·88 (0·70, 1·10) 90 23·9 25·8 Sporting activity or exercise 50 14·2 15·7 0·80 (0·61, 1·04) 90 55·5 62·6 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the endovenous laser ablation (EVLA) arm. UGFS, ultrasound‐guided foam sclerotherapy. \n\nBODY.DISCUSSION:\nThis study showed that both UGFS and EVLA resulted in a more rapid recovery compared with surgery for 13 of the 15 behaviours. UGFS was superior to EVLA in terms of return to full time work, looking after children and walking (both short and long distances). Importantly, the specific behaviours assessed were shown to have a range of different recovery trajectories. Previous randomized clinical trials showed behavioural recovery to be more rapid following UGFS compared with surgery1, 2, but the benefit of EVLA over surgery was less clear2, 4, 8 . In this study, for all but two behaviours (wearing clothes that showed the legs and showering/bathing) the recovery was quicker following UGFS or EVLA compared with surgery. These findings may have arisen as a result of information contained in the study PIL, which recommended that compression hosiery was worn continuously for 10 days following UGFS or EVLA but for 4–5 days routinely after surgery. In the comparison between UGFS and EVLA, behavioural recovery was faster following UGFS for four of the 15 behaviours; there was no difference between the groups for the other behaviours. Two previous randomized trials2, 3 showed earlier return to 'normal activities' in patients undergoing UGFS compared with EVLA. Specifically, the present study showed a quicker return to full‐time work following UGFS, similar to the findings of Rasmussen and colleagues2. The median time taken to return to work following EVLA (7·7 days) was within the ranges reported2, 4, 5, 6, 7, 8. However, Rasmussen and colleagues2 reported earlier return to work after UGFS compared with the present study (median 2·9 versus 4·8 days respectively). 
A partial explanation of the difference between the two studies may be that, unlike the previous study, the present analysis did not correct for weekends. For other behaviours, the recalled recovery times following both UGFS and EVLA were longer than might be expected from the literature2, 3, 4, 5, 8. This may be explained by the timing of the questionnaire at 6 weeks, and thus it is the nature of the differences between treatment groups rather than the absolute timings taken to return to these activities that the authors wish to highlight in this paper. The extent of this overall delay in recovery is hard to justify, particularly in light of the standard information and advice given in the study PIL. There may have been a number of external influences affecting participants' recollection of their recovery, including misinformation and fear. Although attitudes to recovery and return to normal behaviours have changed in secondary care, this may not have filtered into primary care or 'public knowledge'. Fear of activity or fear of pain caused by activity has been documented following surgery for other conditions17, 18. It is possible that some people undergoing treatment for varicose veins experience similar fears, and this may limit or restrict their activity following treatment. With regard to return to work, there are clearly a number of additional factors that might play a role, such as a person's employment status (employed or self‐employed), the sickness benefits they are entitled to, the type of work they are employed to do, how long they are 'signed off' by the doctor, and the views of their employer on return to work after an operation. It should be noted that this study distinguished between partial and full return to work, and that no difference was noted in partial return to work following UGFS and EVLA. This finding may be of substantial importance to patients, their employers and the economy as a whole. The main strength of this study is that the behaviours investigated were based on systematic investigation of the recovery milestones that are important to patients following treatment for varicose veins. Hence, the findings are of personal importance from a patient perspective. Distinguishing between the behaviours that contribute to 'normal activity' helps build a profile of recovery that may be particularly useful for patients preparing for, or recovering from, treatment. Furthermore, the methodology used to develop the BRAVVO questionnaire could be used in other conditions to provide normative information about behavioural recovery that is relevant to patients. The BRAVVO questionnaire was pilot tested and found to be acceptable to patients, comprehensible and appropriate for self‐completion. Despite this, a potential weakness of the study is that the level of missing data in the BRAVVO questionnaire was higher for two of the questions. Further work to reformat or rephrase the questions or response options may help minimize levels of missing data. A further potential weakness is the choice of assessment time point (6 weeks after treatment). This may have compromised recall, particularly for behaviours that participants were able to return to a short time after treatment; however, any compromise in recall is likely to have affected the three treatment groups equally. Other study outcomes were assessed at 6 weeks, and behavioural recovery was assessed at the same time point to minimize participant burden. 
Further work is required to determine the optimal timing(s) of this questionnaire. Given that the median time to return to the behaviour was less than 14 days for 13 of the behaviours, and up to 22 days for the other two (wearing clothes that show the legs, sporting activity or exercise), the use of the questionnaire at approximately 2–3 weeks would seem appropriate.\n\n**Question:** Compared to surgery what was the result of Ultrasound‐guided foam sclerotherapy on Looking after children?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
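Each prompt in these rows quotes the same data-collection rule for turning a BRAVVO questionnaire answer into an interval-censored number of days: a day-of-procedure answer becomes the interval 0–1, a week answer spans the preceding week, and a participant who had not returned to a behaviour they normally perform is right-censored at 42 days. The sketch below implements that conversion; treating a whole-day answer as censored within the preceding day is an added assumption (the article does not spell out that case), and the function and argument names are illustrative.

```python
import math


def response_to_interval(value, unit, returned=True, performs_behaviour=True):
    """Convert a BRAVVO-style response into an interval-censored observation
    (lower, upper) in days, following the rules quoted in the article.

    Returns None for participants who do not normally perform the behaviour,
    since they are excluded from the analysis of that item.
    """
    if not performs_behaviour:
        return None                          # excluded from analysis
    if not returned:
        return (42.0, math.inf)              # right-censored at 42 days
    if unit == "days":
        if value == 0:
            return (0.0, 1.0)                # returned on the day of the procedure
        # Assumption: a whole-day answer is censored within the preceding day.
        return (float(value) - 1.0, float(value))
    if unit == "weeks":
        # e.g. "5 weeks" -> somewhere between 28 and 35 days
        return (7.0 * (value - 1), 7.0 * value)
    raise ValueError(f"unknown unit: {unit!r}")


print(response_to_interval(0, "days"))                      # (0.0, 1.0)
print(response_to_interval(5, "weeks"))                     # (28.0, 35.0)
print(response_to_interval(None, "days", returned=False))   # (42.0, inf)
```

These (lower, upper) pairs are the inputs the article's interval-censored survival models would consume for each behaviour item.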
598
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Behavioural recovery after treatment for varicose veins\n\n ABSTRACT.BACKGROUND:\nThe aim of this study was to assess behavioural recovery from the patient's perspective as a prespecified secondary outcome in a multicentre parallel‐group randomized clinical trial comparing ultrasound‐guided foam sclerotherapy (UGFS), endovenous laser ablation (EVLA) and surgery for the treatment of primary varicose veins.\n\nABSTRACT.METHODS:\nParticipants were recruited from 11 UK sites as part of the CLASS trial, a randomized trial of UGFS, EVLA or surgery for varicose veins. Patients were followed up 6 weeks after treatment and asked to complete the Behavioural Recovery After treatment for Varicose Veins (BRAVVO) questionnaire. This is a 15‐item instrument that covers eight activity behaviours (tasks or actions an individual is capable of doing in an idealized situation) and seven participation behaviours (what the individual does in an everyday, real‐world situation) that were identified to be important from the patient's perspective.\n\nABSTRACT.RESULTS:\nA total of 798 participants were recruited. Both UGFS and EVLA resulted in a significantly quicker recovery compared with surgery for 13 of the 15 behaviours assessed. UGFS was superior to EVLA in terms of return to full‐time work (hazard ratio 1·43, 95 per cent c.i. 1·11 to 1·85), looking after children (1·45, 1·04 to 2·02) and walks of short (1·48, 1·19 to 1·84) and longer (1·32, 1·05 to 1·66) duration.\n\nABSTRACT.CONCLUSION:\nBoth UGFS and EVLA resulted in more rapid recovery than surgery, and UGFS was superior to EVLA for one‐quarter of the behaviours assessed. The BRAVVO questionnaire has the potential to provide important meaningful information to patients about their early recovery and what they may expect to be able to achieve after treatment.\n\nBODY.INTRODUCTION:\nMinimally invasive treatments for varicose veins such as ultrasound‐guided foam sclerotherapy (UGFS) and thermal ablation techniques have become widely used alternatives to surgery for the treatment of varicose veins. One of the advantages of these techniques is the reported quicker return to normal activities, particularly following UGFS1, 2, 3. However, it is unclear whether thermal ablation, in particular endovenous laser ablation (EVLA), is also associated with a clinically significant quicker return to normal activities compared with surgery; some studies4, 5 have shown an earlier return and others2, 6, 7, 8 no difference. Until recently, there was no standard means of assessing recovery from the patient's perspective. This led to the use of varying definitions such as return to 'normal activities', 'full activity', 'daily activity' or 'basic physical activities' and/or 'return to work' in previous studies. 
This lack of standardization led the authors to develop a 15‐item questionnaire to assess distinct aspects of normal activities that were identified as important by patients9 – the Behavioural Recovery After treatment for Varicose Veins (BRAVVO) questionnaire. This paper reports behavioural recovery results from a multicentre parallel‐group randomized clinical trial (CLASS, Comparison of LAser, Surgery and foam Sclerotherapy) that compared the clinical efficacy and cost‐effectiveness of three treatment modalities: UGFS, EVLA with delayed foam sclerotherapy to residual varicosities if required, and surgery. Behavioural recovery was one of the prespecified secondary outcomes of the CLASS trial. The clinical and cost‐effectiveness results have been reported elsewhere10, 11.\n\nBODY.METHODS:\nPatients were recruited from 11 centres in the UK between November 2008 and October 2012. This study (ISRCTN51995477) had research ethics committee and Medicines and Healthcare products Regulatory Authority approval. Eight centres randomized participants to one of three treatment options, and three centres offered only UGFS and surgery. Participants were randomized between the treatments with even allocation, using a minimization algorithm that included centre, age (less than 50 years, 50 years or more), sex, great saphenous vein (GSV) or small saphenous vein (SSV) reflux, and unilateral or bilateral disease. Inclusion criteria were: age over 18 years; primary unilateral or bilateral symptomatic varicose veins (Clinical Etiologic Anatomic Pathophysiological (CEAP) grade C2 or above); GSV and/or SSV involvement; and reflux exceeding 1 s on duplex ultrasonography. Exclusion criteria were: current deep vein thrombosis; acute superficial vein thrombosis; a GSV or SSV diameter smaller than 3 mm or larger than 15 mm; tortuous veins considered unsuitable for EVLA or stripping; and contraindications to UGFS or to general/regional anaesthesia that would be required for surgery.\n\nBODY.METHODS.TREATMENTS:\nThe treatments have been described in detail elsewhere9, 10. For UGFS, foam was produced using the Tessari technique12 using a ratio of 0·5 ml sodium tetradecyl sulphate to 1·5 ml air (3 per cent for GSV/SSV truncal veins, 1 per cent for varicosities; maximum 12 ml foam per session). EVLA of GSVs/SSVs was performed under local anaesthetic, and patients were offered UGFS to any residual varicosities at 6‐week follow‐up if required, with the exception of one centre that performed concurrent phlebectomies. Surgery in the form of proximal GSV/SSV ligation and stripping (all GSV) and concurrent phlebectomies was performed under general or regional anaesthetic as a day‐case procedure. Compression stockings were applied after all three treatments.\n\nBODY.METHODS.POST‐TREATMENT ACTIVITY:\nAll participants were given a study patient information leaflet (PIL), which recommended a return to all normal activities as soon as they were able, but that strenuous activity/contact sport should be avoided for 1–2 weeks. The PIL specifically stated that following EVLA or UGFS 'most people are able to return to work within 2–3 days of treatment, but some people go back the following day or even the same day', and that following surgery 'people can return to office or sedentary work after 2–3 days; and that most people will be back at work within a week after surgery to one leg and 2 weeks after surgery to both legs; but there is no reason to remain off work as long if it can be managed with reasonable comfort'. 
Participants undergoing UGFS or EVLA were advised to wear compression stocking for 10 days constantly (day and night). Those in the surgery group were advised that bandages would be removed the day after operation, following which they should wear a stocking for 10 days, but that it was reasonable to remove the stocking after 4 or 5 days, providing that they were active.\n\nBODY.METHODS.DATA COLLECTION:\nThe participants were asked to complete the BRAVVO questionnaire along with other study questionnaires (Aberdeen Varicose Vein Questionnaire, EQ‐5DTM (EuroQoL, Rotterdam, The Netherlands) and Short Form 36 (QualityMetric, Lincoln, Rhode Island, USA)) at the 6‐week follow‐up appointment. Participants who failed to attend the 6‐week appointment were sent the questionnaire to complete at home. The BRAVVO questionnaire was developed as an instrument to assess the activity and participation components of the World Health Organization International Classification of Disability and Function model13. Variation in activity and participation is not fully explained by impairment and so these constructs are important additional indicators of health outcome. An interview study involving 17 patients who had recently undergone varicose vein treatment was carried out to identify normal activities and 'milestone' behaviours to incorporate into the questionnaire. In addition to sampling from the three treatment options, diversity sampling was used in an attempt to gain a mix of sex, age and rural–urban location. Seventeen interview transcripts were content‐analysed in four stages to identify appropriate items to include in a questionnaire. Full details of this process have been published previously9. The BRAVVO questionnaire assesses the time taken for patients to return to performing 15 behaviours: eight 'activity' behaviours (tasks or actions an individual is capable of doing in an idealized situation) and seven 'participation' behaviours (what the individual does in an everyday, real‐world situation) that were identified to be important from the patient's perspective9. Fig. 1 shows the question layout. Figure 1Question layoutBJS-10081-FIG-0001-c\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nData from the BRAVVO questionnaire were analysed within an interval‐censored time‐to‐event framework using flexible parametric survival models14. For each behaviour item, each participant's response was converted into the number of days to return to that behaviour. If a participant indicated that return to the behaviour was on the day of the procedure, this was assumed to be interval‐censored between day 0 and day 1. If a participant indicated return to the behaviour was after a number of weeks, not days, this was assumed to be interval‐censored between the previous week and the week indicated. For example, if a participant reported 5 weeks, it was assumed that the return to the behaviour took place between 28 and 35 days. A participant who indicated that they had not returned to a behaviour that they usually performed was right‐censored at 42 days. Participants who indicated that they did not normally perform a specific behaviour were not included in analysis of that behaviour. No missing data were imputed. Data are reported as the number of days for 50 and 90 per cent of participants to return to each behaviour, estimated from the parametric survival models (the 50 per cent value represents the median time to return to this behaviour). 
Extrapolation beyond the 42‐day cut‐off was performed for behaviours where 90 per cent of participants had not returned to the behaviour by 42 days. Treatment effects are presented as hazard ratios with associated 95 per cent c.i. All analyses were carried out in Stata® 1215. Flexible parametric survival models were fitted using the stpm package16.\n\nBODY.RESULTS:\nSeven hundred and ninety‐eight participants were recruited, of whom 13 were ineligible (for example because they had recurrent veins or veins larger than 15 mm in diameter) after randomization and were considered postrandomization exclusions (Fig. \n2). The groups were well balanced in terms of demographic characteristics at baseline, but there was an increased incidence of deep venous reflux in the foam group compared with the surgery group (P = 0·005) (Table \n1). Of the 670 participants who completed the 6‐week questionnaire, 655 completed at least one of the BRAVVO questions. Completion rates were slightly lower for the questions about going out socially (74·8 per cent) and sporting activity (66·0 per cent), which may not have been relevant to all participants. For all behaviours, except wearing clothes that show the leg, going out socially and sporting activities, over 95 per cent of participants had returned to normal behaviour within 6 weeks of intervention. Figure 2CONSORT diagram for the trial. Reasons for postrandomization exclusion included: recurrent varicose veins and veins larger than 15 mm. Reasons for withdrawal from follow‐up included: patient decided not to proceed with treatment (and also declined follow‐up), declined follow‐up after treatment or did not wish to complete questionnaires. UGFS, ultrasound‐guided foam sclerotherapy; EVLA, endovenous laser ablationBJS-10081-FIG-0002-c Table 1 Demographic details at recruitment EVLA UGFS Surgery ( n = 210) ( n = 286) ( n = 289) Age (years) * \n 49·7 (18–80) 49·0 (19–78) 49·2 (22–85) Sex ratio (F : M) 120 : 90 162 : 124 163 : 126 Body mass index (kg/m 2 ) * \n 27·0 (17–42) 27·1 (17–44) 27·7 (17–44) Unilateral disease 153 (72·9) 215 (75·2) 196 (67·8) Employment status Self‐employed 21 (10·2) 37 (13·0) 29 (10·3) Employed 120 (58·3) 169 (59·3) 179 (63·5) Other 65 (31·6) 79 (27·7) 74 (26·2) Unknown 4 1 7 Saphenous vein involvement Great saphenous 182 (86·7) 232 (81·1) 239 (82·7) Small saphenous 14 (6·7) 21 (7·3) 21 (7·3) Great and small saphenous 14 (6·7) 33 (11·5) 29 (10·0) Deep vein reflux 28 of 205 (13·7) 47 of 280 (16·8) 25 of 282 (8·9) CEAP classification C2, varicose veins over 3 mm 113 (54·1) 169 (59·1) 147 (51·2) C3, oedema 28 (13·4) 35 (12·2) 39 (13·6) C4, skin/subcutaneous changes 56 (26·8) 74 (25·9) 90 (31·4) C5/C6, healed/active venous ulcer 12 (5·7) 8 (2·8) 11 (3·8) Unknown 1 0 2 Values in parentheses are percentages unless indicated otherwise; * values are mean (range). EVLA, endovenous laser ablation; UGFS, ultrasound‐guided foam sclerotherapy; CEAP, Clinical Etiologic Anatomic Pathophysiologic. \n\nBODY.RESULTS.ULTRASOUND‐GUIDED FOAM SCLEROTHERAPY :\nParticipants randomized to UGFS recalled being able to carry out 13 of the 15 behaviours significantly more quickly than those randomized to surgery (Table \n2\n). The two behaviours for which there was no evidence of a difference in the time to recover between the trial arms were 'having a bath or shower' and 'wearing clothes that show the legs'. In general, the median time to return to the activity behaviours was 5 days or less for those randomized to UGFS and up to 9 days for those randomized to surgery. 
In both groups, there was greater variation in the median time to return to the participation behaviours than the activity behaviours. Table 2 Behavioural recovery: ultrasound‐guided foam sclerotherapy versus surgery Proportion carrying out behaviour (%) Time until specified proportion of participants could carry out behaviour (days) * \n Hazard ratio † \n UGFS Surgery Activity items Bending the legs without discomfort 50 3·0 4·6 1·38 (1·14, 1·67) 90 14·1 21·3 Lifting heavy objects without discomfort 50 4·8 9·8 1·97 (1·59, 2·44) 90 16·9 34·5 Moving from standing to sitting without discomfort 50 1·9 3·7 1·63 (1·35, 1·97) 90 9·3 17·5 Standing still for a long time (> 15 min ) without discomfort 50 3·9 7·1 1·67 (1·36, 2·05) 90 15·8 28·7 Walking short distances (< 20 min ) without discomfort 50 1·9 4·4 2·00 (1·65, 2·42) 90 8·2 19·1 Walking long distances (> 20 min) 50 4·5 8·0 1·76 (1·45, 2·14) 90 15·2 27·1 Having a bath or shower 50 5·4 4·9 0·85 (0·70, 1·03) 90 11·4 10·3 Driving a car 50 4·1 7·0 1·78 (1·45, 2·19) 90 12·4 21·1 Participation items Doing housework 50 2·1 4·5 2·10 (1·72, 2·56) 90 7·3 15·7 Looking after children 50 1·2 3·5 2·20 (1·61, 3·00) 90 6·2 17·9 Wearing clothes that show the legs 50 12·4 12·8 1·03 (0·78, 1·35) 90 56·6 58·7 Partial return to normal work/employment 50 4·4 9·9 2·16 (1·72, 2·72) 90 15·4 34·2 Full return to normal work/employment 50 4·8 11·7 2·56 (2·05, 3·21) 90 14·9 36·2 Going out socially 50 7·1 9·3 1·29 (1·06, 1·57) 90 25·8 34·0 Sporting activity or exercise 50 15·7 21·8 1·33 (1·05, 1·68) 90 62·6 86·7 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the surgery arm. UGFS, ultrasound‐guided foam sclerotherapy. \n\nBODY.RESULTS.ENDOVENOUS LASER ABLATION :\nParticipants randomized to EVLA recalled being able to carry out 13 of the 15 behaviours significantly more quickly than those randomized to surgery (Table \n3). Return to 'having a bath or shower' was quicker after surgery than after EVLA. There was no difference in time to return to the participation behaviour of 'wearing clothes that show the legs'. 
Table 3 Behavioural recovery: endovenous laser ablation versus surgery Proportion carrying out behaviour (%) Time until specified proportion of participants could carry out behaviour (days) * \n Hazard ratio † \n EVLA Surgery Activity items Bending the legs without discomfort 50 2·7 4·6 1·49 (1·19, 1·75) 90 12·6 21·3 Lifting heavy objects without discomfort 50 5·9 9·8 1·79 (1·39, 2·27) 90 20·5 34·5 Moving from standing to sitting without discomfort 50 2·2 3·7 1·56 (1·27, 1·96) 90 10·4 17·5 Standing still for a long time (> 15 min) without discomfort 50 4·8 7·1 1·41 (1·11, 1·79) 90 20·0 28·7 Walking short distances (< 20 min) without discomfort 50 3·0 4·4 1·30 (1·04, 1·61) 90 13·2 19·1 Walking long distances (> 20 min) 50 5·6 8·0 1·53 (1·06, 1·67) 90 19·8 27·1 Having a bath or shower 50 5·5 4·9 0·74 (0·59, 0·93) 90 12·8 10·3 Driving a car 50 4·4 7·0 1·82 (1·43, 2·33) 90 12·7 21·1 Participation items Doing housework 50 2·5 4·5 1·89 (1·49, 2·38) 90 8·4 15·7 Looking after children 50 1·9 3·5 1·61 (1·15, 2·27) 90 8·8 17·9 Wearing clothes that show the legs 50 14·6 12·8 0·97 (0·69, 1·35) 90 75·1 58·7 Partial return to normal work/employment 50 6·3 9·9 1·75 (1·33, 2·27) 90 21·1 34·2 Full return to normal work/employment 50 7·7 11·7 1·79 (1·37, 2·27) 90 23·5 36·2 Going out socially 50 6·9 9·3 1·41 (1·12, 1·75) 90 23·9 34·0 Sporting activity or exercise 50 14·2 21·8 1·47 (1·12, 1·92) 90 55·5 86·7 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the surgery arm. EVLA, endovenous laser ablation. There were no differences in the time taken to return to 11 of the 15 behaviours between participants randomized to EVLA and those randomized to UGFS (Table \n4). Return to 'walking short distances without discomfort', 'walking long distances', 'looking after children' and 'full return to normal work/employment' took longer for the EVLA group than the UGFS group. Following UGFS or EVLA only one‐third of the specific behaviours could be carried out by 50 per cent of participants by 3 days after treatment. 
Table 4 Behavioural recovery: endovenous laser ablation versus ultrasound‐guided foam sclerotherapy Proportion carrying out behaviour (%) Time until specified proportion of participants can carry out behaviour (days) * \n Hazard ratio † \n EVLA UGFS Activity items Bending the legs without discomfort 50 2·7 3·0 0·94 (0·75, 1·17) 90 12·6 14·1 Lifting heavy objects without discomfort 50 5·9 4·8 1·11 (0·87, 1·42) 90 20·5 16·9 Moving from standing to sitting without discomfort 50 2·2 1·9 1·12 (0·90, 1·40) 90 10·4 9·3 Standing still for a long time (> 15 min) without discomfort 50 4·8 3·9 1·14 (0·90, 1·44) 90 20·0 15·8 Walking short distances (< 20 min) without discomfort 50 3·0 1·9 1·48 (1·19, 1·84) 90 13·2 8·2 Walking long distances (> 20 min) 50 5·6 4·5 1·32 (1·05, 1·66) 90 19·8 15·2 Having a bath or shower 50 5·5 5·4 1·19 (0·96, 1·48) 90 12·8 11·4 Driving a car 50 4·4 4·1 0·95 (0·74, 1·21) 90 12·7 12·4 Participation items Doing housework 50 2·5 2·1 1·03 (0·82, 1·29) 90 8·4 7·3 Looking after children 50 1·9 1·2 1·45 (1·04, 2·02) 90 8·8 6·2 Wearing clothes that show the legs 50 14·6 12·4 1·17 (0·83, 1·64) 90 75·1 56·6 Partial return to normal work/employment 50 6·3 4·4 1·17 (0·89, 1·52) 90 21·1 15·4 Full return to normal work/employment 50 7·7 4·8 1·43 (1·11, 1·85) 90 23·5 14·9 Going out socially 50 6·9 7·1 0·88 (0·70, 1·10) 90 23·9 25·8 Sporting activity or exercise 50 14·2 15·7 0·80 (0·61, 1·04) 90 55·5 62·6 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the endovenous laser ablation (EVLA) arm. UGFS, ultrasound‐guided foam sclerotherapy. \n\nBODY.DISCUSSION:\nThis study showed that both UGFS and EVLA resulted in a more rapid recovery compared with surgery for 13 of the 15 behaviours. UGFS was superior to EVLA in terms of return to full time work, looking after children and walking (both short and long distances). Importantly, the specific behaviours assessed were shown to have a range of different recovery trajectories. Previous randomized clinical trials showed behavioural recovery to be more rapid following UGFS compared with surgery1, 2, but the benefit of EVLA over surgery was less clear2, 4, 8 . In this study, for all but two behaviours (wearing clothes that showed the legs and showering/bathing) the recovery was quicker following UGFS or EVLA compared with surgery. These findings may have arisen as a result of information contained in the study PIL, which recommended that compression hosiery was worn continuously for 10 days following UGFS or EVLA but for 4–5 days routinely after surgery. In the comparison between UGFS and EVLA, behavioural recovery was faster following UGFS for four of the 15 behaviours; there was no difference between the groups for the other behaviours. Two previous randomized trials2, 3 showed earlier return to 'normal activities' in patients undergoing UGFS compared with EVLA. Specifically, the present study showed a quicker return to full‐time work following UGFS, similar to the findings of Rasmussen and colleagues2. The median time taken to return to work following EVLA (7·7 days) was within the ranges reported2, 4, 5, 6, 7, 8. However, Rasmussen and colleagues2 reported earlier return to work after UGFS compared with the present study (median 2·9 versus 4·8 days respectively). 
A partial explanation of the difference between the two studies may be that, unlike the previous study, the present analysis did not correct for weekends. For other behaviours, the recalled recovery times following both UGFS and EVLA were longer than might be expected from the literature2, 3, 4, 5, 8. This may be explained by the timing of the questionnaire at 6 weeks, and thus it is the nature of the differences between treatment groups rather than the absolute timings taken to return to these activities that the authors wish to highlight in this paper. The extent of this overall delay in recovery is hard to justify, particularly in light of the standard information and advice given in the study PIL. There may have been a number of external influences affecting participants' recollection of their recovery, including misinformation and fear. Although attitudes to recovery and return to normal behaviours have changed in secondary care, this may not have filtered into primary care or 'public knowledge'. Fear of activity or fear of pain caused by activity has been documented following surgery for other conditions17, 18. It is possible that some people undergoing treatment for varicose veins experience similar fears, and this may limit or restrict their activity following treatment. With regard to return to work, there are clearly a number of additional factors that might play a role, such as a person's employment status (employed or self‐employed), the sickness benefits they are entitled to, the type of work they are employed to do, how long they are 'signed off' by the doctor, and the views of their employer on return to work after an operation. It should be noted that this study distinguished between partial and full return to work, and that no difference was noted in partial return to work following UGFS and EVLA. This finding may be of substantial importance to patients, their employers and the economy as a whole. The main strength of this study is that the behaviours investigated were based on systematic investigation of the recovery milestones that are important to patients following treatment for varicose veins. Hence, the findings are of personal importance from a patient perspective. Distinguishing between the behaviours that contribute to 'normal activity' helps build a profile of recovery that may be particularly useful for patients preparing for, or recovering from, treatment. Furthermore, the methodology used to develop the BRAVVO questionnaire could be used in other conditions to provide normative information about behavioural recovery that is relevant to patients. The BRAVVO questionnaire was pilot tested and found to be acceptable to patients, comprehensible and appropriate for self‐completion. Despite this, a potential weakness of the study is that the level of missing data in the BRAVVO questionnaire was higher for two of the questions. Further work to reformat or rephrase the questions or response options may help minimize levels of missing data. A further potential weakness is the choice of assessment time point (6 weeks after treatment). This may have compromised recall, particularly for behaviours that participants were able to return to a short time after treatment; however, any compromise in recall is likely to have affected the three treatment groups equally. Other study outcomes were assessed at 6 weeks, and behavioural recovery was assessed at the same time point to minimize participant burden. 
Further work is required to determine the optimal timing(s) of this questionnaire. Given that the median time to return to the behaviour was less than 14 days for 13 of the behaviours, and up to 22 days for the other two (wearing clothes that show the legs, sporting activity or exercise), the use of the questionnaire at approximately 2–3 weeks would seem appropriate.\n\n**Question:** Compared to Ultrasound‐guided foam sclerotherapy what was the result of Endovenous laser ablation on Bending the legs without discomfort, lifting heavy objects without discomfort, Moving from standing to sitting without discomfort, Standing still for a long time (> 15 min) without discomfort, Driving a car, Doing housework, Partial return to normal work/employment, Going out socially, Sporting activity or exercise?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
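The statistical-analysis section repeated in these rows fits flexible parametric survival models to the interval-censored recovery times and reports the days by which 50 and 90 per cent of participants had returned to each behaviour. The trial used Stata's stpm (Royston–Parmar) models; the sketch below substitutes a plain Weibull likelihood fitted with SciPy, purely to illustrate how interval-censored data yield those 50 and 90 per cent times. The synthetic data, starting values and the Weibull form are illustrative assumptions, not the trial's method.

```python
import numpy as np
from scipy.optimize import minimize


def weibull_sf(t, shape, scale):
    """Weibull survival function S(t) = exp(-(t / scale) ** shape)."""
    t = np.asarray(t, dtype=float)
    return np.exp(-np.power(t / scale, shape))


def neg_log_likelihood(log_params, lower, upper):
    """Interval-censored Weibull likelihood: a finite interval (lower, upper]
    contributes S(lower) - S(upper); a right-censored row (upper = inf)
    contributes S(lower), since S(inf) = 0."""
    shape, scale = np.exp(log_params)        # log-parameterised to stay positive
    prob = weibull_sf(lower, shape, scale) - weibull_sf(upper, shape, scale)
    return -np.sum(np.log(np.clip(prob, 1e-12, None)))


def fit_weibull_interval_censored(intervals, x0=(1.5, 10.0)):
    lower = np.array([lo for lo, _ in intervals], dtype=float)
    upper = np.array([hi for _, hi in intervals], dtype=float)
    res = minimize(neg_log_likelihood, np.log(x0), args=(lower, upper),
                   method="Nelder-Mead")
    return tuple(np.exp(res.x))              # (shape, scale)


def time_until_fraction_returned(frac, shape, scale):
    """Smallest t with 1 - S(t) = frac; frac=0.5 gives the median return time."""
    return scale * (-np.log(1.0 - frac)) ** (1.0 / shape)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_days = rng.weibull(1.4, size=300) * 6.0      # synthetic days-to-return
    intervals = []
    for t in true_days:
        if t > 42:
            intervals.append((42.0, np.inf))          # right-censored at 42 days
        else:
            lo = float(np.floor(t))
            intervals.append((lo, lo + 1.0))          # whole-day interval (lo, lo + 1]
    shape, scale = fit_weibull_interval_censored(intervals)
    print("days until 50% returned:", round(time_until_fraction_returned(0.5, shape, scale), 1))
    print("days until 90% returned:", round(time_until_fraction_returned(0.9, shape, scale), 1))
```

The hazard ratios quoted in the tables would come from a model that also includes a treatment covariate; that term is omitted here to keep the sketch focused on recovering the 50 and 90 per cent return times from interval-censored data.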
605
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Premedication with midazolam prior to cesarean delivery in preeclamptic parturients: A randomized controlled trial\n\n ABSTRACT.BACKGROUND:\nAnxiety is a concern in obstetrics, especially in preeclamptic mothers. Sedation is not commonly used in parturients for fear of adverse neonatal effect. We investigated maternal and neonatal outcome of midazolam as an adjuvant to spinal anesthesia for elective cesarean delivery.\n\nABSTRACT.METHODS:\nA prospective randomized controlled trial, in which eighty preeclamptic parturients received either an intravenous dose of 0.035 mg/kg of midazolam or an equal volume of normal saline, 30 min before spinal anesthesia. Maternal anxiety was assessed using Amsterdam Preoperative Anxiety and Information Scale (APAIS); postoperative maternal satisfaction was assessed using Maternal Satisfaction Scale for Cesarean Section (MSSCS). Newborns were assessed using Apgar score, Neonatal Neurologic and Adaptive Capacity Score (NACS), and umbilical artery blood gases.\n\nABSTRACT.RESULTS:\nMothers premedicated with midazolam showed a lower level of preoperative anxiety and a higher degree of postoperative satisfaction than the control group. There were no between-group differences regarding the neonatal outcome.\n\nABSTRACT.CONCLUSION:\nPreeclamptic parturients premedicated with midazolam (0.035 mg/kg) before spinal anesthesia have lower anxiety and higher postoperative satisfaction levels, with no adverse effects on the newborns.\n\nBODY.INTRODUCTION:\nNeuraxial anesthesia is progressively used for elective cesarean delivery as it is associated with the best maternal and neonatal outcomes. In preeclampsia, spinal anesthesia provides hemodynamic stability and best risk-benefit outcome; it avoids the detrimental effects of general anesthesia on maternal blood pressure and the risk of difficult intubation.[1] By the mid-1990s, clinical trials demonstrated the safety of neuraxial anesthesia as an accepted alternative to general anesthesia for preeclamptic patients.[234] Recently, a focused review confirmed this fact.[5] One study has reported that 60–80% of patients suffers from preoperative anxiety.[6] Some studies showed a clear correlation between anxiety and preeclampsia.[78] Anxiety is a particular concern in obstetrics, as it may lead to autonomic vasoconstriction in the uterine arteries, which may induce fetal distress.[910] Parturients may endure a great level of anxiety and discomfort during the placement of the spinal needle; a condition could be worsened by preeclampsia. Sedation for neuraxial blockade in pregnant patients remained controversial, based on its potential adverse effects on the fetus, such as decreased motor tone and respiratory compromise. 
In the 1960s, several case reports showed decreased motor tone in newborns related to maternal diazepam.[1112] In contrast, midazolam is three to four times more potent than diazepam and is used mainly for maternal sedation. Although it crosses the placental barrier, in small doses, it does not cause any adverse effects in the neonate.[13] However, when used in high doses of 0.3 mg/kg for the induction of general anesthesia, the umbilical vein to maternal vein ratio was high (0.96), and the elimination half-life in the neonate was 6.3 h.[14] In the 1980s, a few reports discussed the intravenous (iv) administration of midazolam during pregnancy without providing much information about its use close to delivery.[1516] More recent studies investigated the sedative effect of midazolam with or without narcotics on normal parturients.[171819] However, the impact of midazolam as a sedative on preeclamptics has not been sufficiently studied prospectively. In this concern, we decided to study the maternal and neonatal outcomes of a sedative dose of midazolam in these cases.\n\nBODY.INTRODUCTION.AIMS:\nThis work aimed to determine the effect of premedication with midazolam on preoperative anxiety, perioperative hemodynamics, and postoperative satisfaction of preeclamptic parturients scheduled for elective cesarean delivery under spinal anesthesia. It was also designed to observe the adverse effects of midazolam on the neonatal outcome.\n\nBODY.METHODS:\nAfter approval of our Institutional Ethical Committee and obtaining informed written consents from all parturients included in this study, this prospective randomized controlled study was conducted on eighty full-term, of American Society of Anesthesiologists (ASA) Class I or II, 18–40-year-old parturients presenting with mild preeclampsia as defined by the American College of Obstetricians and Gynecologists.[20] They were scheduled for elective cesarean delivery under spinal anesthesia after excluding the contraindications of technique. This study was performed in our university hospital from October 1, 2014, to September 30, 2015. Emergency cases, patients with evidence of intrauterine growth restriction or fetal compromise, patients with severe preeclampsia, thrombocytopenia, HELLP syndrome (hemolysis, elevated liver enzymes, and low platelet levels), parturients receiving magnesium therapy, or with a body mass index >30 were excluded from the study. History of cardiac, pulmonary, hepatic, renal, neuropsychiatric or other diseases or substance abuse also were excluded from the study. All patients had fetal heart monitoring preoperatively to exclude fetal heart rate (HR) abnormalities. Preoperatively, a complete blood count, prothrombin time, liver and kidney function tests were done and recorded for each participant. All patients were assessed clinically and investigated for the exclusion of any of the above-mentioned contraindications. All patients were randomly allocated into two groups, midazolam group (Group M) and normal saline control group (Group S), using a computer-generated randomization list and a sealed envelope technique (n = 40/group). The study solution (midazolam or normal saline) was prepared in a 2 mL volume, by an anesthesiologist not involved in the care of the woman or collecting the data for the study. Another anesthesiologist who was blinded to the study provided perioperative care and collected perioperative data. 
All patients were booked on a morning list, instructed for the same preoperative fasting protocol and were educated as to how to complete the questionnaires required for the study. On arrival of the patients to the preoperative waiting room, routine monitoring (Electrocardiography, pulse oximeter probe, and a noninvasive blood pressure cuff) were applied, and wide-bore venous access was established for the infusion of lactated Ringer's solution. Thirty minutes before the induction of spinal anesthesia, group (M) was given iv premedication with midazolam (0.035 mg/kg) in 2 mL solution, whereas group (S) was given an equal quantity of normal saline. The Amsterdam Preoperative Anxiety and Information Scale (APAIS) was used in this study for the objective analysis of anxiety in all patients [Figure 1]. It consists of six questions receiving a score between one (none) and five (most), with a total score of 30, investigating patients' concerns and anxieties. It is short, reliable, and easy to administer. Figure 1Amsterdam Preoperative Anxiety and Information Scale APAIS was applied twice by the same anesthesia assistant in a preoperative visit to all patients in their rooms 1⁄2 h before their transfer to the operating theater (A1), and 5 min after the administration of the study drug (A2). Twenty minutes before the induction of spinal anesthesia, all patients received a rapid iv infusion of 500 mL of lactated Ringer's solution as a preload, followed by a replacement rate of 15 mL/kg/h. For spinal anesthesia, 2.5 mL of intrathecal hyperbaric bupivacaine (0.5%) was injected using a 25-gauge spinal needle with patients in the lateral decubitus position at the L 3/4 interspace, under strict aseptic precautions. Parturients were then immediately placed in the tilted supine position. The level of sensory block was determined with cold and pinprick tests. Oxygen by nasal cannula (2 L/min) was applied to all patients. Urinary catheterization was performed, and surgery started when a sufficient level of sensory block (T4) was achieved. The maternal HR, systolic blood pressure (SBP), and diastolic blood pressure (DBP) noninvasive arterial pressure were recorded on arrival to the preoperative waiting room (T0), 5 min after injecting the tested drug (T1), during the insertion of spinal needle (TS), 2 min (T2), and 5 min (T5) after the induction of spinal anesthesia. Hypotension was defined as a drop of SBP <100 mmHg or a drop >20% from (T1), which was considered as the baseline value. SBP <90 mmHg was treated with ivboluses of ephedrine in increments of 10 mg. Tachycardia was defined as HR >100 and bradycardia when HR <60. When HR was lower than 50 beats/min, atropine 0.5 mg ivwas administered. Once the baby was delivered, the umbilical cord was double-clamped, and a blood sample from an umbilical artery (UA) was collected for blood gas analysis. Basic neonatal examination was performed, and Apgar scores were recorded at min 1 and 5, and the Neonatal Neurological and Adaptive Capacity Scoring (NACS) was measured and recorded at 15 min.[21] Postoperatively, patients were kept in the recovery room for 1 h and then sent to the ward. The Maternal Satisfaction Scale for cesarean section (MSSCS)[22] is a 22-item questionnaire specifically designed to evaluate maternal satisfaction with neuraxial anesthesia for elective cesarean delivery. Each item has a 7-point scale with minimum total score of 22 and maximum of 154, a higher score representing higher satisfaction. 
It was applied to all patients on the 1st postoperative day between 8:00 am and 1:00 pm. The development of seizures in cases of mild preeclampsia is extremely rare; however, if it took place, our protocol was to treat it with magnesium sulfate (loading dose of 4–6 g followed by continuous infusion of 1-2 g/h),[120] and the case was dropped out from the study. For sample size calculation, preoperative anxiety was taken as a primary parameter of interest. The power analysis is based on independent sample t-test and a 25% decrease of the APAIS score; the α value was 0.05 and the power (1-β) of the study was 0.80. Depending on the previous studies,[1823] it was calculated that a minimum of 36 patients would be required per group. However, we enrolled eighty patients (40/group) to allow for drop-outs. Data were analyzed using the SPSS statistics program (Version 16, SPSS Inc., Chicago, IL, USA). According to the type of data, they were represented as mean and standard deviation or frequencies and percentages. Comparisons of the two studied groups were performed using independent sample t-test or Mann–Whitney U-test as appropriate while repeated measures within the same group were compared using paired t-test. In all tests, results were considered statistically significant if P < 0.05.\n\nBODY.RESULTS:\nEighty cases were initially investigated, but five cases were excluded during the study; two cases in group (M) and one case in group (S) due to an inadequate spinal block, and one case in each group because of failure to collect umbilical arterial blood. In both groups, there were no significant differences regarding age, body weight, height, and spinal injection to delivery time as shown in Table 1. Table 1 Patients characteristics in both groups After administration of the tested drug (midazolam or normal saline), there was a significant reduction in APAIS readings (A2) in group (M); with a significant difference in (A2) between the two groups while there was no between-group difference before administration (A1) as shown in Table 2. Maternal Satisfaction Scale for cesarean section showed a significant level of maternal satisfaction in group (M) compared to group (S). There was no difference between the groups regarding the newborn Apgar or NACS scores as shown in Table 2. No significant differences were found between the two groups regarding the pH, PaCO2, PaO2, and base deficit in the umbilical arterial samples as listed in Table 3. Table 2 Amsterdam Preoperative Anxiety and Information Scale, Maternal Satisfaction Scale for cesarean section, Apgar score and neonatal neurologic and adaptive capacity score in both groups Table 3 Blood gas analysis of umbilical artery blood samples in both groups Regarding maternal hemodynamics, baseline HR, SBP, and DBP were comparable between the two groups before the administration of the tested drug. However, 5 min after injecting the tested drug, there was a significant decrease in all parameters in group (M) compared to group (S). During insertion of the spinal needle, there was a rise in HR and SBP in group (S) compared to group (M) while DBP was at a lower level in group (M). After the induction of spinal anesthesia and placing the patient into the tilted supine position, there was an increase in HR with some decrease in blood pressure parameters in both groups as shown in Tables 4–6. 
Table 4 Maternal heart rate in both groups Table 5 Systolic blood pressure in both groups Table 6 Diastolic blood pressure in both groups Hypotension occurred in three patients in the group (M) and one patient in the group (S), requiring a single bolus of ephedrine for each. No bradycardia was observed in any patient. Hemodynamic parameters then remained stable and comparable in both groups.\n\nBODY.DISCUSSION:\nThis study has shown a significant improvement of preoperative anxiety and postoperative satisfaction in parturients premedicated with midazolam compared to those in the control group, with no significant neonatal adverse effects. The study also documented improved maternal hemodynamics in the treated group before the administration of spinal anesthesia. However, there was no significant difference between groups regarding maternal hemodynamics throughout the rest of the procedure. The progressive increase in the number of cesarean sections[24] draws the attention to the psychological aspects of parturients, including their level of anxiety and perception of obstetric care they received.[25] Some studies tried to solve the problem of preoperative anxiety via nonpharmacological measures, such as the use of a preoperative information video in obstetric[26] and nonobstetric[27] patients, but their results were disappointing. However, other studies applied a preoperative information system and proved its effectiveness on the level of anxiety and postoperative satisfaction in nonobstetric patients.[282930] Midazolam is a commonly used sedative as it is fast-acting and of short duration, making it superior to other benzodiazepines. It is a lipophilic drug that can cross the placenta by passive diffusion. In large induction doses, midazolam and its metabolites cross the placenta in both animal[31] and human[14] studies. However, early in 1984, Kanto et al.[32] proved its excellent sedative effect following cesarean delivery performed under epidural anesthesia. They gave a relatively large iv dose (0.075 mg/kg) following baby removal, resulted in a rapid and marked sedative effect lasted for 2–3 h while patients were completely cooperative in the recovery room. Frölich et al.[19] gave a single iv dose of 0.02 mg/kg midazolam plus 1 μg/kg fentanyl to parturients undergoing cesarean delivery shortly before spinal needle insertion. Similar to our study, Apgar scores showed no difference between the treated group and the control group. They evaluated maternal anxiety with two methods; maternal plasma catecholamine levels at the time of delivery and three postoperative questions. They found a higher level of patient satisfaction in the treated group; otherwise, there were no between-group differences. In contrast, Senel and Mergan[18] gave a single iv dose of 0.025 mg/kg midazolam to parturients 30 min before cesarean delivery. They evaluated maternal anxiety using APAIS score and newborns with Apgar and NACS scores. They reported significantly lower anxiety scores in the treated groups, without any adverse neonatal outcome. However, they recommended further clinical trials with higher sample size to assess the larger doses of midazolam. Similar findings were found by Fung et al. 
in 1992, where 90% of mothers fell asleep with midazolam premedication before the start of surgery, with no adverse neonatal outcome; comparing the Apgar scores and umbilical venous pH values to the control group.[17] We chose the midazolam dose (0.035 mg/kg) based on these previous studies,[1819] and according to our own experience with Egyptian parturients, aiming for the best degree of maternal satisfaction without causing maternal hypoventilation or neonatal depression. We chose a single bolus based on body weight; however, in clinical practice, some may prefer to titrate iv drugs to effect. Thus, our results cannot be compared to other studies with repeated doses or iv infusion. The timing of premedication is also crucial. We chose it 30 min before the induction of spinal anesthesia, to allow enough time for suppression of anxiety; an important consideration in preeclampsia, whose cardiovascular responses are exaggerated. This is in contrast to Frölich et al., who gave their premedication immediately before the spinal anesthesia procedure. We employed several parameters of neonatal outcome. The Apgar scoring system is routinely used, due to its clear physiological correlation with neonatal well-being.[33] It is easily applied and gives a quick idea about newborn's clinical condition and efficiency of resuscitation efforts. We also used the NACS scoring system, a neurobehavioral scale that allows for the evaluation of potential effects of neonatal drug exposure. It is designed to distinguish drug-induced neonatal depression from that secondary to asphyxia. It stresses on motor tone as a main indicator of drug-induced depression.[21] However, Brockhurst et al. indicate its poor reliability regarding obstetric anesthesia research.[21] We also recorded UA blood gas values and noted no significant difference between groups. Again, the reliability of this test to detect neonatal depression has been questioned.[3435] That is why we employed a combination of several parameters to assess neonatal outcome. As part of this study, we evaluated the postoperative maternal satisfaction of anesthesia and birth. It is an important outcome that is commonly reported;[253637] since in healthy parturients, a pleasant experience is expected. Furthermore, anxiety may decrease the level of overall satisfaction with perioperative care.[38] In 1998, Robinson et al. highlighted some important factors that may affect satisfaction in the obstetric patient.[36] The development of a reliable and valid satisfaction scale for cesarean delivery has allowed to assess the effectiveness of sedatives on maternal satisfaction.[22] MSCCS has proven more sensitive than the standard VAS as it samples from the various physiological and psychological factors that make up satisfaction.[39] The reliability of a scale is directly related to the number of its items so that VAS would be expected to be of low reliability.[22] Our data revealed a remarkable level of maternal satisfaction in the group premedicated with midazolam compared to the control group. To the best of our knowledge, none of the studies concerned with sedatives during cesarean delivery under spinal anesthesia focused on this scale. Instead, Frölich et al. applied a simple questionnaire of three questions while Senel and Mergan and Fung et al. did not focus on this important outcome. The main limitation of this study was the inability to measure the placental transfer of midazolam and its plasma level in the fetal circulation, due to financial difficulties. 
However, this had been widely investigated by several authors.[143140] Another limitation is that this study only enrolled mild preeclamptic patients. Thus, the extension of these findings to those with more severe features of preeclampsia (and potentially more adverse uteroplacental blood flow) remains to be investigated.\n\nBODY.CONCLUSION:\nA dose of 0.035 mg/kg of iv midazolam, administered 30 min before spinal anesthesia, is associated with reduced preoperative anxiety and improved postoperative satisfaction and maternal hemodynamics in preeclamptic parturients scheduled for elective cesarean delivery, with no adverse effects on the newborn.\n\nBODY.CONCLUSION.FINANCIAL SUPPORT AND SPONSORSHIP:\nNil.\n\nBODY.CONCLUSION.CONFLICTS OF INTEREST:\nThere are no conflicts of interest.\n\n**Question:** Compared to an equal volume of normal saline, 30 min before spinal anesthesia, what was the result of an intravenous dose of 0.035 mg/kg of midazolam on postoperative satisfaction?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
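Between rows, a brief practical note: each record in this dump pairs a long prompt (article plus templated question), a free-text gold answer, a short categorical label, and an id, and the stored answers use past-tense forms ("significantly increased" / "significantly decreased") while the prompt's instruction lists the options as "significantly increase", "no significant difference" and "significantly decrease". The following is a minimal, hypothetical sketch of how one might normalize and score model outputs against such a label column; the field layout and the idea of reading this dump as structured records are assumptions about the underlying export, not a documented schema.

```python
import re

# The three options requested in each prompt's instruction.
ALLOWED = {"significantly increase", "no significant difference", "significantly decrease"}


def normalize(answer: str) -> str:
    """Map free-text answers such as '**Answer:** significantly increased' onto the
    three requested options (handles the tense mismatch noted above)."""
    text = answer.lower().replace("**answer:**", "").strip()
    text = re.sub(r"\bincreased\b", "increase", text)
    text = re.sub(r"\bdecreased\b", "decrease", text)
    for option in ALLOWED:
        if option in text:
            return option
    return "unparsed"


def is_correct(gold_label: str, model_output: str) -> bool:
    """Compare a model's raw output against a row's gold label after normalization."""
    return normalize(model_output) == normalize(gold_label)


if __name__ == "__main__":
    # Hypothetical usage; the values mirror the format of the rows in this dump.
    print(is_correct("significantly increased", "**Answer:** significantly increase"))  # True
```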
614
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Biochemical markers of bone turnover in patients with spinal metastases after resistance training under radiotherapy – a randomized trial\n\n ABSTRACT.BACKGROUND:\nTo compare the effects of resistance training versus passive physical therapy on bone turnover markers (BTM) in the metastatic bone during radiation therapy (RT) in patients with spinal bone metastases. Secondly, to evaluate an association of BTM to local response, skeletal-related events (SRE), and number of metastases.\n\nABSTRACT.METHODS:\nIn this randomized trial, 60 patients were allocated from September 2011 to March 2013 into one of the two arms: resistance training (Arm A) or passive physical therapy (Arm B) with thirty patients in each arm during RT. Biochemical markers such as pyridinoline (PYD), desoxy-pyridinoline (DPD), bone alkaline phosphatase (BAP), total amino-terminal propeptide of type I collagen (PINP), beta-isomer of carboxy-terminal telopeptide of type I collagen (CTX-I), and cross-linked N-telopeptide of type I collagen (NTX) were analyzed at baseline, and three months after RT.\n\nABSTRACT.RESULTS:\nMean change values of PYD and CTX-I were significantly lower at 3 months after RT (p = 0.035 and p = 0.043) in Arm A. Importantly, all markers decreased in both arms, except of PYD and CTX-I in arm B, although significance was not reached for some biomarkers. In arm A, the local response was significantly higher (p = 0.003) and PINP could be identified as a predictor for survivors (OR 0.968, 95%CI 0.938–0.999, p = 0.043). BAP (OR 0.974, 95%CI 0.950–0.998, p = 0.034) and PINP (OR 1.025, 95%CI 1.001–1.049, p = 0.044) were related with an avoidance of SRE.\n\nABSTRACT.CONCLUSIONS:\nIn this group of patients with spinal bone metastases, we were able to show that patients with guided resistance training of the paravertebral muscles can influence BTM. PYD and CTX-I decreased significantly in arm A. PINP can be considered as a complementary tool for prediction of local response, and PINP as well as BAP for avoidance of SRE.\n\nABSTRACT.TRIAL REGISTRATION:\nClinical trial identifier NCT 01409720. August 2, 2011.\n\nBODY.BACKGROUND:\nSpinal bone metastases represent the most frequent site of skeletal metastasis [1], and radiotherapy (RT) is the most common treatment option of bone metastases in an advanced tumour disease [2]. As a result of the alteration in bone remodeling activity in patients with bone metastases, various markers of bone formation or resorption have been investigated as a measurement of activity of skeletal metastases. Balance between bone resorption and bone formation is required for maintenance of bone metabolism, and can be reflected by bone turnover markers (BTM). The usefulness of BTM as a tool for the diagnosis of bone metastases in many types of cancers has been investigated previously [3]. 
Previous studies have demonstrated that some BTM can indicate the existence of bone metastases [4], and have analyzed their relationship with clinical outcomes [5–9]. The two most clinically relevant markers of bone turnover in patients with skeletal metastases are bone-specific alkaline phosphatase (BAP) and N-terminal telopeptide of collagen type I (NTX). Total amino-terminal propeptide of type I collagen (PINP) and BAP are markers of bone formation; BAP has a linear relationship with osteoblast and osteoblastic precursor activity. NTX is a breakdown product of type I collagen produced by various proteolytic enzymes during bone resorption and dissolution of the organic bone matrix. Additionally, pyridinium cross-links pyridinoline (PYD) and desoxy-pyridinoline (DPD) are well-characterized markers for bone resorption. C-terminal cross-linking telopeptide of type I collagen (CTX-I) can be considered a complementary tool for prediction of clinical outcome as a marker of resorption. At the present time, although biochemical markers of bone turnover have shown some utility in clinical trials by predicting clinical outcomes such as death or development of SREs, they are not routinely used in clinical practice. Questions remain as to how clinicians should best use these markers to select and time appropriate treatments among patients with bone metastases. The use of these markers may, in the future, better allow physicians to selectively treat patients with bone metastases. In our recent work, we could show the feasibility of resistance training in patients with spinal bone metastases under RT [10]. Importantly, we demonstrated that the bone density in metastases of the resistance training arm was significantly higher than in the control arm after 3 months. Therefore, we hypothesized that resistance training concomitant to RT could influence bone metabolism, and would be reflected by BTM. The aim of this randomized trial was to compare the effect of resistance training concomitant to RT versus RT only on BTM, such as BAP, PINP, CTX-I, PYD, DPD, and NTX in patients with spinal bone metastases. Secondly, we evaluated an association of these markers as predicted factors for local response, prevention of SRE, and number of metastases.\n\nBODY.METHODS.SETTINGS AND PATIENTS:\nThis is a randomized, controlled, two-armed intervention trial. A block randomization approach with block size 6 was used to ensure that the two intervention groups were balanced. After the baseline measurements, the patients with stable bone metastases were assigned to the respective treatment groups on a 1:1 basis according to the randomization list. Arm A (intervention group, resistance training) and arm B (control group, passive physical therapy) each consisted of 30 patients. The blood and urine parameters were measured before start of RT (t0) (day of the first fraction), and after three months (t2) on an empty stomach. The primary endpoint was to compare BTM after 3 months in patients treated with resistance training concomitant to RT versus RT only in spinal bone metastases. Secondary endpoint was to evaluate predictive bone markers for local response, prevention of SRE, and number of metastases. Local response of metastasis was assessed on the basis of computed tomography. Positive local response was defined as complete or partial re-calcification on the basis of computed tomography observation at 3 months after RT. 
The definition of SRE was the first of any of the following events: pathological fractures, severe pain (increase of more than 2 points according to numeric rating scale), hypercalcaemia, and spinal cord compression. Number of metastases was classified in 1 or >1 (solitary vs. multiple). The data of the patient records were collected by the authors. Patient characteristics are shown in Table 1.Table 1Patient characteristics at baselineIntervention group (n = 30)Control group (n = 30)\nn\n%\nn\n%Age (years)Mean (SD)61.3 +/− 10.164.1 +/− 10.9GenderMale1446.71963.3Female1653.31136.7Karnofsky-index (median, range)80 (70–100)80 (70–100)Primary siteLung cancer129.2826.6Breast cancer516.7620.1Prostate cancer516.7930.1Melanoma13.313.3Renal cancer13.326.7Other620.1413.4Localization metastasesThoracic1756.71446.7Lumbar930.01343.3Thoracic and lumbar26.726.7Sacrum26.713.3Number metastasesMean (range)1.4 (2–4)1.7 (1–5)Solitary2273.31860.0Multiple826.71240.0Type of metastasesosteoblast930.01033.3osteolytic2170.02066.7Size of metastasisMean (SD)318.6 +/− 230.0380.7 +/− 193.6Distant metastases at baselineVisceral1240.0516.7brain310.0310.0lung723.3413.3tissue826.7620.0Hormonotherapy1033.31653.3Immunotherapy723.3516.7Chemotherapy2583.32066.7Previous SRE930.01343.3Abbreviation: SD standard deviation Inclusion criteria were an age of 18 to 80 years, a Karnofsky performance score [11] ≥ 70, written consent to participate, and already initiated bisphosphonate therapy. Patients were excluded if they presented with concomitant pathologies that could interfere in the evaluation of bone turnover markers, such as bone metabolic disorders, Paget`s disease of bone, hyperparathyroidism, thyroid abnormalities, abnormal intestinal absorption, or hepatic or renal dysfunction. In all cases, anticancer treatment could be changed as clinically indicated throughout the course of the study. The patients were subjected to a staging of their vertebral column within the context of the computed tomography (CT) designed to plan the radiation schedule prior to enrolment into the trial. In this examination metastases were classified as \"stable\" or \"unstable\". This was diagnosed independently by a specialist for radiology as well as by a specialist for orthopedic surgery. The specifications for an unstable vertebral body were tumor occupancy more than 60 % of the vertebral body, and pedicle destruction [12]. Only a metastasis classified by both specialists as \"stable\" was suggested eligible for inclusion. Patients with significant neurological or psychiatric disorders were excluded. The study was approved by the Heidelberg Ethics Committee (Nr. S-316/2011).\n\nBODY.METHODS.RADIOTHERAPY:\nRT was performed in the Radiooncology Department of the Heidelberg University Clinic. After virtual simulation was performed to plan the radiation schedule, RT was carried out over a dorsal photon field of the 6MV energy range. Primary target volume (PTV) covered the specific vertebral body affected as well as the ones immediately above and below. In Arm A, 24 patients (80 %) were treated with 10 × 3 Gy, three patients (10 %) with 14 × 2.5 Gy, and three patients (10 %) with 20 × 2 Gy. In Arm B the dose fractions for 28 patients (93.4 %) were 10 × 3 Gy, for one patient (3.3 %) 14 × 2.5 Gy, and for one patient (3.3 %) 20 × 2 Gy. The median individual dose in all patients was 3 Gy (range 2–3 Gy), the median total dose 30 Gy (range 20–35 Gy). 
The individual and total doses were decided separately for each individual patient, depending on the histology, the patient's general state of health, and on the current staging and the corresponding prognosis.\n\nBODY.METHODS.STUDY INTERVENTIONS:\nThe interventions commenced on the same day as RT and were performed on each day of RT treatment (Monday through Friday) over a 2-week period, independent of the number of fractions. During the 2-week RT period, the patients in the resistance training arm (Arm A) performed the exercises under the guidance of a physiotherapist. The patients were then instructed to practice the training in their homes three times a week and continued the resistance training themselves until the last investigation after three months, and conducted the documentation in form of a training diary. The resistance training lasted approx. 30 min, the passive physical therapy (Arm B) approx. 15 min. Since the site of the bone metastases differed from patient to patient, three different exercises were enacted to ensure an even isometric resistance training of the paravertebral muscles. Patients in the control arm (Arm B) received passive physical therapy in form of breathing exercises also for a period of 2 weeks. This was conducted so that these patients were not discouraged in comparison to the intervention arm, and for avoidance of a high drop-out rate. A detailed report of the intervention and its application has already been published [13].\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nBone marker values were presented in terms of mean and standard deviation (SD). The possible difference between arms at 3 months was calculated using ANCOVA with arm as factor and baseline value as covariate. Therefore as all surviving patients completed all surveys we assumed that 23 patients in Arm A and 24 patients in Arm B were eligible for the analysis. Multivariate binary logistic regression models were fitted to quantify the degree of association between potential predictors of markers at baseline according to local response (yes/no), prevention of SRE (yes/no), and number of metastases (1/more than one). Group had been included as a covariate in the analysis. Arm A with 23 survived patients was the reference. Level of significance was set to 0.05 for all tests. All statistical analyses were done using SAS software Version 9.4 (SAS Institute, Cary, NC, USA).\n\nBODY.METHODS.ANALYTIC METHODS:\nSamples were centrifuged at 3.500 rpm for 10 min after drawing, serum obtained and stored at −80°. First morning void of urine was collected, mixed and an aliquot of 10 ml stored at – 80 °C. Samples shipped on dry ice to the laboratory (Labor Limbach, Heidelberg, Germany) and again stored at −80 °C until analysis. Total aminoterminal propeptide of type I collagen (PINP; bone formation; interassay coefficient of variation [CV]: 2.7 %), C-terminal cross-linking telopeptide of type I collagen (CTX-I; bone resorption; CV: 3.1 %) and intact parathyroid hormone (iPTH; CV: 1.4 %) were measured by means of an automated electrochemiluminescence immunoassay (ECLIA; Modular Analytics E170, Roche Diagnostics, Penzberg, Germany). Bone-specific alkaline phosphatase (BAP; bone formation; CV: 5,2 %) was measured by means of a spectrophotometric immunoassay (IDS-ISYS Ostase BAP; Immunodiagnostic Systems Ltd [IDS Ltd], Boldon, Tyne & Wear, UK) on the fully automated immunoassay system (IDS-ISYS (Immunodiagnostic Systems Ltd [IDS Ltd], Boldon, Tyne & Wear, UK). 
Pyridinoline (PYD) and deoxypyridinoline (DPD; bone resorption; CV: 12.5 %) were assayed by high performance liquid chromatography (HPLC) [14, 15]. Analysis required first a sample extraction, hydrolysis and automated column-chromatographic pre-purification. To avoid any effect of different dilutions of urine, Pyr and Dpyr data are expressed against gram of creatinine. Cross-linked N-telopeptides of type I collagen (NTx; bone resorption; CV: 12.8 %) was determined with an commercially available ELISA (Osteomark Ntx; Wampole Laboratories, Princeton NJ, USA). All laboratory analyses were done in two batches after completion of the sample collection. To further reduce imprecision of measurement, all samples were analyzed utilizing one reagent lot. Tests and instruments were run strictly in accordance with the guidelines given by the manufacturer and were subject to continuous maintenance and service according to the laboratories standard operating procedures for good laboratory practice. Samples were thawed at the day of analysis at ambient temperature, mixed on a head-over-head mixer and centrifuged before measurement.\n\nBODY.RESULTS:\nFrom September 2011 to March 2013, consecutive 80 patients with a histologically confirmed cancer of any primary and spinal bone metastases of the thoracic or lumbar segments, or of the sacral region were considered in the Radiooncology Department of the Heidelberg University Clinic. Fifteen patients were excluded due to unstable metastases, and five patients declined to participate in the study. Sixty patients fulfilled the inclusion and were enrolled into the trial (Fig. 1). Arms were balanced at baseline without group differences, particularly number of metastases and tumor size (Table 1). Seven patients (23.3 %) died in arm A within the first 12 weeks following RT, six patients in arm B (20.0 %) died within 3 months. The mean follow-up was 3.3 months for both groups. All surviving patients completed all surveys. Mortality did not differ between groups.Fig. 1Flow of participants through the trial Mean change values of PYD and CTX-I were significantly lower at 3 months after RT (p = 0.035 and p = 0.043) in arm A. DPD showed a tendency with a lower mean change value, but not significant. No changes were seen between the arms in BAP, NTX, and PINP. Importantly, all markers decreased in both arms, except of PYD and CTX-I in arm B, although significance was not reached for some biomarkers (Table 2).Table 2Results of bone turnover markersArm AArm BDifference between armsbaseline, n=233 months, n=23baseline, n=243 months, n=24LS meansParameterReference valuemeanSDmeanSDmeanSDmeanSDDifference (95 % CI)\np-value*PYD160–280 μg/g Cr375.63206.20299.89173.88402.00199.01405.50187.32−90.92 (−175.19; −6.65)0.035DPD26–65 μg/g Cr67.3039.6451.2035.8186.1078.8572.8837.11−18.20 (−38.84; 2.45)0.082CTX-I<0.5484 ng/ml0.290.270.200.230.390.370.420.45−0.10 (−0.20; −0.003))0.043BAP6–15 μg/l56.8992.3131.4663.3144.1747.3236.5237.58−12.10 (−31.51; 7.31)0.215NTX5.4–24.2 nmol/l19.097.2117.427.9821.429.3620.389.81−0.87 (−3.84; 2.10)0.556PINP15–59 μg/l67.5440.6949.5041.4387.9651.9866.7739.18−6.94 (−26.81; 12.93)0.485*Results of ANCOVAThis table shows the results of bone turnover markers of both groups at baseline and 3 months after RT. Differences are presented within and between groupsAbbreviation: LS means least square means In arm A, the local response was significantly higher [73.9 % (n = 17) vs. 25 % (n = 6)] (p = 0.003). 
First, PINP could be identified as a predictor of local response (p = 0.043). BAP (p = 0.034) and PINP (p = 0.044) could be identified as predictors for avoidance of SRE. No marker could be detected for solitary metastasis (Table 3). Table 3 Results of multivariate binary logistic regression analysis of bone markers. Local response (vs. no local response): Arm (A – reference) OR 0.033, 95% CI 0.004–0.311, p = 0.003; PYD OR 1.000, 95% CI 0.992–1.008, p = 0.986; DPD OR 1.023, 95% CI 0.974–1.074, p = 0.371; CTX-I OR 10.550, 95% CI 0.003–>100, p = 0.569; BAP OR 1.011, 95% CI 0.996–1.025, p = 0.142; NTX OR 0.870, 95% CI 0.688–1.101, p = 0.245; PINP OR 0.968, 95% CI 0.938–0.999, p = 0.043. Prevention of SRE (SRE = 0): Arm (A – reference) OR 3.302, 95% CI 0.877–12.431, p = 0.077; PYD OR 0.998, 95% CI 0.994–1.002, p = 0.373; DPD OR 1.006, 95% CI 0.982–1.031, p = 0.633; CTX-I OR 0.323, 95% CI 0.001–81.336, p = 0.689; BAP OR 0.974, 95% CI 0.950–0.998, p = 0.034; NTX OR 0.988, 95% CI 0.861–1.134, p = 0.865; PINP OR 1.025, 95% CI 1.001–1.049, p = 0.044. Solitary metastasis (n = 1): Arm (A – reference) OR 1.692, 95% CI 0.477–6.007, p = 0.416; PYD OR 1.001, 95% CI 0.997–1.005, p = 0.512; DPD OR 1.004, 95% CI 0.980–1.029, p = 0.761; CTX-I OR 0.359, 95% CI 0.001–>100, p = 0.723; BAP OR 0.997, 95% CI 0.985–1.009, p = 0.592; NTX OR 1.083, 95% CI 0.939–1.249, p = 0.275; PINP OR 0.998, 95% CI 0.981–1.016, p = 0.829. Abbreviation: SRE skeletal-related events\n\nBODY.DISCUSSION:\nBone metastases are a very frequent secondary diagnosis associated with an advanced tumour disease, with the vertebral column being the most frequent localization [2, 10]. The usefulness of BTM as a tool for the diagnosis of bone metastases in many types of cancers has been investigated previously [3]. Many studies have analyzed their relationship with clinical outcomes [7, 16] as well as their clinical importance [17, 18]. However, there is still no consensus among these studies as to which BTM is the ideal marker. In the current work, all BTM in arm A decreased after 3 months; in particular, the mean change values of PYD and CTX-I were significantly lower at 3 months after RT due to resistance training. BTM in arm B also showed decreased values after 3 months, except for PYD and CTX-I, but not as distinctly as in arm A. Importantly, local response, defined as re-calcification of the metastasis, was improved in arm A due to resistance training. In our recent work, we could show that bone density in the metastasis was significantly increased 3 months after the start of resistance training concomitant to RT [19]. This factor may influence the BTM and was demonstrated in our results. Therefore, PINP could be detected as a strong marker for bone formation after combined treatment with resistance training concomitant to RT. Determination of CTX-I is recommended for monitoring the efficacy of antiresorptive therapy. Patients with renal cell carcinoma who died or progressed had higher baseline B-CTX levels, and those who experienced SRE during follow-up showed high BAP levels [20]. However, in our results we could not find any useful information in normal/abnormal values of BTM. A multicenter study found that increased values of BAP and NTX were associated with increased risk of SRE and disease progression in patients with breast cancer, prostate cancer, and other solid tumors treated with bisphosphonates [21]. These markers of bone turnover have been used clinically to predict the risk of SREs and disease outcomes. According to our results, we could identify BAP and PINP for avoidance of SRE. In addition, biochemical markers of bone turnover may prove useful in the monitoring of patients on bisphosphonate treatment for bone metastases. 
One review [22] of 121 patients with skeletal metastases from a variety of solid tumors revealed that mean values of NTX were consistently elevated in patients experiencing SREs compared with those who did not. An additional study [23] of 441 patients showed that patients with high NTX or BAP levels had a greater incidence of SREs compared with patients with low levels of NTX or BAP. Patients with increased NTX or BAP at baseline had an increased risk for SRE, shorter time to first event, disease progression and death. NTX levels have been shown to correlate with response to bisphosphonate treatment. In our data, we could not show any correlation of NTX to local response, SRE, and number of metastases, but we were able to confirm the relation of BAP with prevention of SRE. Zoledronic acid normalizes or maintains normal NTX levels in most patients with bone metastases. All of our participants were treated with bisphosphonates. Our data presented no differences between arms for NTX. However, PINP could be identified as a predictor for local response and avoidance of SRE. In this novel study, our results showed that resistance training concomitant to RT can enhance the effect on BTM. Since all patients were in advanced stages of their cancer, 40 % of the patients in either group were lost to follow-up due to progressive disease and subsequent death. Further limitations of the study were the relatively small sample size, the variety of primary tumors and patient conditions, and the exclusion of patients presenting with cervical spine metastases. Among the strengths of our study were the randomized design and the first presentation of bone turnover markers in the combined treatment of resistance training concomitant to RT in patients with spinal bone metastases.\n\nBODY.CONCLUSION:\nIn this group of patients with spinal bone metastases, we were able to show that patients with guided resistance training of the paravertebral muscles can influence BTM. PYD and CTX-I decreased significantly in arm A. PINP can be considered a complementary tool for prediction of local response, and PINP as well as BAP for avoidance of SRE. More studies in larger groups of patients are necessary for further confirmatory research.\n\nBODY.ETHICS APPROVAL:\nThe study was approved by the Heidelberg Ethics Committee (Nr. S-316/2011).\n\nBODY.CONSENT FOR PUBLICATION:\nNot applicable.\n\nBODY.AVAILABILITY OF DATA AND MATERIALS:\nThe dataset supporting the conclusions of this article is included within the article.\n\n**Question:** Compared to passive physical therapy, what was the result of resistance training during radiation therapy on the beta-isomer of carboxy-terminal telopeptide of type I collagen?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decreased
|
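The trial in the row above compares arms at 3 months using ANCOVA with the baseline marker value as a covariate (see its Methods section). As a purely illustrative aid, the sketch below runs that kind of model with pandas/statsmodels on synthetic data; the variable names, the simulated numbers, and the library choice are assumptions for illustration and are not the authors' code or data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative ANCOVA: 3-month bone-marker value modelled on treatment arm with the
# baseline value as covariate, mirroring the analysis described in the article above.
# All numbers below are synthetic and chosen only to make the example run.
rng = np.random.default_rng(0)
n_per_arm = 24
df = pd.DataFrame({
    "arm": ["A"] * n_per_arm + ["B"] * n_per_arm,
    "baseline": rng.normal(380.0, 100.0, 2 * n_per_arm),
})
# Simulate a modest decrease in arm A and no change in arm B, plus noise.
arm_effect = np.where(df["arm"] == "A", -80.0, 0.0)
df["month3"] = df["baseline"] + arm_effect + rng.normal(0.0, 60.0, 2 * n_per_arm)

# The coefficient labelled C(arm)[T.B] is the baseline-adjusted between-arm difference.
model = smf.ols("month3 ~ baseline + C(arm)", data=df).fit()
print(model.params)
print(model.pvalues)
```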
693
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Routine resite of peripheral intravenous devices every 3 days did not reduce complications compared with clinically indicated resite: a randomised controlled trial\n\n ABSTRACT.BACKGROUND:\nPeripheral intravenous device (IVD) complications were traditionally thought to be reduced by limiting dwell time. Current recommendations are to resite IVDs by 96 hours with the exception of children and patients with poor veins. Recent evidence suggests routine resite is unnecessary, at least if devices are inserted by a specialised IV team. The aim of this study was to compare the impact of peripheral IVD 'routine resite' with 'removal on clinical indication' on IVD complications in a general hospital without an IV team.\n\nABSTRACT.METHODS:\nA randomised, controlled trial was conducted in a regional teaching hospital. After ethics approval, 362 patients (603 IVDs) were randomised to have IVDs replaced on clinical indication (185 patients) or routine change every 3 days (177 patients). IVDs were inserted and managed by the general hospital medical and nursing staff; there was no IV team. The primary endpoint was a composite of IVD complications: phlebitis, infiltration, occlusion, accidental removal, local infection, and device-related bloodstream infection.\n\nABSTRACT.RESULTS:\nIVD complication rates were 68 per 1,000 IVD days (clinically indicated) and 66 per 1,000 IVD days (routine replacement) (P = 0.86; HR 1.03; 95% CI, 0.74-1.43). Time to first complication per patient did not differ between groups (KM with log rank, P = 0.53). There were no local infections or IVD-related bloodstream infections in either group. IV therapy duration did not differ between groups (P = 0.22), but more (P = 0.004) IVDs were placed per patient in the routine replacement (mean, 1.8) than the clinical indication group (mean, 1.5), with significantly higher hospital costs per patient (P < 0.001).\n\nABSTRACT.CONCLUSIONS:\nResite on clinical indication would allow one in two patients to have a single cannula per course of IV treatment, as opposed to one in five patients managed with routine resite; overall complication rates appear similar. Clinically indicated resite would achieve savings in equipment, staff time and patient discomfort. There is growing evidence to support the extended use of peripheral IVDs with removal only on clinical indication.\n\nABSTRACT.REGISTRATION NUMBER:\nAustralian New Zealand Clinical Trials Registry (ANZCTR) Number ACTRN12608000421336.\n\nBODY.BACKGROUND:\nPeripheral intravenous device (IVD) insertion is the most commonly performed invasive procedure in hospitalised patients, with an estimated 150 million peripheral intravenous devices placed each year in North America alone [1]. IVDs are vital for delivery of hydration, medicines and nutrition but are not without complications. 
Serious adverse outcomes are fortunately rare, with IVD-related bloodstream infection reported in a recent meta-analysis of 110 studies to occur in 0.1% of devices and 0.5 per 1,000 device days [2]. IVD treatment is more frequently interrupted by phlebitis, an irritation of the vein characterised by pain, tenderness on palpation, erythema, warmth, swelling, induration or palpable cord (thrombosis) of the cannulated vein; diagnostic algorithms usually require two or more of these conditions [3-5]. Phlebitis is in almost all cases a biochemical reaction to the mechanical irritation by the presence of the IVD and associated infusate [3], although phlebitis symptoms such as erythema may be misperceived as indicative of an infection. In fact, there is not a high correlation between phlebitis and device infection, and the Centers for Disease Control (CDC) states that infection is rarely associated with peripheral, as opposed to central, venous devices [3,6,7]. Fluid infiltration or 'tissuing' of devices is another common IVD complication which may also reflect the inflammatory (phlebitic) response of the vein, rather than simple misplacement of the device tip [8]. Early cohort studies noted an association between increased device time in situ and phlebitis [9,10]. This association was responded to with policies for routine device removal. Recommended timelines for routine resite have been extended over the past three decades from 24, to 48, then to 72 hours. Currently, 72- to 96-hour resite is recommended to reduce phlebitis by the CDC's 2002 Guidelines for the Prevention of Intravascular Device Infection, with the exemption that this is not required in children or those with poor veins [7]. Such policies cause increased workload in hospitals, where the task of removing and replacing well-functioning IVDs generally falls to busy nursing and junior medical staff. In addition, few patients welcome the prospect of additional venipuncture. Despite the general clinical acceptance of routine IVD replacement as a phlebitis and infection prevention measure, it has not been supported by recent data. It may be that the risk of complications during the entire IVD treatment episode is similar, regardless of whether multiple short-dwell or fewer longer-dwell IVDs are used over this time. Three small (n = 47-60) randomised, controlled trials (RCTs) suggested routine replacement at 12, 24 or 48 hours may reduce phlebitis compared to resite on clinical indication, although a systematic review for the Swedish Council on Technology Assessment in Healthcare assessed these as low- to medium-quality studies providing 'limited scientific evidence' [11-14]. More recently, two well-conducted RCTs found no evidence of effect when comparing IVD replacement every 3 days with replacement only on clinical indication for medical and surgical inpatients [15,16]. The largest of these studies reported findings from 755 general medical and surgical patients with 1,428 IVDs and found a 5% difference in combined phlebitis and infiltration rates per patient (38% clinically indicated resite, 33% routine resite), suggesting a potential small clinical benefit of 3-day resite [15]. However, this difference was not statistically significant (RR 1.15; 95% CI, 0.95-1.40) and disappeared when overall cannulation time was considered (59.8/1,000 IVD days clinically indicated resite, 60.9/1,000 IVD days routine resite; RR 0.98; 95% CI 0.78-1.24) [15]. 
In addition, no clinically important or statistically significant differences were observed in the secondary endpoints of phlebitis, infiltration, occlusion, local infection or suspected bloodstream infection rates between study groups [15]. Another recent RCT in the 'hospital in the home' community setting also found no important clinical or statistically significant difference in phlebitis, occlusion or bloodstream infection rates in 316 patients when resite every 3 days was compared with clinically indicated resite [17]. A 2010 Cochrane Collaboration review concluded there was 'no conclusive evidence of benefit' of routine IVD resite and suggested organisations could consider adopting a resite on clinical indication policy [18]. There is growing evidence that routine IVD replacement may be ineffective, although caution has been urged in light of the large number (74% in both groups in the largest study to date) of reported devices inserted by a specialised IV team, a factor known to reduce complications [19]. Device insertion (and reinsertion) is unpleasant for patients, requires skilled and available clinical staff, and has associated costs for the health sector. If replacement only on clinical indication is safe and effective, this would have important benefits for patients and the health system. We report a RCT of 3-day routine IVD resite versus clinically indicated replacement in a medical-surgical hospital where IVDs were inserted by the general medical and nursing staff; the insitution did not have a specialised IV service.\n\nBODY.METHODS.AIM:\nThe aim of the study was to compare the impact of 3-day routine resite, with clinically indicated resite, on peripheral IVD complications.\n\nBODY.METHODS.DESIGN:\nOpen (nonblinded), parallel group RCT.\n\nBODY.METHODS.ETHICS:\nThe study was approved by the Tasmanian State Human Research Ethics Committee. Written informed consent was obtained prospectively from all participants.\n\nBODY.METHODS.SETTING AND SAMPLE:\nThe study was undertaken at a large regional teaching hospital in Australia which functions as the tertiary referral centre for the northern half of the State of Tasmania. The hospital has more than 32,000 separations per annum, with a spectrum of medical and surgical specialties. Eligible patients were at least 18 years of age and scheduled or expected to have a peripheral IVD indwelling for at least 4 days, and they gave written informed consent. Exclusion criteria were immunosuppression, current bloodstream infection or an IVD already in situ for >48 hours. IVDs were inserted and cared for by the general nursing and medical staff; there was no special IV team or service.\n\nBODY.METHODS.SAMPLE SIZE:\nSample size calculations were performed using PASS 2008 (Version 8.0.8; Kaysville, UT) to detect a change in rates by 30% (from 36% to 25%, two-tailed α = 0.05, 90% power) on the basis of the complication rates of routinely resited IVs in a previous study [16]. Although this indicated that n = 378 per group (total 756) were required, the study was ceased early (total n = 606 IVs) because all investigators left the employment of the institution. Consequently, study power was reduced, but remained over 80% (required minimum n = 282 per group).\n\nBODY.METHODS.RECRUITMENT:\nAll adult patients admitted to the inpatient acute medical and surgical wards of the study hospital were screened by a full-time research nurse. 
This excluded paediatric, day-surgery, mental health, obstetric, critical care and dialysis units.\n\nBODY.METHODS.STUDY PROCEDURES:\nPatients were randomly assigned (computer generated) in a 1:1 allocation ratio to either the 'routine replacement' (control) or 'clinically indicated replacement' (intervention) group. Assignment was concealed until randomisation by use of a telephone service. A tag was placed on the insertion site indicating the study group. All devices for the patient were managed as per randomised group. The intervention group did not have their IVD resited unless clinically indicated. This decision was made by the treating clinician (not the investigators), who ordered IVD resite if the device failed or phlebitis occurred and ongoing IV treatment was required. The control group had a new device relocated to a different site by the general medical or nursing staff every 3 days. Control devices could also be removed at any stage by the clinical staff if they were not required or if complications occurred. Clinical nursing and medical staff undertook insertion and follow-up care of all IVDs as per the CDC Guidelines [7]. Laboratory staff undertaking microbiological culture assessments were blinded to the study group. Due to the nature of the intervention, patients, research, and clinical staff were unable to be blinded. However, the investigators had no involvement in assessing or documenting complications. IVDs were assessed by the clinical nursing staff on each nursing shift for complications as part of standard clinical practice in the hospital. Times and dates of device insertion and removal were recorded along with the reason for device removal and any protocol deviations. A full-time research nurse collected data from the hospital chart and sought clarification from patients and clinical staff if necessary. Microbiological investigations (device tip, blood cultures and site cultures) were performed by the clinical staff on clinical suspicion of infection by the treating clinician. Demographic and clinical data were collected on age, sex, diagnosis at hospital admission, phlebitis risk based on Tagar et al.'s classification (low/medium/high risk) [20], past history of phlebitis, any comorbidities requiring active medical treatment (e.g., type 2 diabetes or congestive heart failure), haemoglobin, concurrent infection at other sites, antibiotic therapy, type of surgery, type of infusate and any additives (and their level of irritability), vein and skin quality assessment, size of device, insertion site, health professional inserting the device, and setting for insertion, presence of other vascular devices, wound drains and urinary catheters. Vein quality was assessed as good (vein easy to visualise and easy to palpate with tourniquet on), fair (not easily visible but can palpate with tourniquet), or poor (veins small, scarred or difficult to palpate with tourniquet; may require heat pack to aid vasodilation). Skin quality was assessed as good (healthy, well hydrated, elastic), fair (mildly dehydrated, reduced elasticity), or poor (papery, dehydrated, or reduced elasticity).\n\nBODY.METHODS.ANALYTIC APPROACH:\nThe primary outcome was a composite measure of any complication causing unplanned cannula removal prior to completion of IV treatment. The composite included phlebitis, infiltration, occlusion, accidental removal, local infection, and IV device-related bloodstream infection (IVD-BSI). These were also analysed individually as secondary endpoints. 
A composite measure was chosen due to the low rates of these conditions individually and to the assumption that they are comparable measures of 'infusion failure'; that is, the device can no longer be used to deliver treatment. This approach has been used in previous studies on the topic [15-17]. Phlebitis was defined as two or more of pain, erythema, purulence, streak formation, or a palpable venous cord [3]. Local infection IVD-BSI (bacteremia/fungemia with at least one positive blood culture obtained from a peripheral vein, clinical manifestations of infection, and no apparent source for the bloodstream infection (BSI) except the device with or without positive tip or entry site swab culture) were defined using CDC criteria [7]. Other secondary outcomes were time in situ (hours of catheterisation from insertion to removal, both per patient and per device) [7]; IVDs per patient (number of peripheral devices inserted to complete the course of treatment) [7]; costs (calculations based on 20 minutes nursing or medical insertion time at relevant rates [15], plus the cost of the required equipment (cannula, insertion pack including dressing and solution, gloves, saline, syringe, extension tubing and starter pack for all plus fluid bag, tubing and secondary tubing for medication administration for those patients requiring this) for insertions, nursing time and equipment to routinely remove IVDs that were otherwise functional, and the costs of treating any complications that occurred (e.g., IVD-BSI). Cost calculations were undertaken from the viewpoint of the hospital using negotiated wage costs and purchasing agreements for government hospitals in the State of Tasmania. Costs would be similar for other Australian hospitals. All randomised patients were analysed by intention to treat. Each patient was randomised once and could have multiple IVDs, with subsequent IVD resites managed as per the randomised group. Relative incidence complication rates per 1,000 IVD days and 95% confidence intervals were calculated to summarise the impact of clinically indicated replacement relative to 3-day replacement. Kaplan-Meier survival curves were drawn to compare time to first IVD complication between patients in the two study groups. To assess for any potential impact of protocol violations, a per protocol analysis was also undertaken. All clinical and demographic variables were subjected to univariate testing against the primary endpoint to guide selection of possible covariates for the multivariable model. Cox proportional hazards regression modelling was used to examine the impact of age, gender, oncology status, number of comorbidities (nil, one, two, or more than two), IV gauge, site, vein quality, skin quality, oral antibiotics, IV antibiotics, wound drain, inserter occupation, initial versus subsequent IVDs, phlebitis in a preceeding IVD, haemoglobin level, parenteral nutrition, continuous versus intermittent infusion, patient risk category and study group on the outcome of time to complication events using an additive model [3,5,7,20-25]. In addition, to adjust for any inherent correlations or codependencies in the failure times of IVDs (i.e., same patient multiple failure-time data) within the Cox model, we also used the Prentice-Williams-Peterson conditional risk-set method [26]. The Mann-Whitney test was used to compare various secondary outcomes between study groups. Cost differences were calculated using arithmetic means and the t-test [27]. P values <0.05 were considered significant. 
All statistical data were entered and analysed using SPSS (Version 15.0; Chicago, IL) and Stata (Version 8.2; College Station, TX).\n\nBODY.RESULTS.SAMPLE:\nOver a 10-month period, 1,954 patients were screened for eligibility. Of these, 788 were eligible, with 362 (46%) recruited into the study. The most frequent exclusion criterion was altered mental state that precluded consideration of consent as assessed by the research nurse. Altered mental state was generally related to older medical patients and the immediate postoperative phase for surgical patients. Reasons for exclusion are shown in Figure 1. The 362 patients were randomised into either the routine change group (n = 177 participants, 323 devices) or the clinically indicated replacement group (n = 185 participants, 280 devices). In total 50,173 IVD hours were studied (routine change group 23,288 hours, clinically indicated group 26,885 hours). More patients in the routine change group had an active infection (53% vs. 44%) and were receiving IV antibiotics (73% vs. 64%). However, as shown in Tables 1 and 2, the two groups were generally comparable at baseline for patient- and cannula-related factors. Figure 1Participant flowchart. Table 1 Baseline characteristics of study participants Variable 3-Day Routine Change Group (n = 177) Clinically Indicated Change Group (n = 185) Sex - Male, n (%) 96 (54%) 103 (56%) Age, mean (SD) 65.1 (17.3) 62.7 (15.4) Reason for admission, n (%) • Gastrointestinal 46 (26%) 52 (28%) • Respiratory 37 (21%) 34 (18%) • Oncology 21 (12%) 21 (11%) • Orthopaedic 16 (9%) 22 (12%) • Cardiac 10 (6%) 10 (5%) • Neurological 9 (5%) 10 (5%) • Vascular 7 (4%) 7 (4%) • Renal 16 (9%) 9 (5%) • Other 15 (8%) 20 (11%) Number of comorbidities, n (%) • None 6 (3.4%) 10 (5.4%) • 1 22 (12.4%) 21 (11.4%) • 2 44 (24.9%) 54 (29.2%) • >2 105 (59.3%) 100 (54.1%) Type of surgery, n (%) • Nil 124 (70%) 132 (71%) • Gastrointestinal 22 (12%) 22 (12%) • Orthopaedic 8 (5%) 15 (8%) • Other 23 (13%) 16 (9%) Most recent Hb - Mean (SD) 124.8 (23.2) 126.2 (21.9) Table 2 Baseline infusion-related characteristics of study devices Variable 3-Day Routine Change Group (n = 323) , n (%) Clinically Indicated Change Group (n = 280) , n (%) IV cannula gauge • 22 43 (13%) 46 (16%) • 20 241 (75%) 201 (72%) • 18 27 (12%) 30 (11%) • Other 2 (1%) 3 (1%) Vein assessment • Good 143 (44.3%) 107 (38.2%) • Fair 160 (49.5%) 144 (51.4%) • Poor 20 (6.2%) 29 (10.4%) Skin integrity • Good 138 (43%) 104 (37%) • Fair 178 (55%) 164 (59%) • Poor 7 (2%) 12 (4%) Past history of phlebitis 2 (0.6%) 4 (1.4%) Insertion site • Hand 206 (64%) 187 (67%) • Forearm 59 (18%) 49 (18%) • Cubital fossa 51 (16%) 38 (14%) • Other 7 (2%) 6 (2%) Receiving infusate 241 (75%) 215 (77%) pH of infusate - Mean (SD) 6.0 (0.5) 6.0 (0.4) Receiving oral antibiotics 24 (7%) 24 (9%) Receiving IV antibiotics 236 (73%) 176 (63%) pH of IV antibiotics - Mean (SD) 6.9 (1.1) 6.9 (1.3) Receiving other IV meds 190 (59%) 179 (64%) pH of other IV meds - Mean (SD) 5.6 (2.6) 6.0 (2.8) Wound drain 50 (16%) 40 (14%) Urinary catheter 55 (17%) 41 (15%) Other vascular device 26 (8%) 24 (9%) Inserted by • Junior doctor 232 (72%) 207 (74%) • Registered Nurse 67 (21%) 57 (20%) • Senior doctor 24 (7%) 16 (6%) Where inserted • Ward 216 (67%) 188 (67%) • Emergency department 74 (23%) 70 (25%) • Other 33 (10%) 22 (8%) Current infection (site) • Respiratory 97 (30%) 49 (18%) • Urinary 24 (7%) 19 (7%) • Wound 16 (5%) 15 (5%) • Other 29 (9%) 37 (13%) • >1 site 5 (2%) 4 (1%) N/A (none) 152 (47%) 156 (56%) Risk of phlebitis 
(Tagar scale) • Low 174 (54%) 144 (51%) • Medium 149 (46%) 136 (49%) • High 0 0 \n\nBODY.RESULTS.EFFECT OF INTERVENTION ON PRIMARY OUTCOME:\nOutcome data were available for all patients. Table 3 shows the rates of primary and secondary outcomes. Differences in complication rates between groups were not significantly different (routine replacement 66.0 per 1,000 IVD days; clinical replacement 67.8 per 1,000 IVD days; HR 1.03; 95% CI, 0.74-1.43; P = 0.86). As shown in Figure 2, the time to first complication per patient was also not significantly different between groups (Kaplan Meier [KM] with log rank P = 0.53). On crude rate per IVD, the catheters replaced on clinical indication had higher complication rates (110/280 or 39% vs. 91/323 or 28%; P = 0.004). However, total complication rates per patient (to deliver the course of IV therapy) were not significantly different (P = 0.39) between clinically indicated (76/185, 41%) and routine resite patients (64/177, 36%). Table 3 Effect of intervention on primary and secondary endpoints Outcomes 3-Day Routine Change Group (n = 177) Clinically Indicated Change Group (n = 185) RR (95% CI) , P Value Primary: IVD complications per patient, n (%) 64 (36%) 76 (41%) RR 1.14 (0.88, 1.47), p = 0.39 IVD complications per 1000 IVD days 66.0 (95% CI 49.8, 82.1) 67.8 (95% CI 52.6, 83.1) HR 1.03 (0.74, 1.43), p = 0.86 Secondary: Phlebitis, n (%) 12 (7%) 18 (10%) RR 1.44 (0.71, 2.89), p = 0.34 Infiltration, n (%) 53 (30%) 61 (33%) RR 1.10 (0.81, 1.49), p = 0.57 Occlusion, n (%) 5 (3%) 4 (2%) RR 0.77 (0.21, 2.80), p = 0.75 Accidental removal, n (%) 11 (6%) 16 (9%) RR 1.39 (0.66, 2.92), p = 0.43 Local infection 0 0 - IVD-related BSI, n (%) 0 0 - IVD costs per patient, AUD$ mean (SD) $55.42 ($35.26) $43.35 ($26.78) Mean difference $12.07 (95%CI $5.57, $18.56), p < 0.001 Figure 2Kaplan-Meier survival curve of time to first intravenous device complication per patient (log rank, P = 0.53). Patient- and device-related variables considered in the multivariable model were older age, number of comorbidities (nil, one, two or more than two), smaller cannula size, poor skin or vein integrity, IV antibiotics, insertion by medical staff and study group. None of these were found to be statistically significant. The final Cox proportional hazards model after adjusting for time found study group was not a significant factor (HR 1.02; 95% CI, 0.77-1.36; P = 0.89). Variance-adjustment testing for potential multiple-failures per patient (cannula data) found no inconsistency in significant results compared to the main Cox model.\n\nBODY.RESULTS.PROTOCOL COMPLIANCE:\nCompliance with the study intervention was achieved in 78% (251 of 323) of routinely replaced IVDs removed at Day 3 and 100% (280 of 280) of IVDs resited on clinical indication. Noncompliance in the routine resite group was generally related to high staff workloads on that day or to staff's perception that the patient was soon to be discharged or have therapy completed, and so the IVD remained in situ beyond Day 3. A per protocol analysis was performed including only those patients in the routine replacement group whose IVDs were all actually removed by 96 hours (n = 251). This found no significant difference (KM with log rank P = 0.16) in the rate of complications between groups (routine replacement 92 per 1,000 IVD days vs. 
clinically indicated 68 per 1,000 IVD days).\n\nBODY.RESULTS.EFFECT OF INTERVENTION ON SECONDARY OUTCOMES:\nThere was no statistically significant difference in group outcomes for any phlebitis (P = 0.34), infiltration (P = 0.57), occlusion (P = 0.75), or accidental removal (P = 0.43). No cases of local infection or IVD-related bloodstream infection occurred in either group. For overall IVD therapy (total cannulation time per patient for all devices), clinically indicated devices had a median therapy of 120 hours (quartiles 86.5 and 172.5 hours), and those replaced routinely had median therapy of 113 hours (quartiles 72 and 172 hours) (P = 0.21). For individual IVD dwell times, the clinically indicated group devices had a median dwell time of 85 hours (quartiles 51 and 121 hours), and those replaced routinely had a median dwell time of 71 hours (quartiles 48 and 90 hours) (P < 0.001). The maximum IVD dwell time was 1,023 hours (43 days) in the clinical replacement group, and this cannula was still working well on removal for completion of therapy. The overall number of IVDs per patient was significantly less (P = 0.004) for those replaced on clinical indication (mean 1.5, SD 0.8, median 1, quartiles 1 and 2) than for those routinely replaced (mean 1.8, SD 1.1, median 1, quartiles 1 and 2). A total of 22% of patients in the routinely replaced group had three or more IVDs compared with 9% in the clinical indication group. A total of 82 (28%) IVDs in the routine replacement group were resited after 3 days despite functioning well and ongoing treatment being required. (The remainder removed at this time were no longer required and so not resited, or infusion failure had already occurred.) Mean hospital costs per patient for the course of IV therapy were significantly higher (P < 0.001) for those managed with routine resite (mean $55.42, SD $35.26) compared with resite on clinical indication (mean $43.35, SD $26.78).\n\nBODY.DISCUSSION:\nThe finding that 3-day routine resite was not an effective intervention was consistent across the intention-to-treat primary analysis and the per protocol analysis. There remained no effect when events were expressed per patient or per 1,000 IVD days. Neither composite nor individual complication rates differed between groups, and there were no cases of local or device-related bloodstream infection. It appears safe and practical to leave IVDs in situ as long as they are functioning well and are still needed for clinical treatments. All IVDs will fail eventually, but this study shows that artificially shortening the lifespan of individual catheters does not reduce the overall complication rates over the course of therapy. Our results indicate that the average duration of IV therapy is 5-6 days and that many catheters can remain complication-free for this period. If catheters are not routinely resited, the median dwell time would remain within the 72-96 hours recommended by the CDC, but about 10% would remain in situ for longer (in this study up to 43 days with no complications). Our data show that a policy of resite on clinical indication would mean that one of every two patients would need a single cannula to receive treatment, whereas a 3-day change policy would result in only one in five patients having this scenario, with the rest requiring multiple cannulations and therefore additional pain and inconvenience. 
The results are consistent with the findings of recent RCTs in both hospitals and the community that have found no benefit in routinely resiting IVDs every 3 days [15-17]. In these studies, many cannulae were inserted by an expert IVD team [15-17], which may have minimised complications in both groups [19]. Our study confirms and extends these findings into the general medical/surgical setting without an IV team where IVDs were inserted by a variety of nursing and medical staff. Data from this study were included in a 2010 Cochrane Collaboration systematic review and meta-analysis on the topic [18]. This review included six trials (n = 3,455) and reported no clinically important or statistically significant difference in catheter-related bloodstream infection or phlebitis between IVDs that were routinely resited (at 48-96 hours) or resited on clinical indication, yet there were significantly lower costs in the group resited on clinical indication [18]. The belief that routine resite of IVDs will prevent complications appears to stem from early observational studies that noted longer-dwelling IVDs had more complications than shorter-dwelling IVDs [9,10]. This is intuitively true, given, for example, that an IVD in situ for 6 days has twice the time exposure of risk than an IVD in situ for 3 days. However, this does not prove that two sequentially inserted IVDs (in the same patient), both used for 3 days, have a combined lower risk over time than the 6-day dwell device. Indeed, this and other recent trials strongly suggest that the risk for the patient over the 6-day period is similar. Well-designed cohort studies with modern catheter materials suggest that the daily risk of phlebitis is relatively stable after the first 24 or 48 hours [3,21,28-31]. The peak in phlebitis between 24 and 48 hours is likely associated with the time taken by the body to mount a biological response after the instigation of therapy; those most likely to develop phlebitis will do so at this time. The results support the extension of the use of peripheral IVDs beyond the 72-96 hours currently recommended by the CDC [7]. There is incongruity in the CDC recommendations; they recommend not to routinely resite IVDs in children or in those with limited venous access. If it is safe in these populations, it is unclear why it would be necessary to routinely resite adults or those with better veins. Higher-risk cannulae such as central venous devices are no longer recommended by the CDC for routine replacement, because trials showed this was not of benefit [7]. Our study also confirms that the CDC guidelines are not always complied with; one fifth of IVDs in the routine change group were not replaced by this time. However, the per protocol analysis showed that the intervention remained ineffective even for those who truly had their IVDs resited every 3 days. Limitations of the study included a 9% higher frequency of IV antibiotics and concurrent infection in the routine resite group. This may have put the group at higher risk due to vein irritation, or conversely it protected against bacterial entry. Neither variable was significant in the multivariable model. The unblinded study design was unavoidable, but also a limitation. Our use of clear outcome measures, a full-time research nurse and laboratory staff blinded to culture assessments should have reduced the risk for potential bias. Resource constraints prematurely ended recruitment, thus reducing the anticipated power of the study from 90% to 80%. 
Routine IVD resite involves pain for patients, staff procedural time, equipment costs and environmental waste. Contemporary evidence suggests the current policy for routine resite by 72-96 hours is ineffective and should be replaced with a 'resite on clinical indication' policy. It remains imperative that clinical staff monitor IVDs closely and that a daily review of the need for continuing therapy be made, with cessation as soon as possible; the only no-risk IVD is no IVD. Of the 4.3 million acute hospital admissions in Australia each year (excluding day cases), over half have IV treatment [15,32]. Conservatively, if even 2 million courses of IV therapy were managed with clinically indicated rather than routine resite, this would save the unnecessary insertion of approximately 660,000 IVDs and free 280,000 hours of staff insertion time. Assuming our costs are similar to those in other Australian hospitals, a change to resite on clinical indication would save approximately AUD$24 million nationally each year.\n\nBODY.CONCLUSIONS:\nAlthough larger, multisite trials are required, evidence to date suggests that routine resite of peripheral IVDs increases patient discomfort and healthcare costs, but does not reduce IVD complications as has traditionally been thought.\n\nBODY.COMPETING INTERESTS:\nThe authors declare that they have no competing interests.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nCMR designed the study, applied for funding and drafted the manuscript. DM participated in study design and manuscript preparation, and managed the study. JM recruited patients, ensured protocol integrity and collected data. MRM undertook statistical analyses and assisted with drafting the manuscript. All authors read and approved the final manuscript.\n\nBODY.AUTHORS' INFORMATION:\nCMR, PhD is Professor, Research Centre for Clinical and Community Practice Innovation, Griffith University, Australia. DM holds a Masters by Research and is Senior Lecturer, University of Tasmania, Australia. JM is an RN and Nurse Unit Manager, Calvary Tasmania, Australia. MRM is a PhD qualified Biostatistician, Gippsland Medical School, Monash University, Australia.\n\nBODY.PRE-PUBLICATION HISTORY:\nThe pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1741-7015/8/53/prepub\n\n**Question:** Compared to routine replacement what was the result of clinically indicated on hospital costs per patient for the course of IV therapy?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decreased
|
246
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Short-term comparative study of high frequency chest wall oscillation and European airway clearance techniques in patients with cystic fibrosis\n\n ABSTRACT.BACKGROUND:\nHigh frequency chest wall oscillation (HFCWO) is standard treatment for airway clearance in the USA and has recently been introduced in the UK and Europe. There is little published research comparing HFCWO with airway clearance techniques (ACTs) frequently used in the UK and Europe. The aim of this study was to compare the short-term effects of HFCWO with usual ACTs in patients with cystic fibrosis hospitalised with an infective pulmonary exacerbation.\n\nABSTRACT.METHODS:\nA 4-day randomised crossover design was used. Patients received either HFCWO on days 1 and 3 and usual ACTs on days 2 and 4 or vice versa. Wet weight of sputum, spirometry and oxygen saturation were measured. Perceived efficacy, comfort, incidence of urinary leakage and preference were assessed. Data were analysed by mixed model analysis.\n\nABSTRACT.RESULTS:\n29 patients (72% male) of mean (SD) age 29.4 (8.4) years and mean (SD) forced expiratory volume in 1 s (FEV1) percentage predicted (FEV1%) 38 (16.7) completed the study. Significantly more sputum was expectorated during a single treatment session and over a 24 h period (mean difference 4.4 g and 6.9 g, respectively) with usual ACTs than with HFCWO (p<0.001). No statistically significant change in FEV1% or oxygen saturation was observed after either HFCWO or usual ACTs compared with baseline. 17 patients (55%) expressed a preference for their usual ACT.\n\nABSTRACT.CONCLUSIONS:\nDuring both a finite treatment period and over 24 h, less sputum was cleared using HFCWO than usual ACT. HFCWO does not appear to cause any adverse physiological effects and may influence adherence.\n\nBODY.INTRODUCTION:\nThe last four decades have seen improved survival in patients with cystic fibrosis (CF). The latest figures estimate the median life expectancy of individuals with CF born in 2000–3 to be approximately 40 years.1 Respiratory failure is the major cause of morbidity and mortality in those with CF.2 Accumulation of secretions in the CF airway precipitates chronic infection, causing a progressive deterioration in lung function and eventually respiratory failure and death.3 Airway clearance techniques (ACTs) are an essential component of the management of patients with CF and are considered standard care.4 There is a range of ACTs which augment the normal mucus clearance mechanisms of the lung to facilitate expectoration. These techniques generally aim to promote secretion clearance by altering airflow and mucus viscosity. Typically, high frequency chest wall oscillation (HFCWO) produces compression of the chest wall via an inflatable jacket linked to an air pulse generator. 
The generator delivers an intermittent flow of air into the jacket which rapidly compresses and releases the chest wall at a variety of frequencies. Consequently, an oscillation in airflow within the airways is achieved. HFCWO has been shown to enhance central and peripheral mucus clearance.5 A number of underlying mechanisms have been hypothesised including increased airflow-mucus interaction causing a reduction in viscoelasticity, production of an expiratory airflow bias which promotes a cephalad movement of mucus and the enhancement and stimulation of ciliary activity.6 7 Published short-term evidence has demonstrated increased sputum clearance and improved pulmonary function with HFCWO compared with no treatment.8 This and other studies have also compared HFCWO with alternative ACTs. Some have demonstrated increased sputum clearance with HFCWO compared with postural drainage and percussion (PD&P).9–11 Others found no significant difference in sputum clearance between HFCWO and PD&P,8 12 13 positive expiratory pressure (PEP),8 high frequency oral oscillation13 or intrapulmonary percussive ventilation.14 Furthermore, some of these studies found no differences in efficacy related to pulmonary function between HFCWO and PD&P,8 12 13 PEP,8 15 oscillating positive expiratory pressure (Flutter, manufactured by Axcan Scandipharm Inc, Birmingham, AL, USA)16 and high frequency oral oscillation.13 One study reported an improved outcome in forced expiratory volume in 1 s (FEV1) in the longer term using HFCWO compared with PD&P.17 Few trials have compared HFCWO with alternative ACTs which are commonly used in the UK and Europe by individuals with CF. Phillips et al18 compared HFCWO (using the Hayek Cuirass) with the active cycle of breathing techniques (ACBT) in hospitalised paediatric patients. Significantly more sputum was cleared with the ACBT. The authors concluded that HFCWO was not an effective airway clearance treatment for children with CF. It is difficult to compare this study directly with those above, as the Hayek Cuirass machine has a different operating mode to the inflatable vest system. A series of Cochrane systematic reviews have found that no one ACT is superior in terms of respiratory function and efficacy.19 Of a number of published studies on HFCWO, only two were deemed of sufficient quality to be included in these randomised controlled trial systematic reviews.8 12 HFCWO is widely used in the USA where it is considered standard care in CF.20 It has recently been introduced to the UK and Europe, where the mainstay of care for airway clearance in CF is the ACBT, autogenic drainage (AD) and other airway clearance regimens using small devices.4 In the USA these techniques and devices tend to be considered adjuncts, with PD&P and HFCWO remaining the most common ACTs.20 While there is a body of evidence which equates HFCWO and PD&P, there is a need for further trials to compare HFCWO with alternative ACTs to provide a more relevant evidence base for HFCWO in the UK and Europe. The aim of the present study was to compare the short-term effects of HFCWO with patients' usual ACTs in those with CF admitted to hospital with an acute exacerbation of pulmonary infection. The hypothesis was that HFCWO was superior to patients' usual ACTs.\n\nBODY.METHODS.STUDY PARTICIPANTS:\nAll patients admitted to hospital who met the entry criteria were invited to participate in the study. 
The inclusion criteria were a diagnosis of CF (established by genotype or sweat sodium >70 mmol/l or sweat chloride of >60 mmol/l), FEV1 ≥ 20% predicted, age ≥16 years and an infective pulmonary exacerbation as defined by Thornton et al.21 Exclusion criteria were current severe haemoptysis, rib fractures, pregnancy, inability to give consent and those whose usual ACT was HFCWO. Informed written consent was obtained for all patients and the study was approved by Brompton Harefield and National Heart and Lung Institute research ethics committee.\n\nBODY.METHODS.STUDY DESIGN:\nA randomised crossover design was used to compare HFCWO with patients' usual ACTs, which allowed within-patient variability to be controlled. Over four consecutive days, patients received either HFCWO therapy on days 1 and 3 and their normal ACT on days 2 and 4 or vice versa. Allocation to HFCWO or usual ACT on day 1 was determined using a computer-generated randomisation table.\n\nBODY.METHODS.PROTOCOLS:\nPatients performed their usual ACT or received HFCWO two times daily at the same time. Before starting the study, each patient's usual ACT was reviewed by an experienced senior respiratory physiotherapist. In addition, patients were familiarised with HFCWO (The Vest). This involved the patients using The Vest for a trial period the day before the start of the study, during which time they were given the opportunity to experience all three protocol frequencies at a variety of pressures. Each airway clearance treatment session lasted 30 min and was supervised by the same physiotherapist to ensure optimisation and standardisation of usual ACT and HFCWO performance. All nebulised and inhaled medications were taken before each treatment session in accordance with the patients' individual regimens.\n\nBODY.METHODS.HIGH FREQUENCY CHEST WALL OSCILLATION:\nThe following regimen was identified as current best practice following an in-depth review of the literature and discussion with clinical experts in the USA. Using The Vest Airway Clearance System Model 4 (Hill-Rom UK Ltd, Leicestershire, UK), each patient was fitted with an appropriately-sized, full torso, inflatable, disposable vest connected to the air pulse generator via two flexible tubes. Patients remained in an upright sitting position throughout the 30 min treatment session. HFCWO was applied for 8 min at each of three frequencies in sequence (10, 13 and 15 Hz) with each frequency followed by a 2 min rest period. The pulse pressure was set according to individual patient's reported comfort at all three frequency settings. During both the HFCWO and rest periods, patients were instructed to huff or cough as they felt necessary in order to expectorate loosened bronchial secretions.\n\nBODY.METHODS.USUAL ACT:\nUsual ACTs were in accordance with the guidelines of the International Physiotherapy Group for Cystic Fibrosis.22 Patients performed their usual ACT for 30 min, and for patients practising an assisted ACT, the physiotherapist provided percussion. Patients were allowed to perform combined ACTs where this was their usual practice. This reflected current international practice more accurately and recommendations that ACTs be adapted on an individual basis.23\n\nBODY.METHODS.OUTCOME MEASURES:\nThe primary outcome measure was wet weight of sputum expectorated during a treatment session. Patients were instructed to expectorate all sputum into a preweighed pot during and for 30 min following each treatment session. 
They were also instructed to collect sputum expectorated at all other times during each 24 h period. All sputum collected was weighed immediately following collection on weighing scales with an accuracy of 0.01 g (BL310; Sartorious UK Ltd, Epsom, UK). FEV1 was measured using a hand-held spirometer (2120; Vitalograph Ltd, Buckingham, UK) in accordance with internationally agreed standards.24 Measurements were taken immediately before and after a 30 min period following each treatment session. Data were analysed using Spirotrac IV Version 4.30 software (Vitalograph Ltd). Pulsed arterial oxygen saturation (SpO2) was measured transcutaneously at rest, for 5 min immediately before, 30 min during and 30 min immediately following each session. SpO2 was measured with a fingertip pulse oximeter (Konica-Minolta Pulsox-300i; Stowood Scientific Instruments, Oxford, UK). The data were analysed using Download 2001 Version 2.8.0 software (Stowood Scientific Instruments Ltd). The perceived efficacy and comfort of each ACT and the incidence of urinary leakage during treatment were measured using 10 cm visual analogue scales (VAS). Each day, after the last treatment session, patients completed three 10 cm VAS with reference to the ACT used that day. On the VAS used, 0 represented not at all effective/comfortable or no urinary leakage and 10 represented extremely effective/comfortable or a lot of urinary leakage. On the fourth and final day, participants were also asked to indicate which ACT they would prefer. An independent observer, blind to the daily method of airway clearance used, performed the spirometry, weighed the sputum samples and collected the 10 cm VAS throughout the study.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nA sample size calculation determined the number of patients required to test for superiority of HFCWO. This was based on a difference of 4 g of sputum between the usual ACT and HFCWO during a single treatment session. A square-root of within SEM of 4 g at the 5% significance level would require 24 patients to achieve 90% power. Data are presented as mean (SD), median (IQR) or n (%) as appropriate. Continuous variables were analysed using a mixed-effects linear regression model. This was to allow the results to be adjusted for a number of factors which are inherent to the design of a crossover trial. The order of treatment randomisation and the day and time of treatment were all entered into the model and their effect on the outcomes was tested. For this trial, the results were also adjusted for the method and position of treatment in the ACT session. In these models, patients were entered as random effects since it was not of interest to quantify the differences between individual patients, but it was important to account for the repeated measurements on each patient. The estimates of the fixed effect of ACT versus HFCWO are presented as mean (95% CI). A p value of 0.05 was taken to be statistically significant. All analyses were conducted using Stata 9.2 (StataCorp).\n\nBODY.RESULTS.PARTICIPANTS:\nFifty patients were invited to participate in the study, 20 declined, 29 patients completed the study and 1 patient was withdrawn due to a hypoglycaemic episode. Table 1 shows demographic and baseline characteristics of the patients who completed the study. 
Table 1 Patient demographics and baseline characteristics (n=29) Age (years) 29.4 (8.4) Male (%) 21 (72%) Height (cm) 171 (9) Weight (kg) 60 (11) BMI (kg/m 2 ) 20.4 (2.6) FEV 1 (l) 1.46 (0.72) FEV 1 % predicted 38 (16.7) SpO 2 (%) 94.3 (2.1) Data are presented as mean (SD) or n (%) as appropriate. BMI, body mass index; FEV 1 , forced expiratory volume in 1 s; SpO 2 , pulsed arterial oxygen saturation. Twenty-nine patients were treated with intravenous antibiotics as part of their medical management. All participants received two treatment sessions on each study day and all treatment sessions were 30 min in duration. The mean (SD) length of stay for patients was 14 (5) days and the mean day of entry to the study was day 8 (3) days.\n\nBODY.RESULTS.USUAL ACT:\nUsual ACTs included the ACBT with modified PD&P (41%; n=12) and with modified PD alone (7%; n=2), AD in sitting (28%; n=8) and with modified PD (7%; n=2), PEP (7%; n=2) and Flutter (10%; n=3).\n\nBODY.RESULTS.SPUTUM WEIGHT:\nThe wet weight of sputum expectorated with usual ACT compared with HFCWO is shown in table 2. The mean weight of sputum expectorated during a single treatment session and over a 24 h period was significantly greater with usual ACT than with HFCWO. The mean difference in wet weight of sputum expectorated during a treatment session was 4.4 g (p<0.001) and the mean difference in wet weight of sputum expectorated over a 24 h period was 6.9 g (p<0.001). These findings were not affected by order, time, day or position of treatment. Table 2 Wet weight of sputum expectorated: HFCWO compared with usual ACT Period of sputum collection Expectorated sputum wet weight (g) Usual ACT HFCWO Mean difference p Value Mean (SD) Median (IQR) Mean (SD) Median (IQR) Single treatment session 9.1 (7.9) 7.2 (3.0–14.2) 4.6 (4.1) 3.4 (1.5–6.7) 4.4 (3.5 to 5.4) <0.001 24 h (excluding treatment) * 22.4 (26.8) 12.9 (4.0–29.9) 24.9 (25.8) 15.3 (3.9–40.1) −1.5 (−4.6 to 1.6) 0.352 24 h (including treatment) 39.8 (36.3) 25.5 (14.0–57.1) 34.3 (30.7) 26.3 (12.1–46.0) 6.9 (3.1 to 10.8) <0.001 Data are presented as mean (SD) or (95% CI) or median (IQR) as appropriate. Data are adjusted for randomisation, day, time and position of treatment using a mixed effects linear regression model. * Of 116 24-h sputum samples collected, two were discarded as they were incomplete. ACT, airway clearance technique; HFCWO, high frequency chest wall oscillation. No statistically significant difference was observed in the amount of sputum expectorated when using HFCWO or usual ACT between treatments in a 24 h period.\n\nBODY.RESULTS.PHYSIOLOGICAL MEASURES:\nFEV1 and SpO2 measured before, during and after usual ACT and HFCWO treatment sessions are shown in table 3. Table 3 Forced expiratory volume in 1 s and pulsed arterial oxygen saturation at baseline, during and after treatment with usual airway clearance technique and HFCWO Usual ACT HFCWO Baseline During treatment 30 min after treatment Baseline During treatment 30 min after treatment FEV 1 % predicted 39.1 (16.9) NA 38.9 (17.1) 38.9 (16.8) NA 39.2 (16.7) SpO 2 (%) 94.4 (2.0) 94.4 (1.9) 93.9 (1.6) 94.5 (1.8) 95.0 (1.7) 94.3 (1.7) Data are presented as mean (SD). ACT, airway clearance technique; FEV 1 , forced expiratory volume in 1 s; HFCWO, high frequency chest wall oscillation; NA, not applicable; SpO 2 , pulsed arterial oxygen saturation. \n\nBODY.RESULTS.COMFORT, EFFICACY AND PREFERENCE:\nThe VAS scores for comfort, efficacy and urinary leakage during usual ACT compared with HFCWO are shown in table 4. 
No significant differences were observed in VAS scores for comfort or urinary leakage between HFCWO and usual ACT. Patients scored the efficacy of their usual ACT significantly higher than for HFCWO (mean difference 14 mm; p=0.002). This was not affected by the order or day of treatment. Of those patients who completed the study, 17 (55%) expressed a preference for their usual ACT over HFCWO. Preference was not predicted by the amount of sputum expectorated. Table 4 Comfort, efficacy and urinary leakage: HFCWO compared with usual ACT Self reported measure Visual analogue scale score (mm) Usual ACT HFCWO Mean difference p Value Comfort 69 (23) 70 (22) −1 (−9 to 7) 0.784 Efficacy 68 (21) 54 (26) 14 (6 to 23) 0.002 Urinary leakage 0 (1) 0 (1) −0.05 (−0.3 to 0.4) 0.791 Data are presented as mean (SD) or (95% CI) as appropriate. Data are adjusted for randomisation, day of treatment and time of treatment using a mixed effects linear regression model. ACT, airway clearance technique; HFCWO, high frequency chest wall oscillation. \n\nBODY.DISCUSSION:\nThere have been few published comparisons between HFCWO using a vest system with the ACTs of the ACBT and AD. This short-term study, carried out in individuals with CF admitted to hospital with an acute infective pulmonary exacerbation, showed that significantly more sputum was expectorated during a single treatment session and over a 24 h period using the patient's usual ACT than with HFCWO. In addition, slightly less sputum was expectorated at all other times (excluding treatment sessions) on usual ACT days compared with HFCWO, but this trend was not statistically significant. These findings were independent of order, time or day and position of treatment. Neither HFCWO nor any of the usual ACTs were associated with any adverse clinical events. A possible factor contributing to the difference in sputum clearance between HFCWO and usual ACT may have been the number and frequency of forced expiratory manoeuvres (FEMs) and the more gentle expiratory manoeuvres of the AD breath that were performed with the usual ACTs. Some studies have standardised the number of coughs and FEMs that patients performed; however, at the time of designing the protocol, the aim was to compare the regimens as currently practised internationally and the frequency of coughs and FEMs was neither standardised nor counted. In retrospect, it would have been of value to have counted the number of coughs and FEMs undertaken during each regimen, but it had not been anticipated that any differences between HFCWO and usual ACTs may be a consequence of the number of FEMs or AD breaths. Theoretically, during the three 8 min periods of HFCWO, fewer FEMs, coughs or AD breaths would be undertaken than during an equivalent period of the ACBT, AD, PEP or Flutter (all of which inherently include FEMs or AD breaths at regular intervals). This difference was supported by observations of the investigators. However, manual cough counts are subject to observer error. Objective cough monitoring using the Leicester Cough Monitor has only recently been validated and should be considered for use in further studies.25 Components of patient satisfaction include efficacy, comfort and convenience. Some studies have formally evaluated patient satisfaction and compliance. 
One study reported that 50% of subjects chose HFCWO compared with the Flutter, and efficacy was the most frequently cited reason for this choice.16 A later study reported that HFCWO was not preferred over PD&P and intrapulmonary percussive ventilation; furthermore, there was no significant correlation between treatment preference and sputum weight.14 The current study found that patients perceived the efficacy of HFCWO to be statistically significantly less than that of their usual ACT. However, nearly half (45%) of patients expressed a preference for HFCWO. Preference may have been affected by the novelty of a new treatment and it is unknown whether this would continue in the long term. There is no one recommended protocol for the application of HFCWO in the literature. Published studies describe differing numbers and duration of frequencies, length of treatment and airway clearance. Frequencies of 10, 13 and 15 Hz were chosen as it has been reported that maximum mucus transport occurs between 11 and 15 Hz with a peak at 13 Hz.5 6 In addition, the highest oscillated tidal volume flow (peak airflow) occurred between 10 and 15 Hz in patients with CF.26 More recent research recommends an individual \"tuning\" method to identify optimum treatment frequencies. These have been shown to vary among individuals and the oscillation waveform,27 but it is unknown whether \"tuning\" increases efficacy. It is possible that a practice effect could have occurred as all patients were new to HFCWO. However, the protocol used in this study did not require the patient to perform any newly learnt physical technique. In addition, the statistical analysis ensured data were adjusted for day of treatment and found no effect. Alternatively, patients' familiarity with their usual ACT may also have had an effect on outcomes. This study was powered to detect a difference of 4 g of sputum expectorated during a single treatment session. Other studies have been based on a difference of 3–3.5 g, which is generally accepted as a clinically important difference.28 29 Wet weight sputum was felt to be an appropriate primary outcome measure in this short-term study in an acute environment. Previous work has found wet weight to be proportional to dry weight sputum.28 30 Emerging non-invasive means of measuring airway clearance may be more sensitive indicators in the future (eg, lung clearance index and electrical impedance tomography). Considering the cost benefit of HFCWO compared with other ACTs and the differing healthcare systems in the USA and the UK, it is unlikely that HFCWO will become the first choice ACT for most individuals in the UK. Further work needs to be undertaken to identify the place of HFCWO in Europe. Patient preference for a treatment regimen may positively influence adherence to treatment in the short term, and nearly half the patients who participated in this study preferred HFCWO to their usual ACT. HFCWO is a safe treatment that facilitates airway clearance in CF but, when compared with patients' usual ACTs, HFCWO led to the clearance of significantly less sputum during a single treatment session and over a 24 h period.\n\n**Question:** Compared to Usual airway clearance techniques what was the result of High frequency chest wall oscillation on Sputum expectoration over a 24-hour period?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decreased
|
284
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Randomised phase II evaluation of irinotecan plus high-dose 5-fluorouracil and leucovorin (ILF) \n\n ABSTRACT:\nAn open-label randomised comparison of efficacy and tolerability of irinotecan plus high-dose 5-fluorouracil (5-FU) and leucovorin (LV) (ILF) with etoposide plus 5-FU/LV (ELF) in patients with untreated metastatic or locally advanced gastric cancer. One cycle of ILF comprised six once-weekly infusions of irinotecan 80 mg m−2, LV 500 mg m−2, 24-h 5-FU 2000 mg m−2, and ELF comprised three once-daily doses of etoposide 120 mg m−2, LV 300 mg m−2, 5-FU 500 mg m−2. In all, 56 patients received ILF and 58 ELF. Median age was 62 years, Karnofsky performance 90%, and disease status was comparable for both arms. The objective clinical response rates after 14 weeks treatment (primary end point) were 30% for ILF and 17% for ELF (risk ratio (RR) 0.57, 95% confidence interval (CI) 0.29–1.13, P=0.0766). Overall response rates over the entire treatment period for ILF and ELF were 43 and 24%, respectively (RR 0.56, 95% CI 0.33–0.97; P=0.0467). For ILF and ELF, respectively, median progression-free survival was 4.5 vs 2.3 months, time to treatment failure was 3.6 vs 2.2 months (P=0.4542), and overall survival was 10.8 vs 8.3 months (P=0.2818). Both regimens were well tolerated, the main grade 3/4 toxicities being diarrhoea (18%, ILF) and neutropenia (57%, ELF). The data from this randomised phase II study indicate that ILF provides a better response rate than ELF, and that ILF should be investigated further for the treatment of metastatic gastric cancer.\n\nBODY:\nGastric cancer is the fourth most common cancer in Europe and the third leading cause of cancer mortality (Bray et al, 2002). Although gastric cancer has declined over the past 50 years, the incidence of tumours at the gastro-oesophageal junction has increased (Blot et al, 1991). The use of chemotherapy for the management of patients with advanced gastric cancer, who have limited treatment options (Hohenberger and Gretschel, 2003), has only become widely acceptable over the last 20 years (Glimelius et al, 1997; Murad et al, 1993; Pyrhonen et al, 1995). 5-Fluorouracil (5-FU), usually in combination with leucovorin (LV, also referred to as folinic acid), forms the basis of most chemotherapy regimens used for the treatment of gastric cancer. A randomised phase III trial compared combinations of 5-FU with other active drugs in advanced gastric cancer: etoposide, LV and 5-FU (ELF) vs infusional 5-FU plus cisplatin (FUP) vs 5-FU, doxorubicin and methotrexate (FAMTX) (Vanhoefer et al, 2000). The overall response rates (ORRs) ranged from 9% (ELF) to 20% (FUP) and median survival times were between 6.7 months (FAMTX) and 7.2 months (both ELF and FUP). The observed differences were not statistically significant and there is still no definitive regimen for the treatment of gastric cancer. 
The combination of epirubicin, cisplatin and continuous infusion 5-FU (ECF) has been proposed as a standard first-line therapy for gastric cancer as a consequence of its significantly improved response rate (46%) and survival (8.7 months) when compared with FAMTX (Waters et al, 1999). More recently, ELF has been shown to provide better disease control (complete response (CR)+partial response (PR)+stable disease (SD)) for patients with proximal rather than distal tumours (85 vs 48%, P=0.04) (Schulze-Bergkamen et al, 2002). Tolerability, toxicity and ease of administration have become major determinants for selecting an appropriate therapy and ELF has emerged as a convenient, well-tolerated regimen that can be administered on an outpatient basis (Vanhoefer et al, 2000; Schulze-Bergkamen et al, 2002). Irinotecan (CPT-11, Camptosar; Pfizer Oncology, New York, USA) inhibits topoisomerase I thereby disrupting DNA replication and cell division within tumour cells. Response rates between 20 and 23% have been reported for irinotecan monotherapy in untreated gastric cancer (Futatsuki et al, 1994; Kohne et al, 2003). In patients who had failed previous therapy, irinotecan (180 mg m−2) combined with 5-FU (400 mg m−2, bolus) and LV (125 mg m−2) followed by 5-FU infusion (1200 mg m−2 over 48 h) yielded a response rate of 29%, while a further 34% of patients achieved SD (Assersohn et al, 2004). Irinotecan with bolus-LV/5-FU in the first-line treatment of gastric cancer provided a response rate of 22% (Blanke et al, 2001). However, this regimen (irinotecan 125 mg m−2, LV 20 mg m−2 plus 5-FU 500 mg m−2, all given weekly for 4 weeks followed by a 2-week rest) was associated with a high incidence of severe diarrhoea (28%) and neutropenia (36%) infection leading to substantial dose modifications (Blanke et al, 2001). By comparison, the combination of irinotecan with continuous rather than bolus infusions of LV/5-FU exhibited a lower incidence of grade 3 and 4 toxicities in colorectal cancer patients (Douillard et al, 2000; Saltz et al, 2000; Bouzid et al, 2003). Therefore, we have investigated a weekly dose of irinotecan (80 mg m−2) in combination with LV (500 mg m−2) and continuous 5-FU (2000 mg m−2 over 24 h) according to the AIO (Arbeitsgemeinschaft Internistische Onkologie) regimen (i.e. ILF) in gastric cancer patients. In a previous phase I study of ILF in the first- and second-line treatment of gastric cancer, we observed a response rate of 20% with a further 36% of patients reporting SD (Moehler et al, 2003). Importantly, toxicity was sufficiently manageable to allow outpatient-based treatment. Therefore, we initiated the present randomised, controlled, phase II study to compare the efficacy and safety of ILF with ELF in the first-line treatment of metastatic gastric cancer.\n\nBODY.PATIENTS AND METHODS.PATIENTS:\nEligible patients had untreated histologically proven gastric adenocarcinoma, or adenocarcinoma of the oesophagogastric junction with measurable metastatic disease and/or locally recurrent nodal involvement, were aged between 18 and 75 years with a Karnofsky performance score (KPS) ⩾60 and a life expectancy >12 weeks. Patients were required to have adequate haematological (neutrophils ⩾2.0 × 109 l−1, platelets ⩾150 × 109 l−1; haemoglobin ⩾10g dl−1), hepatic (total bilirubin ⩽1.25 × upper normal limit (UNL); aspartate (AST) and alanine (ALT) aminotransferases ⩽3 × UNL) and renal function (creatinine <1.25 × UNL). Patients with previous cancer therapies were excluded from the study. 
All patients provided signed and dated consent before entering the trial. The study was conducted in accordance with the principles of the Declaration of Helsinki and Good Clinical Practice guidelines and the protocol was initially approved by the Ethics committee of Aerztekammer Rheinland-Pfalz and later by all Ethics committees responsible for participating centres.\n\nBODY.PATIENTS AND METHODS.STUDY DESIGN AND RANDOMISATION:\nThis was an open-label, multicentre, phase II randomised trial with two treatment arms. Patients were randomly assigned and stratified according to centre, peritoneal involvement (yes/no) and prior gastrectomy (yes/no). The randomisation process was centralised and performed by the Coordination Centre for Clinical Trials (KKS), Mainz, Germany.\n\nBODY.PATIENTS AND METHODS.ADMINISTRATION OF STUDY DRUGS AND DOSE ADJUSTMENT:\nPatients assigned to ILF (Arm A) received irinotecan 80 mg m−2 intravenously (i.v.) over 60–90 min followed by LV 500 mg m−2 i.v. over 60 min and then 5-FU 2000 mg m−2 i.v. over 24 h, on day 1. Each cycle comprised six once-weekly treatments followed by a 13-day rest period. Systemic prophylactic atropine (0.25 mg) injections for irinotecan-related acute cholinergic symptoms were allowed for the first cycle but not recommended. Prophylactic treatment for delayed diarrhoea was not permitted. However, patients were carefully informed of the potential risk of delayed diarrhoea and neutropenia and the need for early intervention with loperamide, metoclopramide, antibiotics, or hospitalisation and parenteral rehydration in case of refractory diarrhoea (>48 h). Antiemetic treatment was performed using metoclopramide or HT-3 antagonists in a sequential manner. The prophylactic use of colony-stimulating factors was not permitted. Patients assigned to ELF (Arm B) received etoposide 120 mg m−2 i.v. over 60 min, LV 300 mg m−2 i.v. over 5–10 min and then 5-FU 500 mg m−2 bolus i.v. over 2–4 min, on day 1. Each cycle comprised three applications on consecutive days (1–3) followed by an 18-day rest. All study treatments were administered until disease progression, unacceptable toxicity or withdrawal of consent. In the event of toxicity (defined by the National Cancer Institute of Canada expanded common toxicity criteria; NCIC-CTC), treatment delays or dose reductions could be applied as follows. If at any time during a cycle there were moderate reductions in haematological function (neutrophil count 0.5–1.5 × 109 l−1, platelet count 25–75 × 109 l−1) or moderate diarrhoea or stomatitis (>grade 1), the next administration could be delayed for up to 2 weeks. If at any time haematological abnormalities were noted (neutrophils <0.5 × 109 l−1, neutrophils <1 × 109 l−1 with infection or fever, platelets <25 × 109 l−1) or if there were ⩾grade 3 or 4 diarrhoea or stomatitis, treatment had to be delayed until recovery to moderate levels (as described above) after which the following dose reductions were applied. For Arm A, 5-FU was reduced to 1600 mg m−2 and irinotecan to 65 mg m−2. For Arm B, in the case of haematological toxicity, etoposide had to be reduced to 100 mg m−2 and 5-FU to 400 mg m−2, and in the case of diarrhoea or stomatitis, 5-FU was reduced to 400 mg m−2. If a condition persisted despite dose reduction, or if a patient experienced myocardial infarction, treatment was terminated. In the case of hand–foot syndrome, the dose of 5-FU was to be reduced by 20%. 
Delayed diarrhoea was treated immediately with loperamide and rehydration and, if associated with severe neutropenia, a broad-spectrum antibiotic. Hospitalisation with i.v. rehydration was required for grade 4 or persistent (>48 h) diarrhoea, concomitant vomiting, fever, or KPS <60%.\n\nBODY.PATIENTS AND METHODS.STUDY EVALUATIONS:\nAt baseline up to five measurable lesions per organ and 10 lesions in total were to be identified as target lesions, measured using computed tomography (CT), and recorded according to the RECIST system (Response Evaluation Criteria In Solid Tumours; Therasse et al, 2000). The sum of the longest diameters for all target lesions was used as a reference for determining objective tumour response. Tumour responses were evaluated at week 7, week 14 and then every two cycles for patients receiving ILF or every four cycles for patients receiving ELF. Responses were determined according to RECIST as follows: complete response was defined as the disappearance of all target and nontarget lesions with no new lesions and confirmed by two observations at least 4 weeks apart; PR was defined as a reduction of 30% or more in the sums of the longest diameters of all measurable lesions relative to baseline with no new lesions; no change (NC) was defined as neither sufficient shrinkage to qualify for PR nor sufficient increase to qualify for progressive disease (PD) with no new lesions; and PD was defined as ⩾20% increase in the sum of the longest diameters, the occurrence of nontarget lesions (e.g. pleural effusion or ascites) or the appearance of brain metastases independently of performance at sites outside the brain. Safety and tolerability were assessed by regular clinical examinations and assessments of adverse events (weekly, at the end of treatment and at every 3 months of follow-up), disease symptoms, KPS, haematological and biochemical parameters.\n\nBODY.PATIENTS AND METHODS.STATISTICAL ANALYSIS:\nThe primary end point was objective clinical response (CR+PR) based on an interim analysis following 14 weeks of treatment. The one-sided exact Fisher's test was used to compare the treatment arms at the significance level α=5%. The analyses were performed on an intention-to-treat basis including all patients who were treated in the study. The secondary end points were ORR (for the entire treatment period), time to progression, tumour growth control, time to treatment failure (including progression, death or withdrawal) and survival. Time to event data were described by Kaplan–Meier estimates and treatment groups were compared by log-rank test. Time to event data were further evaluated by appropriate proportional Cox's models and results were summarised by hazard ratio point and 95% confidence interval (CI) estimates, and P-values of Wald χ2 test. Binary data were described by risk ratio (RR) point and 95% CI estimates and treatment groups were compared by exact Fisher's test. Binary data were further evaluated by appropriate logistic regression models and were summarised by odds ratio point and 95% CI estimates and P-values of Wald χ2 test. If not specified otherwise, P-values are presented from two-sided tests and two-sided 95% CI are presented. All analyses were performed using SAS version 6.12.\n\nBODY.RESULTS.PATIENT CHARACTERISTICS:\nIn all, 120 patients from 17 centres in Germany were randomised into the study between November 2000 and April 2003. 
Two patients from Arm A and four from Arm B withdrew without receiving study treatment; therefore, the intention-to-treat population contained 114 patients (56 received ILF and 58 received ELF). The baseline characteristics were well balanced between the two treatment groups (Table 1). The median age of patients was 62 years and the median KPS was 90%. In around one-third (31%) of patients, the primary tumour site was the oesophagogastric junction, 62% of patients had liver metastases and in 77% two or more organs were involved.\n\nBODY.RESULTS.RESPONSE RATES:\nThe objective clinical response rates following 14 weeks of treatment (primary end point) were 30% for ILF compared with 17% for ELF (RR 0.57, 95% CI 0.29–1.13, P=0.0766). The ORRs for the entire treatment period and including all assessments prior to discontinuation were 43% (24 of the 56) for ILF and 24% (14 of the 58) for ELF (Table 2). The increased response rate provided by ILF compared with ELF was statistically significant (RR=0.56; 95% CI=0.33–0.97; P=0.0467.) The tumour control rates (CR+PR+NC) were 63% (35 of the 56) and 41% (24 of the 58), respectively. Logistic regression analysis indicated that a baseline KPS ⩽80% reduced the likelihood of a response by 59% compared with patients whose KPS was greater than 80% (P=0.038) (Table 3). After adjustment for KPS, peritoneal involvement and surgery for primary tumour, the regression model also demonstrated that ILF was 138% more likely to provide a response when compared with the ELF regimen (Table 3, P=0.042).\n\nBODY.RESULTS.PROGRESSION-FREE SURVIVAL, TREATMENT FAILURE AND OVERALL SURVIVAL:\nAt the last data cutoff, the median follow-up was 9.4 months in Arm A and 5.8 months in Arm B. At this time, 96 of the 114 patients had died. Disease progression was the major cause of death and accounted for 79% of patients in both treatment groups. One patient from the ILF arm did not comply with the provided recommendations for the treatment of prolonged grade 3 diarrhoea and consequently died (i.e. toxic death). One patient in the ELF arm died from a cardiovascular event. Compared with ELF, the ILF regimen extended median progression-free survival, median time to treatment failure and median overall survival (Table 4). However, when the treatment groups were compared by log-rank test, there was no significant difference between the two treatments for any of these parameters (e.g. the Kaplan–Meier survival plot as shown in Figure 1). Investigational analyses found that the risk of progression was increased in patients with a primary tumour in the oesophagogastric junction and in those with metastatic involvement in two or more organs (Table 3). As would be expected, the risk of death was increased in patients with a low KPS, in those with two or more involved organs and in those with peritoneal involvement who received ELF (Table 3).\n\nBODY.RESULTS.SAFETY:\nThe median number of cycles administered in the study was two for ILF (Arm A) and three for ELF (Arm B) (Table 5). Although the median treatment duration period was over twice as long with ILF than with ELF, there were more dose administration delays (70%) and dose reductions (75%) with ILF than with ELF (52 and 45%, respectively). The main reason for discontinuing study treatment was disease progression; 54% of patients receiving ILF and 72% receiving ELF. 
Although only one patient in each treatment group withdrew because of treatment-related toxicity, five patients receiving ILF and three patients receiving ELF either withdrew consent or refused further treatment. The incidence of grade 3/4 haematological toxicities was low in both treatment groups with the exception of neutropenia, which was reported by 57% of patients receiving ELF (Table 6). There were more grade 3/4 gastrointestinal toxic events with ILF, notably diarrhoea, which was reported by 18% of patients compared with no reports with ELF. Grade 3/4 alopecia was reported by a significant proportion of patients receiving ELF (28%), but was only seen in 5% of those receiving ILF.\n\nBODY.DISCUSSION:\nAlthough chemotherapy regimens offer at best a slight, albeit statistically significant, improvement in survival for patients with gastric cancer, they are associated with a degree of toxicity that limits their value as a palliative treatment (Vanhoefer et al, 2000; Schoffski, 2002; Diaz-Rubio, 2004). The primary end point of clinical response at 14 weeks was selected so that a statistical comparison at a fixed time point could be made. However, as it is the convention in such studies, patients were treated until progression, and could respond to treatment at a later point. Therefore, the overall response and survival rates obtained from the entire dosing period provide a more clinically significant assessment of the efficacy of these regimens for discussion in relation to other trials in gastric cancer. The irinotecan-based combination provided again a greater ORR than that seen with the commonly used ELF regimen (43 vs 24%, respectively, P=0.0467). Overall response rates for ELF reported in previous studies range from 9 to 23% (Vanhoefer et al, 2000; Schulze-Bergkamen et al, 2002) and this compares well with the 24% response rate reported in this study. Accordingly, an ORR of nearly 50% for ILF, as seen in this study, is a substantial improvement and is in the range of previous reports of the use of this drug combination in this setting (Blanke et al, 2001; Moehler et al, 2003). The overall survival data in the present study also compare well with those from previous studies. The median overall survival with ELF has been reported at 7.2 and 8.0 months (Vanhoefer et al, 2000; Schulze-Bergkamen et al, 2002), which is similar to both the 8.3 months reported here and the data reported for irinotecan-based regimens in the second-line setting between 7.0 and 7.6 months (Moehler et al, 2003; Assersohn et al, 2004). By comparison, there was a nonsignificant trend for increased median survival with ILF in this study (10.8 months) and this compares well with data reported for more recent exploratory combinations such as capecitabine and docetaxel (10.5 months), and epirubicin, docetaxel and cisplatin (11.0 months) (Lee et al, 2004; Park et al, 2004). The same can be said of the progression-free survival period in the ILF group of 4.5 months, which compared well with the 4.1 and 5.2 months reported recently for docetaxel-based regimens (Lee et al, 2004; Park et al, 2004). In other randomised phase II studies, continuous 5-FU/LV infusion plus irinotecan has also provided promising efficacy (ORRs of 40–42%, median progression-free survival periods of 6.5–6.9 months and median overall survival periods of 10.7–11.3 months; Bouche et al, 2004; Pozzo et al, 2004). 
Consequently, large phase III studies are being considered to investigate irinotecan in combination with continuous 5-FU/LV infusion regimens. When patient histories, disease status and other factors were examined for their effects on clinical outcome, those patients who were in better general health (good performance status, low tumour burden) were more likely to achieve a response and less likely to have a progression event or die, regardless of the treatment arm to which they were randomised. Patients with peritoneal involvement at presentation have a generally poorer prognosis and as a group face a desperate need for improved treatment options (Rau et al, 1996). The data from this study demonstrated that these patients are less likely to suffer a fatal event if treated with ILF rather than ELF. This is potentially an important observation for the management of these difficult to treat patients (Blot et al, 1991; Bray et al, 2002). The extension of meaningful survival remains a major objective for oncologists who must therefore consider the impact of treatment-related toxicity. Overall, the occurrence of the toxicities in this study was consistent with the safety profiles of irinotecan, etoposide and 5-FU/LV. The ILF combination was well tolerated with a low and acceptable incidence of haematological toxicity. Gastrointestinal toxicity is a recognised side effect of ILF therapy (Douillard et al, 2000; Saltz et al, 2000), which can require hospitalisation and urgent medical intervention (Rothenberg et al, 2001). The incidence of grade 3 or 4 diarrhoea in the current study was comparable to the previous data in gastric cancer (Blanke et al, 2001; Moehler et al, 2003; Pozzo et al, 2004). With close monitoring of the patient, suitable medication and rehydration, most cases of diarrhoea can be managed effectively and do not present a significant obstacle to the clinical use of ILF. The toxicity observed in our study was lower than that reported by Douillard et al in the pivotal European first-line trial where patients with colorectal cancer received weekly irinotecan (80 mg m−2) plus an AIO-based regimen of 24-h high-dose 5-FU (2300 mg m−2) preceded by 2-h LV 500 mg m−2, and grade 3/4 diarrhoea was reported by 44% of patients (Douillard et al, 2000). The lower toxicity in our study might be due to the lower daily doses of 5-FU (2000 mg m−2 administered over 24 h). Work is ongoing to identify those patients who carry a specific genetic polymorphism in one of the main enzymes (UGT1A1) involved in the detoxification of irinotecan and are therefore more susceptible to the side effects of irinotecan (Mathijssen et al, 2001). Such work will improve the targeting of this useful therapy and may allow appropriate prescriptive dosing schedules on an individual basis. The present study concurs with similar phase II studies in that the combination of irinotecan with continuous LV/5-FU (ILF) represents a potentially valuable new treatment option for metastatic gastric cancer and requires further evaluation (Bouche et al, 2004; Pozzo et al, 2004).\n\n**Question:** Compared to Etoposide with 5-fluorouracil plus leucovorin (ELF) what was the result of Irinotecan with high-dose 5-fluorouracil plus leucovorin (ILF) on Progression-free survival?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
826
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Behavioural recovery after treatment for varicose veins\n\n ABSTRACT.BACKGROUND:\nThe aim of this study was to assess behavioural recovery from the patient's perspective as a prespecified secondary outcome in a multicentre parallel‐group randomized clinical trial comparing ultrasound‐guided foam sclerotherapy (UGFS), endovenous laser ablation (EVLA) and surgery for the treatment of primary varicose veins.\n\nABSTRACT.METHODS:\nParticipants were recruited from 11 UK sites as part of the CLASS trial, a randomized trial of UGFS, EVLA or surgery for varicose veins. Patients were followed up 6 weeks after treatment and asked to complete the Behavioural Recovery After treatment for Varicose Veins (BRAVVO) questionnaire. This is a 15‐item instrument that covers eight activity behaviours (tasks or actions an individual is capable of doing in an idealized situation) and seven participation behaviours (what the individual does in an everyday, real‐world situation) that were identified to be important from the patient's perspective.\n\nABSTRACT.RESULTS:\nA total of 798 participants were recruited. Both UGFS and EVLA resulted in a significantly quicker recovery compared with surgery for 13 of the 15 behaviours assessed. UGFS was superior to EVLA in terms of return to full‐time work (hazard ratio 1·43, 95 per cent c.i. 1·11 to 1·85), looking after children (1·45, 1·04 to 2·02) and walks of short (1·48, 1·19 to 1·84) and longer (1·32, 1·05 to 1·66) duration.\n\nABSTRACT.CONCLUSION:\nBoth UGFS and EVLA resulted in more rapid recovery than surgery, and UGFS was superior to EVLA for one‐quarter of the behaviours assessed. The BRAVVO questionnaire has the potential to provide important meaningful information to patients about their early recovery and what they may expect to be able to achieve after treatment.\n\nBODY.INTRODUCTION:\nMinimally invasive treatments for varicose veins such as ultrasound‐guided foam sclerotherapy (UGFS) and thermal ablation techniques have become widely used alternatives to surgery for the treatment of varicose veins. One of the advantages of these techniques is the reported quicker return to normal activities, particularly following UGFS1, 2, 3. However, it is unclear whether thermal ablation, in particular endovenous laser ablation (EVLA), is also associated with a clinically significant quicker return to normal activities compared with surgery; some studies4, 5 have shown an earlier return and others2, 6, 7, 8 no difference. Until recently, there was no standard means of assessing recovery from the patient's perspective. This led to the use of varying definitions such as return to 'normal activities', 'full activity', 'daily activity' or 'basic physical activities' and/or 'return to work' in previous studies. 
This lack of standardization led the authors to develop a 15‐item questionnaire to assess distinct aspects of normal activities that were identified as important by patients9 – the Behavioural Recovery After treatment for Varicose Veins (BRAVVO) questionnaire. This paper reports behavioural recovery results from a multicentre parallel‐group randomized clinical trial (CLASS, Comparison of LAser, Surgery and foam Sclerotherapy) that compared the clinical efficacy and cost‐effectiveness of three treatment modalities: UGFS, EVLA with delayed foam sclerotherapy to residual varicosities if required, and surgery. Behavioural recovery was one of the prespecified secondary outcomes of the CLASS trial. The clinical and cost‐effectiveness results have been reported elsewhere10, 11.\n\nBODY.METHODS:\nPatients were recruited from 11 centres in the UK between November 2008 and October 2012. This study (ISRCTN51995477) had research ethics committee and Medicines and Healthcare products Regulatory Authority approval. Eight centres randomized participants to one of three treatment options, and three centres offered only UGFS and surgery. Participants were randomized between the treatments with even allocation, using a minimization algorithm that included centre, age (less than 50 years, 50 years or more), sex, great saphenous vein (GSV) or small saphenous vein (SSV) reflux, and unilateral or bilateral disease. Inclusion criteria were: age over 18 years; primary unilateral or bilateral symptomatic varicose veins (Clinical Etiologic Anatomic Pathophysiological (CEAP) grade C2 or above); GSV and/or SSV involvement; and reflux exceeding 1 s on duplex ultrasonography. Exclusion criteria were: current deep vein thrombosis; acute superficial vein thrombosis; a GSV or SSV diameter smaller than 3 mm or larger than 15 mm; tortuous veins considered unsuitable for EVLA or stripping; and contraindications to UGFS or to general/regional anaesthesia that would be required for surgery.\n\nBODY.METHODS.TREATMENTS:\nThe treatments have been described in detail elsewhere9, 10. For UGFS, foam was produced using the Tessari technique12 using a ratio of 0·5 ml sodium tetradecyl sulphate to 1·5 ml air (3 per cent for GSV/SSV truncal veins, 1 per cent for varicosities; maximum 12 ml foam per session). EVLA of GSVs/SSVs was performed under local anaesthetic, and patients were offered UGFS to any residual varicosities at 6‐week follow‐up if required, with the exception of one centre that performed concurrent phlebectomies. Surgery in the form of proximal GSV/SSV ligation and stripping (all GSV) and concurrent phlebectomies was performed under general or regional anaesthetic as a day‐case procedure. Compression stockings were applied after all three treatments.\n\nBODY.METHODS.POST‐TREATMENT ACTIVITY:\nAll participants were given a study patient information leaflet (PIL), which recommended a return to all normal activities as soon as they were able, but that strenuous activity/contact sport should be avoided for 1–2 weeks. The PIL specifically stated that following EVLA or UGFS 'most people are able to return to work within 2–3 days of treatment, but some people go back the following day or even the same day', and that following surgery 'people can return to office or sedentary work after 2–3 days; and that most people will be back at work within a week after surgery to one leg and 2 weeks after surgery to both legs; but there is no reason to remain off work as long if it can be managed with reasonable comfort'. 
Participants undergoing UGFS or EVLA were advised to wear compression stocking for 10 days constantly (day and night). Those in the surgery group were advised that bandages would be removed the day after operation, following which they should wear a stocking for 10 days, but that it was reasonable to remove the stocking after 4 or 5 days, providing that they were active.\n\nBODY.METHODS.DATA COLLECTION:\nThe participants were asked to complete the BRAVVO questionnaire along with other study questionnaires (Aberdeen Varicose Vein Questionnaire, EQ‐5DTM (EuroQoL, Rotterdam, The Netherlands) and Short Form 36 (QualityMetric, Lincoln, Rhode Island, USA)) at the 6‐week follow‐up appointment. Participants who failed to attend the 6‐week appointment were sent the questionnaire to complete at home. The BRAVVO questionnaire was developed as an instrument to assess the activity and participation components of the World Health Organization International Classification of Disability and Function model13. Variation in activity and participation is not fully explained by impairment and so these constructs are important additional indicators of health outcome. An interview study involving 17 patients who had recently undergone varicose vein treatment was carried out to identify normal activities and 'milestone' behaviours to incorporate into the questionnaire. In addition to sampling from the three treatment options, diversity sampling was used in an attempt to gain a mix of sex, age and rural–urban location. Seventeen interview transcripts were content‐analysed in four stages to identify appropriate items to include in a questionnaire. Full details of this process have been published previously9. The BRAVVO questionnaire assesses the time taken for patients to return to performing 15 behaviours: eight 'activity' behaviours (tasks or actions an individual is capable of doing in an idealized situation) and seven 'participation' behaviours (what the individual does in an everyday, real‐world situation) that were identified to be important from the patient's perspective9. Fig. 1 shows the question layout. Figure 1Question layoutBJS-10081-FIG-0001-c\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nData from the BRAVVO questionnaire were analysed within an interval‐censored time‐to‐event framework using flexible parametric survival models14. For each behaviour item, each participant's response was converted into the number of days to return to that behaviour. If a participant indicated that return to the behaviour was on the day of the procedure, this was assumed to be interval‐censored between day 0 and day 1. If a participant indicated return to the behaviour was after a number of weeks, not days, this was assumed to be interval‐censored between the previous week and the week indicated. For example, if a participant reported 5 weeks, it was assumed that the return to the behaviour took place between 28 and 35 days. A participant who indicated that they had not returned to a behaviour that they usually performed was right‐censored at 42 days. Participants who indicated that they did not normally perform a specific behaviour were not included in analysis of that behaviour. No missing data were imputed. Data are reported as the number of days for 50 and 90 per cent of participants to return to each behaviour, estimated from the parametric survival models (the 50 per cent value represents the median time to return to this behaviour). 
Extrapolation beyond the 42‐day cut‐off was performed for behaviours where 90 per cent of participants had not returned to the behaviour by 42 days. Treatment effects are presented as hazard ratios with associated 95 per cent c.i. All analyses were carried out in Stata® 1215. Flexible parametric survival models were fitted using the stpm package16.\n\nBODY.RESULTS:\nSeven hundred and ninety‐eight participants were recruited, of whom 13 were ineligible (for example because they had recurrent veins or veins larger than 15 mm in diameter) after randomization and were considered postrandomization exclusions (Fig. \n2). The groups were well balanced in terms of demographic characteristics at baseline, but there was an increased incidence of deep venous reflux in the foam group compared with the surgery group (P = 0·005) (Table \n1). Of the 670 participants who completed the 6‐week questionnaire, 655 completed at least one of the BRAVVO questions. Completion rates were slightly lower for the questions about going out socially (74·8 per cent) and sporting activity (66·0 per cent), which may not have been relevant to all participants. For all behaviours, except wearing clothes that show the leg, going out socially and sporting activities, over 95 per cent of participants had returned to normal behaviour within 6 weeks of intervention. Figure 2CONSORT diagram for the trial. Reasons for postrandomization exclusion included: recurrent varicose veins and veins larger than 15 mm. Reasons for withdrawal from follow‐up included: patient decided not to proceed with treatment (and also declined follow‐up), declined follow‐up after treatment or did not wish to complete questionnaires. UGFS, ultrasound‐guided foam sclerotherapy; EVLA, endovenous laser ablationBJS-10081-FIG-0002-c Table 1 Demographic details at recruitment EVLA UGFS Surgery ( n = 210) ( n = 286) ( n = 289) Age (years) * \n 49·7 (18–80) 49·0 (19–78) 49·2 (22–85) Sex ratio (F : M) 120 : 90 162 : 124 163 : 126 Body mass index (kg/m 2 ) * \n 27·0 (17–42) 27·1 (17–44) 27·7 (17–44) Unilateral disease 153 (72·9) 215 (75·2) 196 (67·8) Employment status Self‐employed 21 (10·2) 37 (13·0) 29 (10·3) Employed 120 (58·3) 169 (59·3) 179 (63·5) Other 65 (31·6) 79 (27·7) 74 (26·2) Unknown 4 1 7 Saphenous vein involvement Great saphenous 182 (86·7) 232 (81·1) 239 (82·7) Small saphenous 14 (6·7) 21 (7·3) 21 (7·3) Great and small saphenous 14 (6·7) 33 (11·5) 29 (10·0) Deep vein reflux 28 of 205 (13·7) 47 of 280 (16·8) 25 of 282 (8·9) CEAP classification C2, varicose veins over 3 mm 113 (54·1) 169 (59·1) 147 (51·2) C3, oedema 28 (13·4) 35 (12·2) 39 (13·6) C4, skin/subcutaneous changes 56 (26·8) 74 (25·9) 90 (31·4) C5/C6, healed/active venous ulcer 12 (5·7) 8 (2·8) 11 (3·8) Unknown 1 0 2 Values in parentheses are percentages unless indicated otherwise; * values are mean (range). EVLA, endovenous laser ablation; UGFS, ultrasound‐guided foam sclerotherapy; CEAP, Clinical Etiologic Anatomic Pathophysiologic. \n\nBODY.RESULTS.ULTRASOUND‐GUIDED FOAM SCLEROTHERAPY :\nParticipants randomized to UGFS recalled being able to carry out 13 of the 15 behaviours significantly more quickly than those randomized to surgery (Table \n2\n). The two behaviours for which there was no evidence of a difference in the time to recover between the trial arms were 'having a bath or shower' and 'wearing clothes that show the legs'. In general, the median time to return to the activity behaviours was 5 days or less for those randomized to UGFS and up to 9 days for those randomized to surgery. 
In both groups, there was greater variation in the median time to return to the participation behaviours than the activity behaviours. Table 2 Behavioural recovery: ultrasound‐guided foam sclerotherapy versus surgery Proportion carrying out behaviour (%) Time until specified proportion of participants could carry out behaviour (days) * \n Hazard ratio † \n UGFS Surgery Activity items Bending the legs without discomfort 50 3·0 4·6 1·38 (1·14, 1·67) 90 14·1 21·3 Lifting heavy objects without discomfort 50 4·8 9·8 1·97 (1·59, 2·44) 90 16·9 34·5 Moving from standing to sitting without discomfort 50 1·9 3·7 1·63 (1·35, 1·97) 90 9·3 17·5 Standing still for a long time (> 15 min ) without discomfort 50 3·9 7·1 1·67 (1·36, 2·05) 90 15·8 28·7 Walking short distances (< 20 min ) without discomfort 50 1·9 4·4 2·00 (1·65, 2·42) 90 8·2 19·1 Walking long distances (> 20 min) 50 4·5 8·0 1·76 (1·45, 2·14) 90 15·2 27·1 Having a bath or shower 50 5·4 4·9 0·85 (0·70, 1·03) 90 11·4 10·3 Driving a car 50 4·1 7·0 1·78 (1·45, 2·19) 90 12·4 21·1 Participation items Doing housework 50 2·1 4·5 2·10 (1·72, 2·56) 90 7·3 15·7 Looking after children 50 1·2 3·5 2·20 (1·61, 3·00) 90 6·2 17·9 Wearing clothes that show the legs 50 12·4 12·8 1·03 (0·78, 1·35) 90 56·6 58·7 Partial return to normal work/employment 50 4·4 9·9 2·16 (1·72, 2·72) 90 15·4 34·2 Full return to normal work/employment 50 4·8 11·7 2·56 (2·05, 3·21) 90 14·9 36·2 Going out socially 50 7·1 9·3 1·29 (1·06, 1·57) 90 25·8 34·0 Sporting activity or exercise 50 15·7 21·8 1·33 (1·05, 1·68) 90 62·6 86·7 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the surgery arm. UGFS, ultrasound‐guided foam sclerotherapy. \n\nBODY.RESULTS.ENDOVENOUS LASER ABLATION :\nParticipants randomized to EVLA recalled being able to carry out 13 of the 15 behaviours significantly more quickly than those randomized to surgery (Table \n3). Return to 'having a bath or shower' was quicker after surgery than after EVLA. There was no difference in time to return to the participation behaviour of 'wearing clothes that show the legs'. 
Table 3 Behavioural recovery: endovenous laser ablation versus surgery Proportion carrying out behaviour (%) Time until specified proportion of participants could carry out behaviour (days) * \n Hazard ratio † \n EVLA Surgery Activity items Bending the legs without discomfort 50 2·7 4·6 1·49 (1·19, 1·75) 90 12·6 21·3 Lifting heavy objects without discomfort 50 5·9 9·8 1·79 (1·39, 2·27) 90 20·5 34·5 Moving from standing to sitting without discomfort 50 2·2 3·7 1·56 (1·27, 1·96) 90 10·4 17·5 Standing still for a long time (> 15 min) without discomfort 50 4·8 7·1 1·41 (1·11, 1·79) 90 20·0 28·7 Walking short distances (< 20 min) without discomfort 50 3·0 4·4 1·30 (1·04, 1·61) 90 13·2 19·1 Walking long distances (> 20 min) 50 5·6 8·0 1·53 (1·06, 1·67) 90 19·8 27·1 Having a bath or shower 50 5·5 4·9 0·74 (0·59, 0·93) 90 12·8 10·3 Driving a car 50 4·4 7·0 1·82 (1·43, 2·33) 90 12·7 21·1 Participation items Doing housework 50 2·5 4·5 1·89 (1·49, 2·38) 90 8·4 15·7 Looking after children 50 1·9 3·5 1·61 (1·15, 2·27) 90 8·8 17·9 Wearing clothes that show the legs 50 14·6 12·8 0·97 (0·69, 1·35) 90 75·1 58·7 Partial return to normal work/employment 50 6·3 9·9 1·75 (1·33, 2·27) 90 21·1 34·2 Full return to normal work/employment 50 7·7 11·7 1·79 (1·37, 2·27) 90 23·5 36·2 Going out socially 50 6·9 9·3 1·41 (1·12, 1·75) 90 23·9 34·0 Sporting activity or exercise 50 14·2 21·8 1·47 (1·12, 1·92) 90 55·5 86·7 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the surgery arm. EVLA, endovenous laser ablation. There were no differences in the time taken to return to 11 of the 15 behaviours between participants randomized to EVLA and those randomized to UGFS (Table \n4). Return to 'walking short distances without discomfort', 'walking long distances', 'looking after children' and 'full return to normal work/employment' took longer for the EVLA group than the UGFS group. Following UGFS or EVLA only one‐third of the specific behaviours could be carried out by 50 per cent of participants by 3 days after treatment. 
Table 4 Behavioural recovery: endovenous laser ablation versus ultrasound‐guided foam sclerotherapy Proportion carrying out behaviour (%) Time until specified proportion of participants can carry out behaviour (days) * \n Hazard ratio † \n EVLA UGFS Activity items Bending the legs without discomfort 50 2·7 3·0 0·94 (0·75, 1·17) 90 12·6 14·1 Lifting heavy objects without discomfort 50 5·9 4·8 1·11 (0·87, 1·42) 90 20·5 16·9 Moving from standing to sitting without discomfort 50 2·2 1·9 1·12 (0·90, 1·40) 90 10·4 9·3 Standing still for a long time (> 15 min) without discomfort 50 4·8 3·9 1·14 (0·90, 1·44) 90 20·0 15·8 Walking short distances (< 20 min) without discomfort 50 3·0 1·9 1·48 (1·19, 1·84) 90 13·2 8·2 Walking long distances (> 20 min) 50 5·6 4·5 1·32 (1·05, 1·66) 90 19·8 15·2 Having a bath or shower 50 5·5 5·4 1·19 (0·96, 1·48) 90 12·8 11·4 Driving a car 50 4·4 4·1 0·95 (0·74, 1·21) 90 12·7 12·4 Participation items Doing housework 50 2·5 2·1 1·03 (0·82, 1·29) 90 8·4 7·3 Looking after children 50 1·9 1·2 1·45 (1·04, 2·02) 90 8·8 6·2 Wearing clothes that show the legs 50 14·6 12·4 1·17 (0·83, 1·64) 90 75·1 56·6 Partial return to normal work/employment 50 6·3 4·4 1·17 (0·89, 1·52) 90 21·1 15·4 Full return to normal work/employment 50 7·7 4·8 1·43 (1·11, 1·85) 90 23·5 14·9 Going out socially 50 6·9 7·1 0·88 (0·70, 1·10) 90 23·9 25·8 Sporting activity or exercise 50 14·2 15·7 0·80 (0·61, 1·04) 90 55·5 62·6 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the endovenous laser ablation (EVLA) arm. UGFS, ultrasound‐guided foam sclerotherapy. \n\nBODY.DISCUSSION:\nThis study showed that both UGFS and EVLA resulted in a more rapid recovery compared with surgery for 13 of the 15 behaviours. UGFS was superior to EVLA in terms of return to full time work, looking after children and walking (both short and long distances). Importantly, the specific behaviours assessed were shown to have a range of different recovery trajectories. Previous randomized clinical trials showed behavioural recovery to be more rapid following UGFS compared with surgery1, 2, but the benefit of EVLA over surgery was less clear2, 4, 8 . In this study, for all but two behaviours (wearing clothes that showed the legs and showering/bathing) the recovery was quicker following UGFS or EVLA compared with surgery. These findings may have arisen as a result of information contained in the study PIL, which recommended that compression hosiery was worn continuously for 10 days following UGFS or EVLA but for 4–5 days routinely after surgery. In the comparison between UGFS and EVLA, behavioural recovery was faster following UGFS for four of the 15 behaviours; there was no difference between the groups for the other behaviours. Two previous randomized trials2, 3 showed earlier return to 'normal activities' in patients undergoing UGFS compared with EVLA. Specifically, the present study showed a quicker return to full‐time work following UGFS, similar to the findings of Rasmussen and colleagues2. The median time taken to return to work following EVLA (7·7 days) was within the ranges reported2, 4, 5, 6, 7, 8. However, Rasmussen and colleagues2 reported earlier return to work after UGFS compared with the present study (median 2·9 versus 4·8 days respectively). 
A partial explanation of the difference between the two studies may be that, unlike the previous study, the present analysis did not correct for weekends. For other behaviours, the recalled recovery times following both UGFS and EVLA were longer than might be expected from the literature2, 3, 4, 5, 8. This may be explained by the timing of the questionnaire at 6 weeks, and thus it is the nature of the differences between treatment groups rather than the absolute timings taken to return to these activities that the authors wish to highlight in this paper. The extent of this overall delay in recovery is hard to justify, particularly in light of the standard information and advice given in the study PIL. There may have been a number of external influences affecting participants' recollection of their recovery, including misinformation and fear. Although attitudes to recovery and return to normal behaviours have changed in secondary care, this may not have filtered into primary care or 'public knowledge'. Fear of activity or fear of pain caused by activity has been documented following surgery for other conditions17, 18. It is possible that some people undergoing treatment for varicose veins experience similar fears, and this may limit or restrict their activity following treatment. With regard to return to work, there are clearly a number of additional factors that might play a role, such as a person's employment status (employed or self‐employed), the sickness benefits they are entitled to, the type of work they are employed to do, how long they are 'signed off' by the doctor, and the views of their employer on return to work after an operation. It should be noted that this study distinguished between partial and full return to work, and that no difference was noted in partial return to work following UGFS and EVLA. This finding may be of substantial importance to patients, their employers and the economy as a whole. The main strength of this study is that the behaviours investigated were based on systematic investigation of the recovery milestones that are important to patients following treatment for varicose veins. Hence, the findings are of personal importance from a patient perspective. Distinguishing between the behaviours that contribute to 'normal activity' helps build a profile of recovery that may be particularly useful for patients preparing for, or recovering from, treatment. Furthermore, the methodology used to develop the BRAVVO questionnaire could be used in other conditions to provide normative information about behavioural recovery that is relevant to patients. The BRAVVO questionnaire was pilot tested and found to be acceptable to patients, comprehensible and appropriate for self‐completion. Despite this, a potential weakness of the study is that the level of missing data in the BRAVVO questionnaire was higher for two of the questions. Further work to reformat or rephrase the questions or response options may help minimize levels of missing data. A further potential weakness is the choice of assessment time point (6 weeks after treatment). This may have compromised recall, particularly for behaviours that participants were able to return to a short time after treatment; however, any compromise in recall is likely to have affected the three treatment groups equally. Other study outcomes were assessed at 6 weeks, and behavioural recovery was assessed at the same time point to minimize participant burden. 
Further work is required to determine the optimal timing(s) of this questionnaire. Given that the median time to return to the behaviour was less than 14 days for 13 of the behaviours, and up to 22 days for the other two (wearing clothes that show the legs, sporting activity or exercise), the use of the questionnaire at approximately 2–3 weeks would seem appropriate.\n\n**Question:** Compared to surgery what was the result of Ultrasound‐guided foam sclerotherapy on Moving from standing to sitting without discomfort?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
832
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Behavioural recovery after treatment for varicose veins\n\n ABSTRACT.BACKGROUND:\nThe aim of this study was to assess behavioural recovery from the patient's perspective as a prespecified secondary outcome in a multicentre parallel‐group randomized clinical trial comparing ultrasound‐guided foam sclerotherapy (UGFS), endovenous laser ablation (EVLA) and surgery for the treatment of primary varicose veins.\n\nABSTRACT.METHODS:\nParticipants were recruited from 11 UK sites as part of the CLASS trial, a randomized trial of UGFS, EVLA or surgery for varicose veins. Patients were followed up 6 weeks after treatment and asked to complete the Behavioural Recovery After treatment for Varicose Veins (BRAVVO) questionnaire. This is a 15‐item instrument that covers eight activity behaviours (tasks or actions an individual is capable of doing in an idealized situation) and seven participation behaviours (what the individual does in an everyday, real‐world situation) that were identified to be important from the patient's perspective.\n\nABSTRACT.RESULTS:\nA total of 798 participants were recruited. Both UGFS and EVLA resulted in a significantly quicker recovery compared with surgery for 13 of the 15 behaviours assessed. UGFS was superior to EVLA in terms of return to full‐time work (hazard ratio 1·43, 95 per cent c.i. 1·11 to 1·85), looking after children (1·45, 1·04 to 2·02) and walks of short (1·48, 1·19 to 1·84) and longer (1·32, 1·05 to 1·66) duration.\n\nABSTRACT.CONCLUSION:\nBoth UGFS and EVLA resulted in more rapid recovery than surgery, and UGFS was superior to EVLA for one‐quarter of the behaviours assessed. The BRAVVO questionnaire has the potential to provide important meaningful information to patients about their early recovery and what they may expect to be able to achieve after treatment.\n\nBODY.INTRODUCTION:\nMinimally invasive treatments for varicose veins such as ultrasound‐guided foam sclerotherapy (UGFS) and thermal ablation techniques have become widely used alternatives to surgery for the treatment of varicose veins. One of the advantages of these techniques is the reported quicker return to normal activities, particularly following UGFS1, 2, 3. However, it is unclear whether thermal ablation, in particular endovenous laser ablation (EVLA), is also associated with a clinically significant quicker return to normal activities compared with surgery; some studies4, 5 have shown an earlier return and others2, 6, 7, 8 no difference. Until recently, there was no standard means of assessing recovery from the patient's perspective. This led to the use of varying definitions such as return to 'normal activities', 'full activity', 'daily activity' or 'basic physical activities' and/or 'return to work' in previous studies. 
This lack of standardization led the authors to develop a 15‐item questionnaire to assess distinct aspects of normal activities that were identified as important by patients9 – the Behavioural Recovery After treatment for Varicose Veins (BRAVVO) questionnaire. This paper reports behavioural recovery results from a multicentre parallel‐group randomized clinical trial (CLASS, Comparison of LAser, Surgery and foam Sclerotherapy) that compared the clinical efficacy and cost‐effectiveness of three treatment modalities: UGFS, EVLA with delayed foam sclerotherapy to residual varicosities if required, and surgery. Behavioural recovery was one of the prespecified secondary outcomes of the CLASS trial. The clinical and cost‐effectiveness results have been reported elsewhere10, 11.\n\nBODY.METHODS:\nPatients were recruited from 11 centres in the UK between November 2008 and October 2012. This study (ISRCTN51995477) had research ethics committee and Medicines and Healthcare products Regulatory Authority approval. Eight centres randomized participants to one of three treatment options, and three centres offered only UGFS and surgery. Participants were randomized between the treatments with even allocation, using a minimization algorithm that included centre, age (less than 50 years, 50 years or more), sex, great saphenous vein (GSV) or small saphenous vein (SSV) reflux, and unilateral or bilateral disease. Inclusion criteria were: age over 18 years; primary unilateral or bilateral symptomatic varicose veins (Clinical Etiologic Anatomic Pathophysiological (CEAP) grade C2 or above); GSV and/or SSV involvement; and reflux exceeding 1 s on duplex ultrasonography. Exclusion criteria were: current deep vein thrombosis; acute superficial vein thrombosis; a GSV or SSV diameter smaller than 3 mm or larger than 15 mm; tortuous veins considered unsuitable for EVLA or stripping; and contraindications to UGFS or to general/regional anaesthesia that would be required for surgery.\n\nBODY.METHODS.TREATMENTS:\nThe treatments have been described in detail elsewhere9, 10. For UGFS, foam was produced using the Tessari technique12 using a ratio of 0·5 ml sodium tetradecyl sulphate to 1·5 ml air (3 per cent for GSV/SSV truncal veins, 1 per cent for varicosities; maximum 12 ml foam per session). EVLA of GSVs/SSVs was performed under local anaesthetic, and patients were offered UGFS to any residual varicosities at 6‐week follow‐up if required, with the exception of one centre that performed concurrent phlebectomies. Surgery in the form of proximal GSV/SSV ligation and stripping (all GSV) and concurrent phlebectomies was performed under general or regional anaesthetic as a day‐case procedure. Compression stockings were applied after all three treatments.\n\nBODY.METHODS.POST‐TREATMENT ACTIVITY:\nAll participants were given a study patient information leaflet (PIL), which recommended a return to all normal activities as soon as they were able, but that strenuous activity/contact sport should be avoided for 1–2 weeks. The PIL specifically stated that following EVLA or UGFS 'most people are able to return to work within 2–3 days of treatment, but some people go back the following day or even the same day', and that following surgery 'people can return to office or sedentary work after 2–3 days; and that most people will be back at work within a week after surgery to one leg and 2 weeks after surgery to both legs; but there is no reason to remain off work as long if it can be managed with reasonable comfort'. 
Participants undergoing UGFS or EVLA were advised to wear compression stocking for 10 days constantly (day and night). Those in the surgery group were advised that bandages would be removed the day after operation, following which they should wear a stocking for 10 days, but that it was reasonable to remove the stocking after 4 or 5 days, providing that they were active.\n\nBODY.METHODS.DATA COLLECTION:\nThe participants were asked to complete the BRAVVO questionnaire along with other study questionnaires (Aberdeen Varicose Vein Questionnaire, EQ‐5DTM (EuroQoL, Rotterdam, The Netherlands) and Short Form 36 (QualityMetric, Lincoln, Rhode Island, USA)) at the 6‐week follow‐up appointment. Participants who failed to attend the 6‐week appointment were sent the questionnaire to complete at home. The BRAVVO questionnaire was developed as an instrument to assess the activity and participation components of the World Health Organization International Classification of Disability and Function model13. Variation in activity and participation is not fully explained by impairment and so these constructs are important additional indicators of health outcome. An interview study involving 17 patients who had recently undergone varicose vein treatment was carried out to identify normal activities and 'milestone' behaviours to incorporate into the questionnaire. In addition to sampling from the three treatment options, diversity sampling was used in an attempt to gain a mix of sex, age and rural–urban location. Seventeen interview transcripts were content‐analysed in four stages to identify appropriate items to include in a questionnaire. Full details of this process have been published previously9. The BRAVVO questionnaire assesses the time taken for patients to return to performing 15 behaviours: eight 'activity' behaviours (tasks or actions an individual is capable of doing in an idealized situation) and seven 'participation' behaviours (what the individual does in an everyday, real‐world situation) that were identified to be important from the patient's perspective9. Fig. 1 shows the question layout. Figure 1Question layoutBJS-10081-FIG-0001-c\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nData from the BRAVVO questionnaire were analysed within an interval‐censored time‐to‐event framework using flexible parametric survival models14. For each behaviour item, each participant's response was converted into the number of days to return to that behaviour. If a participant indicated that return to the behaviour was on the day of the procedure, this was assumed to be interval‐censored between day 0 and day 1. If a participant indicated return to the behaviour was after a number of weeks, not days, this was assumed to be interval‐censored between the previous week and the week indicated. For example, if a participant reported 5 weeks, it was assumed that the return to the behaviour took place between 28 and 35 days. A participant who indicated that they had not returned to a behaviour that they usually performed was right‐censored at 42 days. Participants who indicated that they did not normally perform a specific behaviour were not included in analysis of that behaviour. No missing data were imputed. Data are reported as the number of days for 50 and 90 per cent of participants to return to each behaviour, estimated from the parametric survival models (the 50 per cent value represents the median time to return to this behaviour). 
Extrapolation beyond the 42‐day cut‐off was performed for behaviours where 90 per cent of participants had not returned to the behaviour by 42 days. Treatment effects are presented as hazard ratios with associated 95 per cent c.i. All analyses were carried out in Stata® 1215. Flexible parametric survival models were fitted using the stpm package16.\n\nBODY.RESULTS:\nSeven hundred and ninety‐eight participants were recruited, of whom 13 were ineligible (for example because they had recurrent veins or veins larger than 15 mm in diameter) after randomization and were considered postrandomization exclusions (Fig. \n2). The groups were well balanced in terms of demographic characteristics at baseline, but there was an increased incidence of deep venous reflux in the foam group compared with the surgery group (P = 0·005) (Table \n1). Of the 670 participants who completed the 6‐week questionnaire, 655 completed at least one of the BRAVVO questions. Completion rates were slightly lower for the questions about going out socially (74·8 per cent) and sporting activity (66·0 per cent), which may not have been relevant to all participants. For all behaviours, except wearing clothes that show the leg, going out socially and sporting activities, over 95 per cent of participants had returned to normal behaviour within 6 weeks of intervention. Figure 2CONSORT diagram for the trial. Reasons for postrandomization exclusion included: recurrent varicose veins and veins larger than 15 mm. Reasons for withdrawal from follow‐up included: patient decided not to proceed with treatment (and also declined follow‐up), declined follow‐up after treatment or did not wish to complete questionnaires. UGFS, ultrasound‐guided foam sclerotherapy; EVLA, endovenous laser ablationBJS-10081-FIG-0002-c Table 1 Demographic details at recruitment EVLA UGFS Surgery ( n = 210) ( n = 286) ( n = 289) Age (years) * \n 49·7 (18–80) 49·0 (19–78) 49·2 (22–85) Sex ratio (F : M) 120 : 90 162 : 124 163 : 126 Body mass index (kg/m 2 ) * \n 27·0 (17–42) 27·1 (17–44) 27·7 (17–44) Unilateral disease 153 (72·9) 215 (75·2) 196 (67·8) Employment status Self‐employed 21 (10·2) 37 (13·0) 29 (10·3) Employed 120 (58·3) 169 (59·3) 179 (63·5) Other 65 (31·6) 79 (27·7) 74 (26·2) Unknown 4 1 7 Saphenous vein involvement Great saphenous 182 (86·7) 232 (81·1) 239 (82·7) Small saphenous 14 (6·7) 21 (7·3) 21 (7·3) Great and small saphenous 14 (6·7) 33 (11·5) 29 (10·0) Deep vein reflux 28 of 205 (13·7) 47 of 280 (16·8) 25 of 282 (8·9) CEAP classification C2, varicose veins over 3 mm 113 (54·1) 169 (59·1) 147 (51·2) C3, oedema 28 (13·4) 35 (12·2) 39 (13·6) C4, skin/subcutaneous changes 56 (26·8) 74 (25·9) 90 (31·4) C5/C6, healed/active venous ulcer 12 (5·7) 8 (2·8) 11 (3·8) Unknown 1 0 2 Values in parentheses are percentages unless indicated otherwise; * values are mean (range). EVLA, endovenous laser ablation; UGFS, ultrasound‐guided foam sclerotherapy; CEAP, Clinical Etiologic Anatomic Pathophysiologic. \n\nBODY.RESULTS.ULTRASOUND‐GUIDED FOAM SCLEROTHERAPY :\nParticipants randomized to UGFS recalled being able to carry out 13 of the 15 behaviours significantly more quickly than those randomized to surgery (Table \n2\n). The two behaviours for which there was no evidence of a difference in the time to recover between the trial arms were 'having a bath or shower' and 'wearing clothes that show the legs'. In general, the median time to return to the activity behaviours was 5 days or less for those randomized to UGFS and up to 9 days for those randomized to surgery. 
In both groups, there was greater variation in the median time to return to the participation behaviours than the activity behaviours. Table 2 Behavioural recovery: ultrasound‐guided foam sclerotherapy versus surgery Proportion carrying out behaviour (%) Time until specified proportion of participants could carry out behaviour (days) * \n Hazard ratio † \n UGFS Surgery Activity items Bending the legs without discomfort 50 3·0 4·6 1·38 (1·14, 1·67) 90 14·1 21·3 Lifting heavy objects without discomfort 50 4·8 9·8 1·97 (1·59, 2·44) 90 16·9 34·5 Moving from standing to sitting without discomfort 50 1·9 3·7 1·63 (1·35, 1·97) 90 9·3 17·5 Standing still for a long time (> 15 min ) without discomfort 50 3·9 7·1 1·67 (1·36, 2·05) 90 15·8 28·7 Walking short distances (< 20 min ) without discomfort 50 1·9 4·4 2·00 (1·65, 2·42) 90 8·2 19·1 Walking long distances (> 20 min) 50 4·5 8·0 1·76 (1·45, 2·14) 90 15·2 27·1 Having a bath or shower 50 5·4 4·9 0·85 (0·70, 1·03) 90 11·4 10·3 Driving a car 50 4·1 7·0 1·78 (1·45, 2·19) 90 12·4 21·1 Participation items Doing housework 50 2·1 4·5 2·10 (1·72, 2·56) 90 7·3 15·7 Looking after children 50 1·2 3·5 2·20 (1·61, 3·00) 90 6·2 17·9 Wearing clothes that show the legs 50 12·4 12·8 1·03 (0·78, 1·35) 90 56·6 58·7 Partial return to normal work/employment 50 4·4 9·9 2·16 (1·72, 2·72) 90 15·4 34·2 Full return to normal work/employment 50 4·8 11·7 2·56 (2·05, 3·21) 90 14·9 36·2 Going out socially 50 7·1 9·3 1·29 (1·06, 1·57) 90 25·8 34·0 Sporting activity or exercise 50 15·7 21·8 1·33 (1·05, 1·68) 90 62·6 86·7 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the surgery arm. UGFS, ultrasound‐guided foam sclerotherapy. \n\nBODY.RESULTS.ENDOVENOUS LASER ABLATION :\nParticipants randomized to EVLA recalled being able to carry out 13 of the 15 behaviours significantly more quickly than those randomized to surgery (Table \n3). Return to 'having a bath or shower' was quicker after surgery than after EVLA. There was no difference in time to return to the participation behaviour of 'wearing clothes that show the legs'. 
Table 3 Behavioural recovery: endovenous laser ablation versus surgery Proportion carrying out behaviour (%) Time until specified proportion of participants could carry out behaviour (days) * \n Hazard ratio † \n EVLA Surgery Activity items Bending the legs without discomfort 50 2·7 4·6 1·49 (1·19, 1·75) 90 12·6 21·3 Lifting heavy objects without discomfort 50 5·9 9·8 1·79 (1·39, 2·27) 90 20·5 34·5 Moving from standing to sitting without discomfort 50 2·2 3·7 1·56 (1·27, 1·96) 90 10·4 17·5 Standing still for a long time (> 15 min) without discomfort 50 4·8 7·1 1·41 (1·11, 1·79) 90 20·0 28·7 Walking short distances (< 20 min) without discomfort 50 3·0 4·4 1·30 (1·04, 1·61) 90 13·2 19·1 Walking long distances (> 20 min) 50 5·6 8·0 1·53 (1·06, 1·67) 90 19·8 27·1 Having a bath or shower 50 5·5 4·9 0·74 (0·59, 0·93) 90 12·8 10·3 Driving a car 50 4·4 7·0 1·82 (1·43, 2·33) 90 12·7 21·1 Participation items Doing housework 50 2·5 4·5 1·89 (1·49, 2·38) 90 8·4 15·7 Looking after children 50 1·9 3·5 1·61 (1·15, 2·27) 90 8·8 17·9 Wearing clothes that show the legs 50 14·6 12·8 0·97 (0·69, 1·35) 90 75·1 58·7 Partial return to normal work/employment 50 6·3 9·9 1·75 (1·33, 2·27) 90 21·1 34·2 Full return to normal work/employment 50 7·7 11·7 1·79 (1·37, 2·27) 90 23·5 36·2 Going out socially 50 6·9 9·3 1·41 (1·12, 1·75) 90 23·9 34·0 Sporting activity or exercise 50 14·2 21·8 1·47 (1·12, 1·92) 90 55·5 86·7 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the surgery arm. EVLA, endovenous laser ablation. There were no differences in the time taken to return to 11 of the 15 behaviours between participants randomized to EVLA and those randomized to UGFS (Table \n4). Return to 'walking short distances without discomfort', 'walking long distances', 'looking after children' and 'full return to normal work/employment' took longer for the EVLA group than the UGFS group. Following UGFS or EVLA only one‐third of the specific behaviours could be carried out by 50 per cent of participants by 3 days after treatment. 
Table 4 Behavioural recovery: endovenous laser ablation versus ultrasound‐guided foam sclerotherapy Proportion carrying out behaviour (%) Time until specified proportion of participants can carry out behaviour (days) * \n Hazard ratio † \n EVLA UGFS Activity items Bending the legs without discomfort 50 2·7 3·0 0·94 (0·75, 1·17) 90 12·6 14·1 Lifting heavy objects without discomfort 50 5·9 4·8 1·11 (0·87, 1·42) 90 20·5 16·9 Moving from standing to sitting without discomfort 50 2·2 1·9 1·12 (0·90, 1·40) 90 10·4 9·3 Standing still for a long time (> 15 min) without discomfort 50 4·8 3·9 1·14 (0·90, 1·44) 90 20·0 15·8 Walking short distances (< 20 min) without discomfort 50 3·0 1·9 1·48 (1·19, 1·84) 90 13·2 8·2 Walking long distances (> 20 min) 50 5·6 4·5 1·32 (1·05, 1·66) 90 19·8 15·2 Having a bath or shower 50 5·5 5·4 1·19 (0·96, 1·48) 90 12·8 11·4 Driving a car 50 4·4 4·1 0·95 (0·74, 1·21) 90 12·7 12·4 Participation items Doing housework 50 2·5 2·1 1·03 (0·82, 1·29) 90 8·4 7·3 Looking after children 50 1·9 1·2 1·45 (1·04, 2·02) 90 8·8 6·2 Wearing clothes that show the legs 50 14·6 12·4 1·17 (0·83, 1·64) 90 75·1 56·6 Partial return to normal work/employment 50 6·3 4·4 1·17 (0·89, 1·52) 90 21·1 15·4 Full return to normal work/employment 50 7·7 4·8 1·43 (1·11, 1·85) 90 23·5 14·9 Going out socially 50 6·9 7·1 0·88 (0·70, 1·10) 90 23·9 25·8 Sporting activity or exercise 50 14·2 15·7 0·80 (0·61, 1·04) 90 55·5 62·6 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the endovenous laser ablation (EVLA) arm. UGFS, ultrasound‐guided foam sclerotherapy. \n\nBODY.DISCUSSION:\nThis study showed that both UGFS and EVLA resulted in a more rapid recovery compared with surgery for 13 of the 15 behaviours. UGFS was superior to EVLA in terms of return to full time work, looking after children and walking (both short and long distances). Importantly, the specific behaviours assessed were shown to have a range of different recovery trajectories. Previous randomized clinical trials showed behavioural recovery to be more rapid following UGFS compared with surgery1, 2, but the benefit of EVLA over surgery was less clear2, 4, 8 . In this study, for all but two behaviours (wearing clothes that showed the legs and showering/bathing) the recovery was quicker following UGFS or EVLA compared with surgery. These findings may have arisen as a result of information contained in the study PIL, which recommended that compression hosiery was worn continuously for 10 days following UGFS or EVLA but for 4–5 days routinely after surgery. In the comparison between UGFS and EVLA, behavioural recovery was faster following UGFS for four of the 15 behaviours; there was no difference between the groups for the other behaviours. Two previous randomized trials2, 3 showed earlier return to 'normal activities' in patients undergoing UGFS compared with EVLA. Specifically, the present study showed a quicker return to full‐time work following UGFS, similar to the findings of Rasmussen and colleagues2. The median time taken to return to work following EVLA (7·7 days) was within the ranges reported2, 4, 5, 6, 7, 8. However, Rasmussen and colleagues2 reported earlier return to work after UGFS compared with the present study (median 2·9 versus 4·8 days respectively). 
A partial explanation of the difference between the two studies may be that, unlike the previous study, the present analysis did not correct for weekends. For other behaviours, the recalled recovery times following both UGFS and EVLA were longer than might be expected from the literature2, 3, 4, 5, 8. This may be explained by the timing of the questionnaire at 6 weeks, and thus it is the nature of the differences between treatment groups rather than the absolute timings taken to return to these activities that the authors wish to highlight in this paper. The extent of this overall delay in recovery is hard to justify, particularly in light of the standard information and advice given in the study PIL. There may have been a number of external influences affecting participants' recollection of their recovery, including misinformation and fear. Although attitudes to recovery and return to normal behaviours have changed in secondary care, this may not have filtered into primary care or 'public knowledge'. Fear of activity or fear of pain caused by activity has been documented following surgery for other conditions17, 18. It is possible that some people undergoing treatment for varicose veins experience similar fears, and this may limit or restrict their activity following treatment. With regard to return to work, there are clearly a number of additional factors that might play a role, such as a person's employment status (employed or self‐employed), the sickness benefits they are entitled to, the type of work they are employed to do, how long they are 'signed off' by the doctor, and the views of their employer on return to work after an operation. It should be noted that this study distinguished between partial and full return to work, and that no difference was noted in partial return to work following UGFS and EVLA. This finding may be of substantial importance to patients, their employers and the economy as a whole. The main strength of this study is that the behaviours investigated were based on systematic investigation of the recovery milestones that are important to patients following treatment for varicose veins. Hence, the findings are of personal importance from a patient perspective. Distinguishing between the behaviours that contribute to 'normal activity' helps build a profile of recovery that may be particularly useful for patients preparing for, or recovering from, treatment. Furthermore, the methodology used to develop the BRAVVO questionnaire could be used in other conditions to provide normative information about behavioural recovery that is relevant to patients. The BRAVVO questionnaire was pilot tested and found to be acceptable to patients, comprehensible and appropriate for self‐completion. Despite this, a potential weakness of the study is that the level of missing data in the BRAVVO questionnaire was higher for two of the questions. Further work to reformat or rephrase the questions or response options may help minimize levels of missing data. A further potential weakness is the choice of assessment time point (6 weeks after treatment). This may have compromised recall, particularly for behaviours that participants were able to return to a short time after treatment; however, any compromise in recall is likely to have affected the three treatment groups equally. Other study outcomes were assessed at 6 weeks, and behavioural recovery was assessed at the same time point to minimize participant burden. 
Further work is required to determine the optimal timing(s) of this questionnaire. Given that the median time to return to the behaviour was less than 14 days for 13 of the behaviours, and up to 22 days for the other two (wearing clothes that show the legs, sporting activity or exercise), the use of the questionnaire at approximately 2–3 weeks would seem appropriate.\n\n**Question:** Compared to surgery what was the result of Ultrasound‐guided foam sclerotherapy on Doing housework?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
834
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Behavioural recovery after treatment for varicose veins\n\n ABSTRACT.BACKGROUND:\nThe aim of this study was to assess behavioural recovery from the patient's perspective as a prespecified secondary outcome in a multicentre parallel‐group randomized clinical trial comparing ultrasound‐guided foam sclerotherapy (UGFS), endovenous laser ablation (EVLA) and surgery for the treatment of primary varicose veins.\n\nABSTRACT.METHODS:\nParticipants were recruited from 11 UK sites as part of the CLASS trial, a randomized trial of UGFS, EVLA or surgery for varicose veins. Patients were followed up 6 weeks after treatment and asked to complete the Behavioural Recovery After treatment for Varicose Veins (BRAVVO) questionnaire. This is a 15‐item instrument that covers eight activity behaviours (tasks or actions an individual is capable of doing in an idealized situation) and seven participation behaviours (what the individual does in an everyday, real‐world situation) that were identified to be important from the patient's perspective.\n\nABSTRACT.RESULTS:\nA total of 798 participants were recruited. Both UGFS and EVLA resulted in a significantly quicker recovery compared with surgery for 13 of the 15 behaviours assessed. UGFS was superior to EVLA in terms of return to full‐time work (hazard ratio 1·43, 95 per cent c.i. 1·11 to 1·85), looking after children (1·45, 1·04 to 2·02) and walks of short (1·48, 1·19 to 1·84) and longer (1·32, 1·05 to 1·66) duration.\n\nABSTRACT.CONCLUSION:\nBoth UGFS and EVLA resulted in more rapid recovery than surgery, and UGFS was superior to EVLA for one‐quarter of the behaviours assessed. The BRAVVO questionnaire has the potential to provide important meaningful information to patients about their early recovery and what they may expect to be able to achieve after treatment.\n\nBODY.INTRODUCTION:\nMinimally invasive treatments for varicose veins such as ultrasound‐guided foam sclerotherapy (UGFS) and thermal ablation techniques have become widely used alternatives to surgery for the treatment of varicose veins. One of the advantages of these techniques is the reported quicker return to normal activities, particularly following UGFS1, 2, 3. However, it is unclear whether thermal ablation, in particular endovenous laser ablation (EVLA), is also associated with a clinically significant quicker return to normal activities compared with surgery; some studies4, 5 have shown an earlier return and others2, 6, 7, 8 no difference. Until recently, there was no standard means of assessing recovery from the patient's perspective. This led to the use of varying definitions such as return to 'normal activities', 'full activity', 'daily activity' or 'basic physical activities' and/or 'return to work' in previous studies. 
This lack of standardization led the authors to develop a 15‐item questionnaire to assess distinct aspects of normal activities that were identified as important by patients9 – the Behavioural Recovery After treatment for Varicose Veins (BRAVVO) questionnaire. This paper reports behavioural recovery results from a multicentre parallel‐group randomized clinical trial (CLASS, Comparison of LAser, Surgery and foam Sclerotherapy) that compared the clinical efficacy and cost‐effectiveness of three treatment modalities: UGFS, EVLA with delayed foam sclerotherapy to residual varicosities if required, and surgery. Behavioural recovery was one of the prespecified secondary outcomes of the CLASS trial. The clinical and cost‐effectiveness results have been reported elsewhere10, 11.\n\nBODY.METHODS:\nPatients were recruited from 11 centres in the UK between November 2008 and October 2012. This study (ISRCTN51995477) had research ethics committee and Medicines and Healthcare products Regulatory Authority approval. Eight centres randomized participants to one of three treatment options, and three centres offered only UGFS and surgery. Participants were randomized between the treatments with even allocation, using a minimization algorithm that included centre, age (less than 50 years, 50 years or more), sex, great saphenous vein (GSV) or small saphenous vein (SSV) reflux, and unilateral or bilateral disease. Inclusion criteria were: age over 18 years; primary unilateral or bilateral symptomatic varicose veins (Clinical Etiologic Anatomic Pathophysiological (CEAP) grade C2 or above); GSV and/or SSV involvement; and reflux exceeding 1 s on duplex ultrasonography. Exclusion criteria were: current deep vein thrombosis; acute superficial vein thrombosis; a GSV or SSV diameter smaller than 3 mm or larger than 15 mm; tortuous veins considered unsuitable for EVLA or stripping; and contraindications to UGFS or to general/regional anaesthesia that would be required for surgery.\n\nBODY.METHODS.TREATMENTS:\nThe treatments have been described in detail elsewhere9, 10. For UGFS, foam was produced using the Tessari technique12 using a ratio of 0·5 ml sodium tetradecyl sulphate to 1·5 ml air (3 per cent for GSV/SSV truncal veins, 1 per cent for varicosities; maximum 12 ml foam per session). EVLA of GSVs/SSVs was performed under local anaesthetic, and patients were offered UGFS to any residual varicosities at 6‐week follow‐up if required, with the exception of one centre that performed concurrent phlebectomies. Surgery in the form of proximal GSV/SSV ligation and stripping (all GSV) and concurrent phlebectomies was performed under general or regional anaesthetic as a day‐case procedure. Compression stockings were applied after all three treatments.\n\nBODY.METHODS.POST‐TREATMENT ACTIVITY:\nAll participants were given a study patient information leaflet (PIL), which recommended a return to all normal activities as soon as they were able, but that strenuous activity/contact sport should be avoided for 1–2 weeks. The PIL specifically stated that following EVLA or UGFS 'most people are able to return to work within 2–3 days of treatment, but some people go back the following day or even the same day', and that following surgery 'people can return to office or sedentary work after 2–3 days; and that most people will be back at work within a week after surgery to one leg and 2 weeks after surgery to both legs; but there is no reason to remain off work as long if it can be managed with reasonable comfort'. 
Participants undergoing UGFS or EVLA were advised to wear compression stocking for 10 days constantly (day and night). Those in the surgery group were advised that bandages would be removed the day after operation, following which they should wear a stocking for 10 days, but that it was reasonable to remove the stocking after 4 or 5 days, providing that they were active.\n\nBODY.METHODS.DATA COLLECTION:\nThe participants were asked to complete the BRAVVO questionnaire along with other study questionnaires (Aberdeen Varicose Vein Questionnaire, EQ‐5DTM (EuroQoL, Rotterdam, The Netherlands) and Short Form 36 (QualityMetric, Lincoln, Rhode Island, USA)) at the 6‐week follow‐up appointment. Participants who failed to attend the 6‐week appointment were sent the questionnaire to complete at home. The BRAVVO questionnaire was developed as an instrument to assess the activity and participation components of the World Health Organization International Classification of Disability and Function model13. Variation in activity and participation is not fully explained by impairment and so these constructs are important additional indicators of health outcome. An interview study involving 17 patients who had recently undergone varicose vein treatment was carried out to identify normal activities and 'milestone' behaviours to incorporate into the questionnaire. In addition to sampling from the three treatment options, diversity sampling was used in an attempt to gain a mix of sex, age and rural–urban location. Seventeen interview transcripts were content‐analysed in four stages to identify appropriate items to include in a questionnaire. Full details of this process have been published previously9. The BRAVVO questionnaire assesses the time taken for patients to return to performing 15 behaviours: eight 'activity' behaviours (tasks or actions an individual is capable of doing in an idealized situation) and seven 'participation' behaviours (what the individual does in an everyday, real‐world situation) that were identified to be important from the patient's perspective9. Fig. 1 shows the question layout. Figure 1Question layoutBJS-10081-FIG-0001-c\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nData from the BRAVVO questionnaire were analysed within an interval‐censored time‐to‐event framework using flexible parametric survival models14. For each behaviour item, each participant's response was converted into the number of days to return to that behaviour. If a participant indicated that return to the behaviour was on the day of the procedure, this was assumed to be interval‐censored between day 0 and day 1. If a participant indicated return to the behaviour was after a number of weeks, not days, this was assumed to be interval‐censored between the previous week and the week indicated. For example, if a participant reported 5 weeks, it was assumed that the return to the behaviour took place between 28 and 35 days. A participant who indicated that they had not returned to a behaviour that they usually performed was right‐censored at 42 days. Participants who indicated that they did not normally perform a specific behaviour were not included in analysis of that behaviour. No missing data were imputed. Data are reported as the number of days for 50 and 90 per cent of participants to return to each behaviour, estimated from the parametric survival models (the 50 per cent value represents the median time to return to this behaviour). 
Extrapolation beyond the 42‐day cut‐off was performed for behaviours where 90 per cent of participants had not returned to the behaviour by 42 days. Treatment effects are presented as hazard ratios with associated 95 per cent c.i. All analyses were carried out in Stata® 1215. Flexible parametric survival models were fitted using the stpm package16.\n\nBODY.RESULTS:\nSeven hundred and ninety‐eight participants were recruited, of whom 13 were ineligible (for example because they had recurrent veins or veins larger than 15 mm in diameter) after randomization and were considered postrandomization exclusions (Fig. \n2). The groups were well balanced in terms of demographic characteristics at baseline, but there was an increased incidence of deep venous reflux in the foam group compared with the surgery group (P = 0·005) (Table \n1). Of the 670 participants who completed the 6‐week questionnaire, 655 completed at least one of the BRAVVO questions. Completion rates were slightly lower for the questions about going out socially (74·8 per cent) and sporting activity (66·0 per cent), which may not have been relevant to all participants. For all behaviours, except wearing clothes that show the leg, going out socially and sporting activities, over 95 per cent of participants had returned to normal behaviour within 6 weeks of intervention. Figure 2CONSORT diagram for the trial. Reasons for postrandomization exclusion included: recurrent varicose veins and veins larger than 15 mm. Reasons for withdrawal from follow‐up included: patient decided not to proceed with treatment (and also declined follow‐up), declined follow‐up after treatment or did not wish to complete questionnaires. UGFS, ultrasound‐guided foam sclerotherapy; EVLA, endovenous laser ablationBJS-10081-FIG-0002-c Table 1 Demographic details at recruitment EVLA UGFS Surgery ( n = 210) ( n = 286) ( n = 289) Age (years) * \n 49·7 (18–80) 49·0 (19–78) 49·2 (22–85) Sex ratio (F : M) 120 : 90 162 : 124 163 : 126 Body mass index (kg/m 2 ) * \n 27·0 (17–42) 27·1 (17–44) 27·7 (17–44) Unilateral disease 153 (72·9) 215 (75·2) 196 (67·8) Employment status Self‐employed 21 (10·2) 37 (13·0) 29 (10·3) Employed 120 (58·3) 169 (59·3) 179 (63·5) Other 65 (31·6) 79 (27·7) 74 (26·2) Unknown 4 1 7 Saphenous vein involvement Great saphenous 182 (86·7) 232 (81·1) 239 (82·7) Small saphenous 14 (6·7) 21 (7·3) 21 (7·3) Great and small saphenous 14 (6·7) 33 (11·5) 29 (10·0) Deep vein reflux 28 of 205 (13·7) 47 of 280 (16·8) 25 of 282 (8·9) CEAP classification C2, varicose veins over 3 mm 113 (54·1) 169 (59·1) 147 (51·2) C3, oedema 28 (13·4) 35 (12·2) 39 (13·6) C4, skin/subcutaneous changes 56 (26·8) 74 (25·9) 90 (31·4) C5/C6, healed/active venous ulcer 12 (5·7) 8 (2·8) 11 (3·8) Unknown 1 0 2 Values in parentheses are percentages unless indicated otherwise; * values are mean (range). EVLA, endovenous laser ablation; UGFS, ultrasound‐guided foam sclerotherapy; CEAP, Clinical Etiologic Anatomic Pathophysiologic. \n\nBODY.RESULTS.ULTRASOUND‐GUIDED FOAM SCLEROTHERAPY :\nParticipants randomized to UGFS recalled being able to carry out 13 of the 15 behaviours significantly more quickly than those randomized to surgery (Table \n2\n). The two behaviours for which there was no evidence of a difference in the time to recover between the trial arms were 'having a bath or shower' and 'wearing clothes that show the legs'. In general, the median time to return to the activity behaviours was 5 days or less for those randomized to UGFS and up to 9 days for those randomized to surgery. 
In both groups, there was greater variation in the median time to return to the participation behaviours than the activity behaviours. Table 2 Behavioural recovery: ultrasound‐guided foam sclerotherapy versus surgery Proportion carrying out behaviour (%) Time until specified proportion of participants could carry out behaviour (days) * \n Hazard ratio † \n UGFS Surgery Activity items Bending the legs without discomfort 50 3·0 4·6 1·38 (1·14, 1·67) 90 14·1 21·3 Lifting heavy objects without discomfort 50 4·8 9·8 1·97 (1·59, 2·44) 90 16·9 34·5 Moving from standing to sitting without discomfort 50 1·9 3·7 1·63 (1·35, 1·97) 90 9·3 17·5 Standing still for a long time (> 15 min ) without discomfort 50 3·9 7·1 1·67 (1·36, 2·05) 90 15·8 28·7 Walking short distances (< 20 min ) without discomfort 50 1·9 4·4 2·00 (1·65, 2·42) 90 8·2 19·1 Walking long distances (> 20 min) 50 4·5 8·0 1·76 (1·45, 2·14) 90 15·2 27·1 Having a bath or shower 50 5·4 4·9 0·85 (0·70, 1·03) 90 11·4 10·3 Driving a car 50 4·1 7·0 1·78 (1·45, 2·19) 90 12·4 21·1 Participation items Doing housework 50 2·1 4·5 2·10 (1·72, 2·56) 90 7·3 15·7 Looking after children 50 1·2 3·5 2·20 (1·61, 3·00) 90 6·2 17·9 Wearing clothes that show the legs 50 12·4 12·8 1·03 (0·78, 1·35) 90 56·6 58·7 Partial return to normal work/employment 50 4·4 9·9 2·16 (1·72, 2·72) 90 15·4 34·2 Full return to normal work/employment 50 4·8 11·7 2·56 (2·05, 3·21) 90 14·9 36·2 Going out socially 50 7·1 9·3 1·29 (1·06, 1·57) 90 25·8 34·0 Sporting activity or exercise 50 15·7 21·8 1·33 (1·05, 1·68) 90 62·6 86·7 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the surgery arm. UGFS, ultrasound‐guided foam sclerotherapy. \n\nBODY.RESULTS.ENDOVENOUS LASER ABLATION :\nParticipants randomized to EVLA recalled being able to carry out 13 of the 15 behaviours significantly more quickly than those randomized to surgery (Table \n3). Return to 'having a bath or shower' was quicker after surgery than after EVLA. There was no difference in time to return to the participation behaviour of 'wearing clothes that show the legs'. 
Table 3 Behavioural recovery: endovenous laser ablation versus surgery Proportion carrying out behaviour (%) Time until specified proportion of participants could carry out behaviour (days) * \n Hazard ratio † \n EVLA Surgery Activity items Bending the legs without discomfort 50 2·7 4·6 1·49 (1·19, 1·75) 90 12·6 21·3 Lifting heavy objects without discomfort 50 5·9 9·8 1·79 (1·39, 2·27) 90 20·5 34·5 Moving from standing to sitting without discomfort 50 2·2 3·7 1·56 (1·27, 1·96) 90 10·4 17·5 Standing still for a long time (> 15 min) without discomfort 50 4·8 7·1 1·41 (1·11, 1·79) 90 20·0 28·7 Walking short distances (< 20 min) without discomfort 50 3·0 4·4 1·30 (1·04, 1·61) 90 13·2 19·1 Walking long distances (> 20 min) 50 5·6 8·0 1·53 (1·06, 1·67) 90 19·8 27·1 Having a bath or shower 50 5·5 4·9 0·74 (0·59, 0·93) 90 12·8 10·3 Driving a car 50 4·4 7·0 1·82 (1·43, 2·33) 90 12·7 21·1 Participation items Doing housework 50 2·5 4·5 1·89 (1·49, 2·38) 90 8·4 15·7 Looking after children 50 1·9 3·5 1·61 (1·15, 2·27) 90 8·8 17·9 Wearing clothes that show the legs 50 14·6 12·8 0·97 (0·69, 1·35) 90 75·1 58·7 Partial return to normal work/employment 50 6·3 9·9 1·75 (1·33, 2·27) 90 21·1 34·2 Full return to normal work/employment 50 7·7 11·7 1·79 (1·37, 2·27) 90 23·5 36·2 Going out socially 50 6·9 9·3 1·41 (1·12, 1·75) 90 23·9 34·0 Sporting activity or exercise 50 14·2 21·8 1·47 (1·12, 1·92) 90 55·5 86·7 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the surgery arm. EVLA, endovenous laser ablation. There were no differences in the time taken to return to 11 of the 15 behaviours between participants randomized to EVLA and those randomized to UGFS (Table \n4). Return to 'walking short distances without discomfort', 'walking long distances', 'looking after children' and 'full return to normal work/employment' took longer for the EVLA group than the UGFS group. Following UGFS or EVLA only one‐third of the specific behaviours could be carried out by 50 per cent of participants by 3 days after treatment. 
Table 4 Behavioural recovery: endovenous laser ablation versus ultrasound‐guided foam sclerotherapy Proportion carrying out behaviour (%) Time until specified proportion of participants can carry out behaviour (days) * \n Hazard ratio † \n EVLA UGFS Activity items Bending the legs without discomfort 50 2·7 3·0 0·94 (0·75, 1·17) 90 12·6 14·1 Lifting heavy objects without discomfort 50 5·9 4·8 1·11 (0·87, 1·42) 90 20·5 16·9 Moving from standing to sitting without discomfort 50 2·2 1·9 1·12 (0·90, 1·40) 90 10·4 9·3 Standing still for a long time (> 15 min) without discomfort 50 4·8 3·9 1·14 (0·90, 1·44) 90 20·0 15·8 Walking short distances (< 20 min) without discomfort 50 3·0 1·9 1·48 (1·19, 1·84) 90 13·2 8·2 Walking long distances (> 20 min) 50 5·6 4·5 1·32 (1·05, 1·66) 90 19·8 15·2 Having a bath or shower 50 5·5 5·4 1·19 (0·96, 1·48) 90 12·8 11·4 Driving a car 50 4·4 4·1 0·95 (0·74, 1·21) 90 12·7 12·4 Participation items Doing housework 50 2·5 2·1 1·03 (0·82, 1·29) 90 8·4 7·3 Looking after children 50 1·9 1·2 1·45 (1·04, 2·02) 90 8·8 6·2 Wearing clothes that show the legs 50 14·6 12·4 1·17 (0·83, 1·64) 90 75·1 56·6 Partial return to normal work/employment 50 6·3 4·4 1·17 (0·89, 1·52) 90 21·1 15·4 Full return to normal work/employment 50 7·7 4·8 1·43 (1·11, 1·85) 90 23·5 14·9 Going out socially 50 6·9 7·1 0·88 (0·70, 1·10) 90 23·9 25·8 Sporting activity or exercise 50 14·2 15·7 0·80 (0·61, 1·04) 90 55·5 62·6 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the endovenous laser ablation (EVLA) arm. UGFS, ultrasound‐guided foam sclerotherapy. \n\nBODY.DISCUSSION:\nThis study showed that both UGFS and EVLA resulted in a more rapid recovery compared with surgery for 13 of the 15 behaviours. UGFS was superior to EVLA in terms of return to full time work, looking after children and walking (both short and long distances). Importantly, the specific behaviours assessed were shown to have a range of different recovery trajectories. Previous randomized clinical trials showed behavioural recovery to be more rapid following UGFS compared with surgery1, 2, but the benefit of EVLA over surgery was less clear2, 4, 8 . In this study, for all but two behaviours (wearing clothes that showed the legs and showering/bathing) the recovery was quicker following UGFS or EVLA compared with surgery. These findings may have arisen as a result of information contained in the study PIL, which recommended that compression hosiery was worn continuously for 10 days following UGFS or EVLA but for 4–5 days routinely after surgery. In the comparison between UGFS and EVLA, behavioural recovery was faster following UGFS for four of the 15 behaviours; there was no difference between the groups for the other behaviours. Two previous randomized trials2, 3 showed earlier return to 'normal activities' in patients undergoing UGFS compared with EVLA. Specifically, the present study showed a quicker return to full‐time work following UGFS, similar to the findings of Rasmussen and colleagues2. The median time taken to return to work following EVLA (7·7 days) was within the ranges reported2, 4, 5, 6, 7, 8. However, Rasmussen and colleagues2 reported earlier return to work after UGFS compared with the present study (median 2·9 versus 4·8 days respectively). 
A partial explanation of the difference between the two studies may be that, unlike the previous study, the present analysis did not correct for weekends. For other behaviours, the recalled recovery times following both UGFS and EVLA were longer than might be expected from the literature2, 3, 4, 5, 8. This may be explained by the timing of the questionnaire at 6 weeks, and thus it is the nature of the differences between treatment groups rather than the absolute timings taken to return to these activities that the authors wish to highlight in this paper. The extent of this overall delay in recovery is hard to justify, particularly in light of the standard information and advice given in the study PIL. There may have been a number of external influences affecting participants' recollection of their recovery, including misinformation and fear. Although attitudes to recovery and return to normal behaviours have changed in secondary care, this may not have filtered into primary care or 'public knowledge'. Fear of activity or fear of pain caused by activity has been documented following surgery for other conditions17, 18. It is possible that some people undergoing treatment for varicose veins experience similar fears, and this may limit or restrict their activity following treatment. With regard to return to work, there are clearly a number of additional factors that might play a role, such as a person's employment status (employed or self‐employed), the sickness benefits they are entitled to, the type of work they are employed to do, how long they are 'signed off' by the doctor, and the views of their employer on return to work after an operation. It should be noted that this study distinguished between partial and full return to work, and that no difference was noted in partial return to work following UGFS and EVLA. This finding may be of substantial importance to patients, their employers and the economy as a whole. The main strength of this study is that the behaviours investigated were based on systematic investigation of the recovery milestones that are important to patients following treatment for varicose veins. Hence, the findings are of personal importance from a patient perspective. Distinguishing between the behaviours that contribute to 'normal activity' helps build a profile of recovery that may be particularly useful for patients preparing for, or recovering from, treatment. Furthermore, the methodology used to develop the BRAVVO questionnaire could be used in other conditions to provide normative information about behavioural recovery that is relevant to patients. The BRAVVO questionnaire was pilot tested and found to be acceptable to patients, comprehensible and appropriate for self‐completion. Despite this, a potential weakness of the study is that the level of missing data in the BRAVVO questionnaire was higher for two of the questions. Further work to reformat or rephrase the questions or response options may help minimize levels of missing data. A further potential weakness is the choice of assessment time point (6 weeks after treatment). This may have compromised recall, particularly for behaviours that participants were able to return to a short time after treatment; however, any compromise in recall is likely to have affected the three treatment groups equally. Other study outcomes were assessed at 6 weeks, and behavioural recovery was assessed at the same time point to minimize participant burden. 
Further work is required to determine the optimal timing(s) of this questionnaire. Given that the median time to return to the behaviour was less than 14 days for 13 of the behaviours, and up to 22 days for the other two (wearing clothes that show the legs, sporting activity or exercise), the use of the questionnaire at approximately 2–3 weeks would seem appropriate.\n\n**Question:** Compared to surgery what was the result of Ultrasound‐guided foam sclerotherapy on Wearing clothes that show the legs?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
839
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Behavioural recovery after treatment for varicose veins\n\n ABSTRACT.BACKGROUND:\nThe aim of this study was to assess behavioural recovery from the patient's perspective as a prespecified secondary outcome in a multicentre parallel‐group randomized clinical trial comparing ultrasound‐guided foam sclerotherapy (UGFS), endovenous laser ablation (EVLA) and surgery for the treatment of primary varicose veins.\n\nABSTRACT.METHODS:\nParticipants were recruited from 11 UK sites as part of the CLASS trial, a randomized trial of UGFS, EVLA or surgery for varicose veins. Patients were followed up 6 weeks after treatment and asked to complete the Behavioural Recovery After treatment for Varicose Veins (BRAVVO) questionnaire. This is a 15‐item instrument that covers eight activity behaviours (tasks or actions an individual is capable of doing in an idealized situation) and seven participation behaviours (what the individual does in an everyday, real‐world situation) that were identified to be important from the patient's perspective.\n\nABSTRACT.RESULTS:\nA total of 798 participants were recruited. Both UGFS and EVLA resulted in a significantly quicker recovery compared with surgery for 13 of the 15 behaviours assessed. UGFS was superior to EVLA in terms of return to full‐time work (hazard ratio 1·43, 95 per cent c.i. 1·11 to 1·85), looking after children (1·45, 1·04 to 2·02) and walks of short (1·48, 1·19 to 1·84) and longer (1·32, 1·05 to 1·66) duration.\n\nABSTRACT.CONCLUSION:\nBoth UGFS and EVLA resulted in more rapid recovery than surgery, and UGFS was superior to EVLA for one‐quarter of the behaviours assessed. The BRAVVO questionnaire has the potential to provide important meaningful information to patients about their early recovery and what they may expect to be able to achieve after treatment.\n\nBODY.INTRODUCTION:\nMinimally invasive treatments for varicose veins such as ultrasound‐guided foam sclerotherapy (UGFS) and thermal ablation techniques have become widely used alternatives to surgery for the treatment of varicose veins. One of the advantages of these techniques is the reported quicker return to normal activities, particularly following UGFS1, 2, 3. However, it is unclear whether thermal ablation, in particular endovenous laser ablation (EVLA), is also associated with a clinically significant quicker return to normal activities compared with surgery; some studies4, 5 have shown an earlier return and others2, 6, 7, 8 no difference. Until recently, there was no standard means of assessing recovery from the patient's perspective. This led to the use of varying definitions such as return to 'normal activities', 'full activity', 'daily activity' or 'basic physical activities' and/or 'return to work' in previous studies. 
This lack of standardization led the authors to develop a 15‐item questionnaire to assess distinct aspects of normal activities that were identified as important by patients9 – the Behavioural Recovery After treatment for Varicose Veins (BRAVVO) questionnaire. This paper reports behavioural recovery results from a multicentre parallel‐group randomized clinical trial (CLASS, Comparison of LAser, Surgery and foam Sclerotherapy) that compared the clinical efficacy and cost‐effectiveness of three treatment modalities: UGFS, EVLA with delayed foam sclerotherapy to residual varicosities if required, and surgery. Behavioural recovery was one of the prespecified secondary outcomes of the CLASS trial. The clinical and cost‐effectiveness results have been reported elsewhere10, 11.\n\nBODY.METHODS:\nPatients were recruited from 11 centres in the UK between November 2008 and October 2012. This study (ISRCTN51995477) had research ethics committee and Medicines and Healthcare products Regulatory Authority approval. Eight centres randomized participants to one of three treatment options, and three centres offered only UGFS and surgery. Participants were randomized between the treatments with even allocation, using a minimization algorithm that included centre, age (less than 50 years, 50 years or more), sex, great saphenous vein (GSV) or small saphenous vein (SSV) reflux, and unilateral or bilateral disease. Inclusion criteria were: age over 18 years; primary unilateral or bilateral symptomatic varicose veins (Clinical Etiologic Anatomic Pathophysiological (CEAP) grade C2 or above); GSV and/or SSV involvement; and reflux exceeding 1 s on duplex ultrasonography. Exclusion criteria were: current deep vein thrombosis; acute superficial vein thrombosis; a GSV or SSV diameter smaller than 3 mm or larger than 15 mm; tortuous veins considered unsuitable for EVLA or stripping; and contraindications to UGFS or to general/regional anaesthesia that would be required for surgery.\n\nBODY.METHODS.TREATMENTS:\nThe treatments have been described in detail elsewhere9, 10. For UGFS, foam was produced using the Tessari technique12 using a ratio of 0·5 ml sodium tetradecyl sulphate to 1·5 ml air (3 per cent for GSV/SSV truncal veins, 1 per cent for varicosities; maximum 12 ml foam per session). EVLA of GSVs/SSVs was performed under local anaesthetic, and patients were offered UGFS to any residual varicosities at 6‐week follow‐up if required, with the exception of one centre that performed concurrent phlebectomies. Surgery in the form of proximal GSV/SSV ligation and stripping (all GSV) and concurrent phlebectomies was performed under general or regional anaesthetic as a day‐case procedure. Compression stockings were applied after all three treatments.\n\nBODY.METHODS.POST‐TREATMENT ACTIVITY:\nAll participants were given a study patient information leaflet (PIL), which recommended a return to all normal activities as soon as they were able, but that strenuous activity/contact sport should be avoided for 1–2 weeks. The PIL specifically stated that following EVLA or UGFS 'most people are able to return to work within 2–3 days of treatment, but some people go back the following day or even the same day', and that following surgery 'people can return to office or sedentary work after 2–3 days; and that most people will be back at work within a week after surgery to one leg and 2 weeks after surgery to both legs; but there is no reason to remain off work as long if it can be managed with reasonable comfort'. 
Participants undergoing UGFS or EVLA were advised to wear compression stocking for 10 days constantly (day and night). Those in the surgery group were advised that bandages would be removed the day after operation, following which they should wear a stocking for 10 days, but that it was reasonable to remove the stocking after 4 or 5 days, providing that they were active.\n\nBODY.METHODS.DATA COLLECTION:\nThe participants were asked to complete the BRAVVO questionnaire along with other study questionnaires (Aberdeen Varicose Vein Questionnaire, EQ‐5DTM (EuroQoL, Rotterdam, The Netherlands) and Short Form 36 (QualityMetric, Lincoln, Rhode Island, USA)) at the 6‐week follow‐up appointment. Participants who failed to attend the 6‐week appointment were sent the questionnaire to complete at home. The BRAVVO questionnaire was developed as an instrument to assess the activity and participation components of the World Health Organization International Classification of Disability and Function model13. Variation in activity and participation is not fully explained by impairment and so these constructs are important additional indicators of health outcome. An interview study involving 17 patients who had recently undergone varicose vein treatment was carried out to identify normal activities and 'milestone' behaviours to incorporate into the questionnaire. In addition to sampling from the three treatment options, diversity sampling was used in an attempt to gain a mix of sex, age and rural–urban location. Seventeen interview transcripts were content‐analysed in four stages to identify appropriate items to include in a questionnaire. Full details of this process have been published previously9. The BRAVVO questionnaire assesses the time taken for patients to return to performing 15 behaviours: eight 'activity' behaviours (tasks or actions an individual is capable of doing in an idealized situation) and seven 'participation' behaviours (what the individual does in an everyday, real‐world situation) that were identified to be important from the patient's perspective9. Fig. 1 shows the question layout. Figure 1Question layoutBJS-10081-FIG-0001-c\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nData from the BRAVVO questionnaire were analysed within an interval‐censored time‐to‐event framework using flexible parametric survival models14. For each behaviour item, each participant's response was converted into the number of days to return to that behaviour. If a participant indicated that return to the behaviour was on the day of the procedure, this was assumed to be interval‐censored between day 0 and day 1. If a participant indicated return to the behaviour was after a number of weeks, not days, this was assumed to be interval‐censored between the previous week and the week indicated. For example, if a participant reported 5 weeks, it was assumed that the return to the behaviour took place between 28 and 35 days. A participant who indicated that they had not returned to a behaviour that they usually performed was right‐censored at 42 days. Participants who indicated that they did not normally perform a specific behaviour were not included in analysis of that behaviour. No missing data were imputed. Data are reported as the number of days for 50 and 90 per cent of participants to return to each behaviour, estimated from the parametric survival models (the 50 per cent value represents the median time to return to this behaviour). 
Extrapolation beyond the 42‐day cut‐off was performed for behaviours where 90 per cent of participants had not returned to the behaviour by 42 days. Treatment effects are presented as hazard ratios with associated 95 per cent c.i. All analyses were carried out in Stata® 1215. Flexible parametric survival models were fitted using the stpm package16.\n\nBODY.RESULTS:\nSeven hundred and ninety‐eight participants were recruited, of whom 13 were ineligible (for example because they had recurrent veins or veins larger than 15 mm in diameter) after randomization and were considered postrandomization exclusions (Fig. \n2). The groups were well balanced in terms of demographic characteristics at baseline, but there was an increased incidence of deep venous reflux in the foam group compared with the surgery group (P = 0·005) (Table \n1). Of the 670 participants who completed the 6‐week questionnaire, 655 completed at least one of the BRAVVO questions. Completion rates were slightly lower for the questions about going out socially (74·8 per cent) and sporting activity (66·0 per cent), which may not have been relevant to all participants. For all behaviours, except wearing clothes that show the leg, going out socially and sporting activities, over 95 per cent of participants had returned to normal behaviour within 6 weeks of intervention. Figure 2CONSORT diagram for the trial. Reasons for postrandomization exclusion included: recurrent varicose veins and veins larger than 15 mm. Reasons for withdrawal from follow‐up included: patient decided not to proceed with treatment (and also declined follow‐up), declined follow‐up after treatment or did not wish to complete questionnaires. UGFS, ultrasound‐guided foam sclerotherapy; EVLA, endovenous laser ablationBJS-10081-FIG-0002-c Table 1 Demographic details at recruitment EVLA UGFS Surgery ( n = 210) ( n = 286) ( n = 289) Age (years) * \n 49·7 (18–80) 49·0 (19–78) 49·2 (22–85) Sex ratio (F : M) 120 : 90 162 : 124 163 : 126 Body mass index (kg/m 2 ) * \n 27·0 (17–42) 27·1 (17–44) 27·7 (17–44) Unilateral disease 153 (72·9) 215 (75·2) 196 (67·8) Employment status Self‐employed 21 (10·2) 37 (13·0) 29 (10·3) Employed 120 (58·3) 169 (59·3) 179 (63·5) Other 65 (31·6) 79 (27·7) 74 (26·2) Unknown 4 1 7 Saphenous vein involvement Great saphenous 182 (86·7) 232 (81·1) 239 (82·7) Small saphenous 14 (6·7) 21 (7·3) 21 (7·3) Great and small saphenous 14 (6·7) 33 (11·5) 29 (10·0) Deep vein reflux 28 of 205 (13·7) 47 of 280 (16·8) 25 of 282 (8·9) CEAP classification C2, varicose veins over 3 mm 113 (54·1) 169 (59·1) 147 (51·2) C3, oedema 28 (13·4) 35 (12·2) 39 (13·6) C4, skin/subcutaneous changes 56 (26·8) 74 (25·9) 90 (31·4) C5/C6, healed/active venous ulcer 12 (5·7) 8 (2·8) 11 (3·8) Unknown 1 0 2 Values in parentheses are percentages unless indicated otherwise; * values are mean (range). EVLA, endovenous laser ablation; UGFS, ultrasound‐guided foam sclerotherapy; CEAP, Clinical Etiologic Anatomic Pathophysiologic. \n\nBODY.RESULTS.ULTRASOUND‐GUIDED FOAM SCLEROTHERAPY :\nParticipants randomized to UGFS recalled being able to carry out 13 of the 15 behaviours significantly more quickly than those randomized to surgery (Table \n2\n). The two behaviours for which there was no evidence of a difference in the time to recover between the trial arms were 'having a bath or shower' and 'wearing clothes that show the legs'. In general, the median time to return to the activity behaviours was 5 days or less for those randomized to UGFS and up to 9 days for those randomized to surgery. 
In both groups, there was greater variation in the median time to return to the participation behaviours than the activity behaviours. Table 2 Behavioural recovery: ultrasound‐guided foam sclerotherapy versus surgery Proportion carrying out behaviour (%) Time until specified proportion of participants could carry out behaviour (days) * \n Hazard ratio † \n UGFS Surgery Activity items Bending the legs without discomfort 50 3·0 4·6 1·38 (1·14, 1·67) 90 14·1 21·3 Lifting heavy objects without discomfort 50 4·8 9·8 1·97 (1·59, 2·44) 90 16·9 34·5 Moving from standing to sitting without discomfort 50 1·9 3·7 1·63 (1·35, 1·97) 90 9·3 17·5 Standing still for a long time (> 15 min ) without discomfort 50 3·9 7·1 1·67 (1·36, 2·05) 90 15·8 28·7 Walking short distances (< 20 min ) without discomfort 50 1·9 4·4 2·00 (1·65, 2·42) 90 8·2 19·1 Walking long distances (> 20 min) 50 4·5 8·0 1·76 (1·45, 2·14) 90 15·2 27·1 Having a bath or shower 50 5·4 4·9 0·85 (0·70, 1·03) 90 11·4 10·3 Driving a car 50 4·1 7·0 1·78 (1·45, 2·19) 90 12·4 21·1 Participation items Doing housework 50 2·1 4·5 2·10 (1·72, 2·56) 90 7·3 15·7 Looking after children 50 1·2 3·5 2·20 (1·61, 3·00) 90 6·2 17·9 Wearing clothes that show the legs 50 12·4 12·8 1·03 (0·78, 1·35) 90 56·6 58·7 Partial return to normal work/employment 50 4·4 9·9 2·16 (1·72, 2·72) 90 15·4 34·2 Full return to normal work/employment 50 4·8 11·7 2·56 (2·05, 3·21) 90 14·9 36·2 Going out socially 50 7·1 9·3 1·29 (1·06, 1·57) 90 25·8 34·0 Sporting activity or exercise 50 15·7 21·8 1·33 (1·05, 1·68) 90 62·6 86·7 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the surgery arm. UGFS, ultrasound‐guided foam sclerotherapy. \n\nBODY.RESULTS.ENDOVENOUS LASER ABLATION :\nParticipants randomized to EVLA recalled being able to carry out 13 of the 15 behaviours significantly more quickly than those randomized to surgery (Table \n3). Return to 'having a bath or shower' was quicker after surgery than after EVLA. There was no difference in time to return to the participation behaviour of 'wearing clothes that show the legs'. 
Table 3 Behavioural recovery: endovenous laser ablation versus surgery Proportion carrying out behaviour (%) Time until specified proportion of participants could carry out behaviour (days) * \n Hazard ratio † \n EVLA Surgery Activity items Bending the legs without discomfort 50 2·7 4·6 1·49 (1·19, 1·75) 90 12·6 21·3 Lifting heavy objects without discomfort 50 5·9 9·8 1·79 (1·39, 2·27) 90 20·5 34·5 Moving from standing to sitting without discomfort 50 2·2 3·7 1·56 (1·27, 1·96) 90 10·4 17·5 Standing still for a long time (> 15 min) without discomfort 50 4·8 7·1 1·41 (1·11, 1·79) 90 20·0 28·7 Walking short distances (< 20 min) without discomfort 50 3·0 4·4 1·30 (1·04, 1·61) 90 13·2 19·1 Walking long distances (> 20 min) 50 5·6 8·0 1·53 (1·06, 1·67) 90 19·8 27·1 Having a bath or shower 50 5·5 4·9 0·74 (0·59, 0·93) 90 12·8 10·3 Driving a car 50 4·4 7·0 1·82 (1·43, 2·33) 90 12·7 21·1 Participation items Doing housework 50 2·5 4·5 1·89 (1·49, 2·38) 90 8·4 15·7 Looking after children 50 1·9 3·5 1·61 (1·15, 2·27) 90 8·8 17·9 Wearing clothes that show the legs 50 14·6 12·8 0·97 (0·69, 1·35) 90 75·1 58·7 Partial return to normal work/employment 50 6·3 9·9 1·75 (1·33, 2·27) 90 21·1 34·2 Full return to normal work/employment 50 7·7 11·7 1·79 (1·37, 2·27) 90 23·5 36·2 Going out socially 50 6·9 9·3 1·41 (1·12, 1·75) 90 23·9 34·0 Sporting activity or exercise 50 14·2 21·8 1·47 (1·12, 1·92) 90 55·5 86·7 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the surgery arm. EVLA, endovenous laser ablation. There were no differences in the time taken to return to 11 of the 15 behaviours between participants randomized to EVLA and those randomized to UGFS (Table \n4). Return to 'walking short distances without discomfort', 'walking long distances', 'looking after children' and 'full return to normal work/employment' took longer for the EVLA group than the UGFS group. Following UGFS or EVLA only one‐third of the specific behaviours could be carried out by 50 per cent of participants by 3 days after treatment. 
Table 4 Behavioural recovery: endovenous laser ablation versus ultrasound‐guided foam sclerotherapy Proportion carrying out behaviour (%) Time until specified proportion of participants can carry out behaviour (days) * \n Hazard ratio † \n EVLA UGFS Activity items Bending the legs without discomfort 50 2·7 3·0 0·94 (0·75, 1·17) 90 12·6 14·1 Lifting heavy objects without discomfort 50 5·9 4·8 1·11 (0·87, 1·42) 90 20·5 16·9 Moving from standing to sitting without discomfort 50 2·2 1·9 1·12 (0·90, 1·40) 90 10·4 9·3 Standing still for a long time (> 15 min) without discomfort 50 4·8 3·9 1·14 (0·90, 1·44) 90 20·0 15·8 Walking short distances (< 20 min) without discomfort 50 3·0 1·9 1·48 (1·19, 1·84) 90 13·2 8·2 Walking long distances (> 20 min) 50 5·6 4·5 1·32 (1·05, 1·66) 90 19·8 15·2 Having a bath or shower 50 5·5 5·4 1·19 (0·96, 1·48) 90 12·8 11·4 Driving a car 50 4·4 4·1 0·95 (0·74, 1·21) 90 12·7 12·4 Participation items Doing housework 50 2·5 2·1 1·03 (0·82, 1·29) 90 8·4 7·3 Looking after children 50 1·9 1·2 1·45 (1·04, 2·02) 90 8·8 6·2 Wearing clothes that show the legs 50 14·6 12·4 1·17 (0·83, 1·64) 90 75·1 56·6 Partial return to normal work/employment 50 6·3 4·4 1·17 (0·89, 1·52) 90 21·1 15·4 Full return to normal work/employment 50 7·7 4·8 1·43 (1·11, 1·85) 90 23·5 14·9 Going out socially 50 6·9 7·1 0·88 (0·70, 1·10) 90 23·9 25·8 Sporting activity or exercise 50 14·2 15·7 0·80 (0·61, 1·04) 90 55·5 62·6 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the endovenous laser ablation (EVLA) arm. UGFS, ultrasound‐guided foam sclerotherapy. \n\nBODY.DISCUSSION:\nThis study showed that both UGFS and EVLA resulted in a more rapid recovery compared with surgery for 13 of the 15 behaviours. UGFS was superior to EVLA in terms of return to full time work, looking after children and walking (both short and long distances). Importantly, the specific behaviours assessed were shown to have a range of different recovery trajectories. Previous randomized clinical trials showed behavioural recovery to be more rapid following UGFS compared with surgery1, 2, but the benefit of EVLA over surgery was less clear2, 4, 8 . In this study, for all but two behaviours (wearing clothes that showed the legs and showering/bathing) the recovery was quicker following UGFS or EVLA compared with surgery. These findings may have arisen as a result of information contained in the study PIL, which recommended that compression hosiery was worn continuously for 10 days following UGFS or EVLA but for 4–5 days routinely after surgery. In the comparison between UGFS and EVLA, behavioural recovery was faster following UGFS for four of the 15 behaviours; there was no difference between the groups for the other behaviours. Two previous randomized trials2, 3 showed earlier return to 'normal activities' in patients undergoing UGFS compared with EVLA. Specifically, the present study showed a quicker return to full‐time work following UGFS, similar to the findings of Rasmussen and colleagues2. The median time taken to return to work following EVLA (7·7 days) was within the ranges reported2, 4, 5, 6, 7, 8. However, Rasmussen and colleagues2 reported earlier return to work after UGFS compared with the present study (median 2·9 versus 4·8 days respectively). 
A partial explanation of the difference between the two studies may be that, unlike the previous study, the present analysis did not correct for weekends. For other behaviours, the recalled recovery times following both UGFS and EVLA were longer than might be expected from the literature2, 3, 4, 5, 8. This may be explained by the timing of the questionnaire at 6 weeks, and thus it is the nature of the differences between treatment groups rather than the absolute timings taken to return to these activities that the authors wish to highlight in this paper. The extent of this overall delay in recovery is hard to justify, particularly in light of the standard information and advice given in the study PIL. There may have been a number of external influences affecting participants' recollection of their recovery, including misinformation and fear. Although attitudes to recovery and return to normal behaviours have changed in secondary care, this may not have filtered into primary care or 'public knowledge'. Fear of activity or fear of pain caused by activity has been documented following surgery for other conditions17, 18. It is possible that some people undergoing treatment for varicose veins experience similar fears, and this may limit or restrict their activity following treatment. With regard to return to work, there are clearly a number of additional factors that might play a role, such as a person's employment status (employed or self‐employed), the sickness benefits they are entitled to, the type of work they are employed to do, how long they are 'signed off' by the doctor, and the views of their employer on return to work after an operation. It should be noted that this study distinguished between partial and full return to work, and that no difference was noted in partial return to work following UGFS and EVLA. This finding may be of substantial importance to patients, their employers and the economy as a whole. The main strength of this study is that the behaviours investigated were based on systematic investigation of the recovery milestones that are important to patients following treatment for varicose veins. Hence, the findings are of personal importance from a patient perspective. Distinguishing between the behaviours that contribute to 'normal activity' helps build a profile of recovery that may be particularly useful for patients preparing for, or recovering from, treatment. Furthermore, the methodology used to develop the BRAVVO questionnaire could be used in other conditions to provide normative information about behavioural recovery that is relevant to patients. The BRAVVO questionnaire was pilot tested and found to be acceptable to patients, comprehensible and appropriate for self‐completion. Despite this, a potential weakness of the study is that the level of missing data in the BRAVVO questionnaire was higher for two of the questions. Further work to reformat or rephrase the questions or response options may help minimize levels of missing data. A further potential weakness is the choice of assessment time point (6 weeks after treatment). This may have compromised recall, particularly for behaviours that participants were able to return to a short time after treatment; however, any compromise in recall is likely to have affected the three treatment groups equally. Other study outcomes were assessed at 6 weeks, and behavioural recovery was assessed at the same time point to minimize participant burden. 
Further work is required to determine the optimal timing(s) of this questionnaire. Given that the median time to return to the behaviour was less than 14 days for 13 of the behaviours, and up to 22 days for the other two (wearing clothes that show the legs, sporting activity or exercise), the use of the questionnaire at approximately 2–3 weeks would seem appropriate.\n\n**Question:** Compared to Ultrasound‐guided foam sclerotherapy what was the result of Endovenous laser ablation on Bending the legs without discomfort, lifting heavy objects without discomfort, Moving from standing to sitting without discomfort, Standing still for a long time (> 15 min) without discomfort, Driving a car, Doing housework, Partial return to normal work/employment, Going out socially, Sporting activity or exercise?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
840
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Behavioural recovery after treatment for varicose veins\n\n ABSTRACT.BACKGROUND:\nThe aim of this study was to assess behavioural recovery from the patient's perspective as a prespecified secondary outcome in a multicentre parallel‐group randomized clinical trial comparing ultrasound‐guided foam sclerotherapy (UGFS), endovenous laser ablation (EVLA) and surgery for the treatment of primary varicose veins.\n\nABSTRACT.METHODS:\nParticipants were recruited from 11 UK sites as part of the CLASS trial, a randomized trial of UGFS, EVLA or surgery for varicose veins. Patients were followed up 6 weeks after treatment and asked to complete the Behavioural Recovery After treatment for Varicose Veins (BRAVVO) questionnaire. This is a 15‐item instrument that covers eight activity behaviours (tasks or actions an individual is capable of doing in an idealized situation) and seven participation behaviours (what the individual does in an everyday, real‐world situation) that were identified to be important from the patient's perspective.\n\nABSTRACT.RESULTS:\nA total of 798 participants were recruited. Both UGFS and EVLA resulted in a significantly quicker recovery compared with surgery for 13 of the 15 behaviours assessed. UGFS was superior to EVLA in terms of return to full‐time work (hazard ratio 1·43, 95 per cent c.i. 1·11 to 1·85), looking after children (1·45, 1·04 to 2·02) and walks of short (1·48, 1·19 to 1·84) and longer (1·32, 1·05 to 1·66) duration.\n\nABSTRACT.CONCLUSION:\nBoth UGFS and EVLA resulted in more rapid recovery than surgery, and UGFS was superior to EVLA for one‐quarter of the behaviours assessed. The BRAVVO questionnaire has the potential to provide important meaningful information to patients about their early recovery and what they may expect to be able to achieve after treatment.\n\nBODY.INTRODUCTION:\nMinimally invasive treatments for varicose veins such as ultrasound‐guided foam sclerotherapy (UGFS) and thermal ablation techniques have become widely used alternatives to surgery for the treatment of varicose veins. One of the advantages of these techniques is the reported quicker return to normal activities, particularly following UGFS1, 2, 3. However, it is unclear whether thermal ablation, in particular endovenous laser ablation (EVLA), is also associated with a clinically significant quicker return to normal activities compared with surgery; some studies4, 5 have shown an earlier return and others2, 6, 7, 8 no difference. Until recently, there was no standard means of assessing recovery from the patient's perspective. This led to the use of varying definitions such as return to 'normal activities', 'full activity', 'daily activity' or 'basic physical activities' and/or 'return to work' in previous studies. 
This lack of standardization led the authors to develop a 15‐item questionnaire to assess distinct aspects of normal activities that were identified as important by patients9 – the Behavioural Recovery After treatment for Varicose Veins (BRAVVO) questionnaire. This paper reports behavioural recovery results from a multicentre parallel‐group randomized clinical trial (CLASS, Comparison of LAser, Surgery and foam Sclerotherapy) that compared the clinical efficacy and cost‐effectiveness of three treatment modalities: UGFS, EVLA with delayed foam sclerotherapy to residual varicosities if required, and surgery. Behavioural recovery was one of the prespecified secondary outcomes of the CLASS trial. The clinical and cost‐effectiveness results have been reported elsewhere10, 11.\n\nBODY.METHODS:\nPatients were recruited from 11 centres in the UK between November 2008 and October 2012. This study (ISRCTN51995477) had research ethics committee and Medicines and Healthcare products Regulatory Authority approval. Eight centres randomized participants to one of three treatment options, and three centres offered only UGFS and surgery. Participants were randomized between the treatments with even allocation, using a minimization algorithm that included centre, age (less than 50 years, 50 years or more), sex, great saphenous vein (GSV) or small saphenous vein (SSV) reflux, and unilateral or bilateral disease. Inclusion criteria were: age over 18 years; primary unilateral or bilateral symptomatic varicose veins (Clinical Etiologic Anatomic Pathophysiological (CEAP) grade C2 or above); GSV and/or SSV involvement; and reflux exceeding 1 s on duplex ultrasonography. Exclusion criteria were: current deep vein thrombosis; acute superficial vein thrombosis; a GSV or SSV diameter smaller than 3 mm or larger than 15 mm; tortuous veins considered unsuitable for EVLA or stripping; and contraindications to UGFS or to general/regional anaesthesia that would be required for surgery.\n\nBODY.METHODS.TREATMENTS:\nThe treatments have been described in detail elsewhere9, 10. For UGFS, foam was produced using the Tessari technique12 using a ratio of 0·5 ml sodium tetradecyl sulphate to 1·5 ml air (3 per cent for GSV/SSV truncal veins, 1 per cent for varicosities; maximum 12 ml foam per session). EVLA of GSVs/SSVs was performed under local anaesthetic, and patients were offered UGFS to any residual varicosities at 6‐week follow‐up if required, with the exception of one centre that performed concurrent phlebectomies. Surgery in the form of proximal GSV/SSV ligation and stripping (all GSV) and concurrent phlebectomies was performed under general or regional anaesthetic as a day‐case procedure. Compression stockings were applied after all three treatments.\n\nBODY.METHODS.POST‐TREATMENT ACTIVITY:\nAll participants were given a study patient information leaflet (PIL), which recommended a return to all normal activities as soon as they were able, but that strenuous activity/contact sport should be avoided for 1–2 weeks. The PIL specifically stated that following EVLA or UGFS 'most people are able to return to work within 2–3 days of treatment, but some people go back the following day or even the same day', and that following surgery 'people can return to office or sedentary work after 2–3 days; and that most people will be back at work within a week after surgery to one leg and 2 weeks after surgery to both legs; but there is no reason to remain off work as long if it can be managed with reasonable comfort'. 
Participants undergoing UGFS or EVLA were advised to wear compression stocking for 10 days constantly (day and night). Those in the surgery group were advised that bandages would be removed the day after operation, following which they should wear a stocking for 10 days, but that it was reasonable to remove the stocking after 4 or 5 days, providing that they were active.\n\nBODY.METHODS.DATA COLLECTION:\nThe participants were asked to complete the BRAVVO questionnaire along with other study questionnaires (Aberdeen Varicose Vein Questionnaire, EQ‐5DTM (EuroQoL, Rotterdam, The Netherlands) and Short Form 36 (QualityMetric, Lincoln, Rhode Island, USA)) at the 6‐week follow‐up appointment. Participants who failed to attend the 6‐week appointment were sent the questionnaire to complete at home. The BRAVVO questionnaire was developed as an instrument to assess the activity and participation components of the World Health Organization International Classification of Disability and Function model13. Variation in activity and participation is not fully explained by impairment and so these constructs are important additional indicators of health outcome. An interview study involving 17 patients who had recently undergone varicose vein treatment was carried out to identify normal activities and 'milestone' behaviours to incorporate into the questionnaire. In addition to sampling from the three treatment options, diversity sampling was used in an attempt to gain a mix of sex, age and rural–urban location. Seventeen interview transcripts were content‐analysed in four stages to identify appropriate items to include in a questionnaire. Full details of this process have been published previously9. The BRAVVO questionnaire assesses the time taken for patients to return to performing 15 behaviours: eight 'activity' behaviours (tasks or actions an individual is capable of doing in an idealized situation) and seven 'participation' behaviours (what the individual does in an everyday, real‐world situation) that were identified to be important from the patient's perspective9. Fig. 1 shows the question layout. Figure 1Question layoutBJS-10081-FIG-0001-c\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nData from the BRAVVO questionnaire were analysed within an interval‐censored time‐to‐event framework using flexible parametric survival models14. For each behaviour item, each participant's response was converted into the number of days to return to that behaviour. If a participant indicated that return to the behaviour was on the day of the procedure, this was assumed to be interval‐censored between day 0 and day 1. If a participant indicated return to the behaviour was after a number of weeks, not days, this was assumed to be interval‐censored between the previous week and the week indicated. For example, if a participant reported 5 weeks, it was assumed that the return to the behaviour took place between 28 and 35 days. A participant who indicated that they had not returned to a behaviour that they usually performed was right‐censored at 42 days. Participants who indicated that they did not normally perform a specific behaviour were not included in analysis of that behaviour. No missing data were imputed. Data are reported as the number of days for 50 and 90 per cent of participants to return to each behaviour, estimated from the parametric survival models (the 50 per cent value represents the median time to return to this behaviour). 
Extrapolation beyond the 42‐day cut‐off was performed for behaviours where 90 per cent of participants had not returned to the behaviour by 42 days. Treatment effects are presented as hazard ratios with associated 95 per cent c.i. All analyses were carried out in Stata® 1215. Flexible parametric survival models were fitted using the stpm package16.\n\nBODY.RESULTS:\nSeven hundred and ninety‐eight participants were recruited, of whom 13 were ineligible (for example because they had recurrent veins or veins larger than 15 mm in diameter) after randomization and were considered postrandomization exclusions (Fig. \n2). The groups were well balanced in terms of demographic characteristics at baseline, but there was an increased incidence of deep venous reflux in the foam group compared with the surgery group (P = 0·005) (Table \n1). Of the 670 participants who completed the 6‐week questionnaire, 655 completed at least one of the BRAVVO questions. Completion rates were slightly lower for the questions about going out socially (74·8 per cent) and sporting activity (66·0 per cent), which may not have been relevant to all participants. For all behaviours, except wearing clothes that show the leg, going out socially and sporting activities, over 95 per cent of participants had returned to normal behaviour within 6 weeks of intervention. Figure 2CONSORT diagram for the trial. Reasons for postrandomization exclusion included: recurrent varicose veins and veins larger than 15 mm. Reasons for withdrawal from follow‐up included: patient decided not to proceed with treatment (and also declined follow‐up), declined follow‐up after treatment or did not wish to complete questionnaires. UGFS, ultrasound‐guided foam sclerotherapy; EVLA, endovenous laser ablationBJS-10081-FIG-0002-c Table 1 Demographic details at recruitment EVLA UGFS Surgery ( n = 210) ( n = 286) ( n = 289) Age (years) * \n 49·7 (18–80) 49·0 (19–78) 49·2 (22–85) Sex ratio (F : M) 120 : 90 162 : 124 163 : 126 Body mass index (kg/m 2 ) * \n 27·0 (17–42) 27·1 (17–44) 27·7 (17–44) Unilateral disease 153 (72·9) 215 (75·2) 196 (67·8) Employment status Self‐employed 21 (10·2) 37 (13·0) 29 (10·3) Employed 120 (58·3) 169 (59·3) 179 (63·5) Other 65 (31·6) 79 (27·7) 74 (26·2) Unknown 4 1 7 Saphenous vein involvement Great saphenous 182 (86·7) 232 (81·1) 239 (82·7) Small saphenous 14 (6·7) 21 (7·3) 21 (7·3) Great and small saphenous 14 (6·7) 33 (11·5) 29 (10·0) Deep vein reflux 28 of 205 (13·7) 47 of 280 (16·8) 25 of 282 (8·9) CEAP classification C2, varicose veins over 3 mm 113 (54·1) 169 (59·1) 147 (51·2) C3, oedema 28 (13·4) 35 (12·2) 39 (13·6) C4, skin/subcutaneous changes 56 (26·8) 74 (25·9) 90 (31·4) C5/C6, healed/active venous ulcer 12 (5·7) 8 (2·8) 11 (3·8) Unknown 1 0 2 Values in parentheses are percentages unless indicated otherwise; * values are mean (range). EVLA, endovenous laser ablation; UGFS, ultrasound‐guided foam sclerotherapy; CEAP, Clinical Etiologic Anatomic Pathophysiologic. \n\nBODY.RESULTS.ULTRASOUND‐GUIDED FOAM SCLEROTHERAPY :\nParticipants randomized to UGFS recalled being able to carry out 13 of the 15 behaviours significantly more quickly than those randomized to surgery (Table \n2\n). The two behaviours for which there was no evidence of a difference in the time to recover between the trial arms were 'having a bath or shower' and 'wearing clothes that show the legs'. In general, the median time to return to the activity behaviours was 5 days or less for those randomized to UGFS and up to 9 days for those randomized to surgery. 
In both groups, there was greater variation in the median time to return to the participation behaviours than the activity behaviours. Table 2 Behavioural recovery: ultrasound‐guided foam sclerotherapy versus surgery Proportion carrying out behaviour (%) Time until specified proportion of participants could carry out behaviour (days) * \n Hazard ratio † \n UGFS Surgery Activity items Bending the legs without discomfort 50 3·0 4·6 1·38 (1·14, 1·67) 90 14·1 21·3 Lifting heavy objects without discomfort 50 4·8 9·8 1·97 (1·59, 2·44) 90 16·9 34·5 Moving from standing to sitting without discomfort 50 1·9 3·7 1·63 (1·35, 1·97) 90 9·3 17·5 Standing still for a long time (> 15 min ) without discomfort 50 3·9 7·1 1·67 (1·36, 2·05) 90 15·8 28·7 Walking short distances (< 20 min ) without discomfort 50 1·9 4·4 2·00 (1·65, 2·42) 90 8·2 19·1 Walking long distances (> 20 min) 50 4·5 8·0 1·76 (1·45, 2·14) 90 15·2 27·1 Having a bath or shower 50 5·4 4·9 0·85 (0·70, 1·03) 90 11·4 10·3 Driving a car 50 4·1 7·0 1·78 (1·45, 2·19) 90 12·4 21·1 Participation items Doing housework 50 2·1 4·5 2·10 (1·72, 2·56) 90 7·3 15·7 Looking after children 50 1·2 3·5 2·20 (1·61, 3·00) 90 6·2 17·9 Wearing clothes that show the legs 50 12·4 12·8 1·03 (0·78, 1·35) 90 56·6 58·7 Partial return to normal work/employment 50 4·4 9·9 2·16 (1·72, 2·72) 90 15·4 34·2 Full return to normal work/employment 50 4·8 11·7 2·56 (2·05, 3·21) 90 14·9 36·2 Going out socially 50 7·1 9·3 1·29 (1·06, 1·57) 90 25·8 34·0 Sporting activity or exercise 50 15·7 21·8 1·33 (1·05, 1·68) 90 62·6 86·7 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the surgery arm. UGFS, ultrasound‐guided foam sclerotherapy. \n\nBODY.RESULTS.ENDOVENOUS LASER ABLATION :\nParticipants randomized to EVLA recalled being able to carry out 13 of the 15 behaviours significantly more quickly than those randomized to surgery (Table \n3). Return to 'having a bath or shower' was quicker after surgery than after EVLA. There was no difference in time to return to the participation behaviour of 'wearing clothes that show the legs'. 
Table 3 Behavioural recovery: endovenous laser ablation versus surgery Proportion carrying out behaviour (%) Time until specified proportion of participants could carry out behaviour (days) * \n Hazard ratio † \n EVLA Surgery Activity items Bending the legs without discomfort 50 2·7 4·6 1·49 (1·19, 1·75) 90 12·6 21·3 Lifting heavy objects without discomfort 50 5·9 9·8 1·79 (1·39, 2·27) 90 20·5 34·5 Moving from standing to sitting without discomfort 50 2·2 3·7 1·56 (1·27, 1·96) 90 10·4 17·5 Standing still for a long time (> 15 min) without discomfort 50 4·8 7·1 1·41 (1·11, 1·79) 90 20·0 28·7 Walking short distances (< 20 min) without discomfort 50 3·0 4·4 1·30 (1·04, 1·61) 90 13·2 19·1 Walking long distances (> 20 min) 50 5·6 8·0 1·53 (1·06, 1·67) 90 19·8 27·1 Having a bath or shower 50 5·5 4·9 0·74 (0·59, 0·93) 90 12·8 10·3 Driving a car 50 4·4 7·0 1·82 (1·43, 2·33) 90 12·7 21·1 Participation items Doing housework 50 2·5 4·5 1·89 (1·49, 2·38) 90 8·4 15·7 Looking after children 50 1·9 3·5 1·61 (1·15, 2·27) 90 8·8 17·9 Wearing clothes that show the legs 50 14·6 12·8 0·97 (0·69, 1·35) 90 75·1 58·7 Partial return to normal work/employment 50 6·3 9·9 1·75 (1·33, 2·27) 90 21·1 34·2 Full return to normal work/employment 50 7·7 11·7 1·79 (1·37, 2·27) 90 23·5 36·2 Going out socially 50 6·9 9·3 1·41 (1·12, 1·75) 90 23·9 34·0 Sporting activity or exercise 50 14·2 21·8 1·47 (1·12, 1·92) 90 55·5 86·7 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the surgery arm. EVLA, endovenous laser ablation. There were no differences in the time taken to return to 11 of the 15 behaviours between participants randomized to EVLA and those randomized to UGFS (Table \n4). Return to 'walking short distances without discomfort', 'walking long distances', 'looking after children' and 'full return to normal work/employment' took longer for the EVLA group than the UGFS group. Following UGFS or EVLA only one‐third of the specific behaviours could be carried out by 50 per cent of participants by 3 days after treatment. 
Table 4 Behavioural recovery: endovenous laser ablation versus ultrasound‐guided foam sclerotherapy Proportion carrying out behaviour (%) Time until specified proportion of participants can carry out behaviour (days) * \n Hazard ratio † \n EVLA UGFS Activity items Bending the legs without discomfort 50 2·7 3·0 0·94 (0·75, 1·17) 90 12·6 14·1 Lifting heavy objects without discomfort 50 5·9 4·8 1·11 (0·87, 1·42) 90 20·5 16·9 Moving from standing to sitting without discomfort 50 2·2 1·9 1·12 (0·90, 1·40) 90 10·4 9·3 Standing still for a long time (> 15 min) without discomfort 50 4·8 3·9 1·14 (0·90, 1·44) 90 20·0 15·8 Walking short distances (< 20 min) without discomfort 50 3·0 1·9 1·48 (1·19, 1·84) 90 13·2 8·2 Walking long distances (> 20 min) 50 5·6 4·5 1·32 (1·05, 1·66) 90 19·8 15·2 Having a bath or shower 50 5·5 5·4 1·19 (0·96, 1·48) 90 12·8 11·4 Driving a car 50 4·4 4·1 0·95 (0·74, 1·21) 90 12·7 12·4 Participation items Doing housework 50 2·5 2·1 1·03 (0·82, 1·29) 90 8·4 7·3 Looking after children 50 1·9 1·2 1·45 (1·04, 2·02) 90 8·8 6·2 Wearing clothes that show the legs 50 14·6 12·4 1·17 (0·83, 1·64) 90 75·1 56·6 Partial return to normal work/employment 50 6·3 4·4 1·17 (0·89, 1·52) 90 21·1 15·4 Full return to normal work/employment 50 7·7 4·8 1·43 (1·11, 1·85) 90 23·5 14·9 Going out socially 50 6·9 7·1 0·88 (0·70, 1·10) 90 23·9 25·8 Sporting activity or exercise 50 14·2 15·7 0·80 (0·61, 1·04) 90 55·5 62·6 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the endovenous laser ablation (EVLA) arm. UGFS, ultrasound‐guided foam sclerotherapy. \n\nBODY.DISCUSSION:\nThis study showed that both UGFS and EVLA resulted in a more rapid recovery compared with surgery for 13 of the 15 behaviours. UGFS was superior to EVLA in terms of return to full time work, looking after children and walking (both short and long distances). Importantly, the specific behaviours assessed were shown to have a range of different recovery trajectories. Previous randomized clinical trials showed behavioural recovery to be more rapid following UGFS compared with surgery1, 2, but the benefit of EVLA over surgery was less clear2, 4, 8 . In this study, for all but two behaviours (wearing clothes that showed the legs and showering/bathing) the recovery was quicker following UGFS or EVLA compared with surgery. These findings may have arisen as a result of information contained in the study PIL, which recommended that compression hosiery was worn continuously for 10 days following UGFS or EVLA but for 4–5 days routinely after surgery. In the comparison between UGFS and EVLA, behavioural recovery was faster following UGFS for four of the 15 behaviours; there was no difference between the groups for the other behaviours. Two previous randomized trials2, 3 showed earlier return to 'normal activities' in patients undergoing UGFS compared with EVLA. Specifically, the present study showed a quicker return to full‐time work following UGFS, similar to the findings of Rasmussen and colleagues2. The median time taken to return to work following EVLA (7·7 days) was within the ranges reported2, 4, 5, 6, 7, 8. However, Rasmussen and colleagues2 reported earlier return to work after UGFS compared with the present study (median 2·9 versus 4·8 days respectively). 
A partial explanation of the difference between the two studies may be that, unlike the previous study, the present analysis did not correct for weekends. For other behaviours, the recalled recovery times following both UGFS and EVLA were longer than might be expected from the literature2, 3, 4, 5, 8. This may be explained by the timing of the questionnaire at 6 weeks, and thus it is the nature of the differences between treatment groups rather than the absolute timings taken to return to these activities that the authors wish to highlight in this paper. The extent of this overall delay in recovery is hard to justify, particularly in light of the standard information and advice given in the study PIL. There may have been a number of external influences affecting participants' recollection of their recovery, including misinformation and fear. Although attitudes to recovery and return to normal behaviours have changed in secondary care, this may not have filtered into primary care or 'public knowledge'. Fear of activity or fear of pain caused by activity has been documented following surgery for other conditions17, 18. It is possible that some people undergoing treatment for varicose veins experience similar fears, and this may limit or restrict their activity following treatment. With regard to return to work, there are clearly a number of additional factors that might play a role, such as a person's employment status (employed or self‐employed), the sickness benefits they are entitled to, the type of work they are employed to do, how long they are 'signed off' by the doctor, and the views of their employer on return to work after an operation. It should be noted that this study distinguished between partial and full return to work, and that no difference was noted in partial return to work following UGFS and EVLA. This finding may be of substantial importance to patients, their employers and the economy as a whole. The main strength of this study is that the behaviours investigated were based on systematic investigation of the recovery milestones that are important to patients following treatment for varicose veins. Hence, the findings are of personal importance from a patient perspective. Distinguishing between the behaviours that contribute to 'normal activity' helps build a profile of recovery that may be particularly useful for patients preparing for, or recovering from, treatment. Furthermore, the methodology used to develop the BRAVVO questionnaire could be used in other conditions to provide normative information about behavioural recovery that is relevant to patients. The BRAVVO questionnaire was pilot tested and found to be acceptable to patients, comprehensible and appropriate for self‐completion. Despite this, a potential weakness of the study is that the level of missing data in the BRAVVO questionnaire was higher for two of the questions. Further work to reformat or rephrase the questions or response options may help minimize levels of missing data. A further potential weakness is the choice of assessment time point (6 weeks after treatment). This may have compromised recall, particularly for behaviours that participants were able to return to a short time after treatment; however, any compromise in recall is likely to have affected the three treatment groups equally. Other study outcomes were assessed at 6 weeks, and behavioural recovery was assessed at the same time point to minimize participant burden. 
Further work is required to determine the optimal timing(s) of this questionnaire. Given that the median time to return to the behaviour was less than 14 days for 13 of the behaviours, and up to 22 days for the other two (wearing clothes that show the legs, sporting activity or exercise), the use of the questionnaire at approximately 2–3 weeks would seem appropriate.\n\n**Question:** Compared to Ultrasound‐guided foam sclerotherapy what was the result of Endovenous laser ablation on walking short distances without discomfort, walking long distances, looking after children, full return to normal work/employment?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decreased
|
863
|
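The CLASS methods in the record above allocate participants by minimisation over centre, age group, sex, GSV/SSV reflux and laterality. The snippet below is a generic Pocock–Simon-style sketch of that idea, not the trial's actual algorithm: equal factor weights, a random tie-break and availability of all three arms at every centre are assumptions (three CLASS centres offered only UGFS and surgery).

```python
import random
from collections import defaultdict

# Generic Pocock-Simon style minimisation over the CLASS balancing factors.
# Equal factor weights and a purely random tie-break are assumptions; the
# trial's exact algorithm is not reproduced here.
ARMS = ["UGFS", "EVLA", "surgery"]
FACTORS = ["centre", "age_group", "sex", "vein", "laterality"]

# counts[(factor, level)][arm] = number of participants already allocated
counts = defaultdict(lambda: {arm: 0 for arm in ARMS})

def allocate(patient):
    """Assign the arm that minimises marginal imbalance for this participant."""
    imbalance = {}
    for arm in ARMS:
        score = 0
        for factor in FACTORS:
            level_counts = counts[(factor, patient[factor])]
            hypothetical = dict(level_counts)
            hypothetical[arm] += 1          # imbalance if this arm were chosen
            score += max(hypothetical.values()) - min(hypothetical.values())
        imbalance[arm] = score
    best = min(imbalance.values())
    chosen = random.choice([a for a in ARMS if imbalance[a] == best])
    for factor in FACTORS:
        counts[(factor, patient[factor])][chosen] += 1
    return chosen

print(allocate({"centre": "A", "age_group": "<50", "sex": "F",
                "vein": "GSV", "laterality": "unilateral"}))
```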
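The BRAVVO statistical analysis described in the same record converts each questionnaire response into an interval-censored time before fitting flexible parametric survival models in Stata. A minimal pandas sketch of those censoring rules follows; treating an exact day response d as the interval (d-1, d] and the column names are assumptions, and the commented lifelines call is only a stand-in for the Royston–Parmar models used in the paper.

```python
import numpy as np
import pandas as pd

def to_interval(row):
    """Convert one BRAVVO-style response to (lower, upper) bounds in days."""
    if not row["does_behaviour"]:
        return pd.Series({"lower": np.nan, "upper": np.nan})      # excluded from analysis
    if not row["returned"]:
        return pd.Series({"lower": 42.0, "upper": np.inf})        # right-censored at 42 days
    if row["unit"] == "same_day":
        return pd.Series({"lower": 0.0, "upper": 1.0})            # day of the procedure
    if row["unit"] == "weeks":
        return pd.Series({"lower": 7.0 * (row["value"] - 1),      # e.g. 5 weeks -> (28, 35]
                          "upper": 7.0 * row["value"]})
    return pd.Series({"lower": row["value"] - 1.0,                # day response d -> (d-1, d]
                      "upper": float(row["value"])})

responses = pd.DataFrame([
    {"does_behaviour": True,  "returned": True,  "unit": "days",     "value": 3},
    {"does_behaviour": True,  "returned": True,  "unit": "weeks",    "value": 5},
    {"does_behaviour": True,  "returned": True,  "unit": "same_day", "value": 0},
    {"does_behaviour": True,  "returned": False, "unit": None,       "value": None},
])
bounds = responses.apply(to_interval, axis=1).dropna()
print(bounds)

# The paper fits Royston-Parmar flexible parametric models (Stata, stpm). As a
# rough stand-in, a parametric interval-censored fit could look like this,
# assuming the lifelines package's interval-censoring support:
# from lifelines import WeibullFitter
# wf = WeibullFitter().fit_interval_censoring(bounds["lower"], bounds["upper"])
# print(wf.percentile(0.5), wf.percentile(0.1))  # days for 50% / 90% to have returned
```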
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Statin efficacy in the treatment of hepatitis C genotype I\n\n ABSTRACT.BACKGROUND:\nLipid metabolism is one of the hepatitis C virus (HCV) life cycle steps. Statins can reduce cholesterol level and finally can decrease HCV replication. Thus, we assessed the effect of Statins in combination with standard antiviral treatment on hyperlipidemic genotype I HCV infected patients.\n\nABSTRACT.MATERIALS AND METHODS:\nThis study was a prospective clinical trial. 40 patients were selected from those referred to educational and Therapeutic Centers of Isfahan University of Medical Sciences from 2009 to 2010 with confirmed HCV viremia. All patients received Peg-interferon-a2a and ribavirin. 20 hyperlipidemic Patients received 20 mg atorvastatin nightly for 3 months and placebo was prescribed for 20 normolipidemic HCV infected patients as a control group. Liver enzymes and complete blood count were checked monthly and thyroid stimulating hormone was checked every 3 months. We also performed quantitative HCV-ribonucleic acid (RNA) test in 12th week of therapy, at the end of treatment and 6 months after therapy for all samples.\n\nABSTRACT.RESULTS:\nWe didn't find any significant differences in the mean of HCV-RNA numbers between statin and placebo groups in 12th week of treatment, in the end of treatment and 6 months after treatment (P > 0.05).\n\nABSTRACT.CONCLUSION:\nAtorvastatin has no effect on the mean of HCV viral load when we added it to standard treatment for hepatitis C infection. Further studies are necessary to examine the possible antiviral properties of statins and their potential role as adjuncts to standard HCV therapy.\n\nBODY.INTRODUCTION:\nHepatitis C virus (HCV) is an important human pathogen, not only because of its high prevalence and worldwide burden, but also because of the potentially serious complications of persistent HCV infection. These complications include cirrhosis, hepatocellular carcinoma, and end-stage liver disease necessitating liver transplantation. The incidence rates for all of these complications are expected to rise in the near future.[12] It is estimated that 130-170 million people have chronic HCV (about 3% of the world population).[34] More than 3,00,000 people die every year from hepatitis C related liver diseases.[5] There is no vaccination for HCV, so the treatment is very important. The standard medication of HCV (pegylated interferon (IFN) alpha plus ribavirin [RBV]) in HCV genotype I infected patients have shown 40-50% sustained virological response (SVR),[3] so we need more effective treatment method. Lipid metabolism is one of the HCV life cycle steps. HCV forms a replication complex in lipid raft membrane that is full of cholesterol and sphingolipids.[67] 3-hydroxy-3 methyl-glutaryl Coenzyme A inhibitors (statins) can reduce cholesterol level and finally can decrease HCV replication. 
Hence, statin therapy can be introduced as a complementary treatment. In vitro examinations have found that some statins, especially fluvastatin and atorvastatin can inhibit HCV replication,[8] although statins should be used with caution in advanced end-stage liver disease because of decompensation risk.[9] Recently, beneficial effect of statin use among patients with HCV-related liver disease has been suggested. In vitro studies show that high concentrations of statins disrupt HCV replication through depletion of isoprenoid geranylgeranyl pyrophosphate.[1011] Statins may thus have antiviral effects through mechanisms not related to lipid metabolism.[1213] The low-density lipoprotein (LDL) receptor and the high-density lipoprotein scavenger receptor B1 putatively facilitate HCV entry into hepatocytes. Complex host proteins are found to be closely associated with HCV nonstructural proteins. The process which links these host and HCV proteins is termed prenylation. Statin agents which block the formation of the lipid precursors for prenylation, could theoretically interfere with viral replication.[141516] Some human studies have done for assessing the effect of statins in hepatitis c treatment, but their results are different. O'Leary et al. have shown no reduction in HCV-ribonucleic acid (RNA) titers with 20 mg atorvastatin.[17] However, another study indicates non-dose-dependent discretion in HCV-RNA in %50 of patients,[18] the purpose of this study was to determine the effect of statin therapy on standard antiviral treatment response in hyperlipidemic genotype I HCV patients.\n\nBODY.MATERIALS AND METHODS.SELECTION OF PATIENT GROUPS:\nOur study was a prospective clinical trial which was done on genotype I HCV patients who had been referred to educational and therapeutic centers of Isfahan University of Medical Sciences between years 2009 and 2010. All patients were tested to confirm HCV viremia by Cobas Amplicor HCV Monitor test, version 2 (Roche Diagnostics, Mannheim, Germany). We selected our patients using simple random sampling. Subjects in the statin group were hyperlipidemic patients according to WHO definition for hyperlipidemia which is LDL cholesterol level more than 160 mg/dl, 130 mg/dl for patient with one risk factor and 100 mg/dl for patients with diabetes mellitus, 2 or more than risk factors. Patients in the placebo group were selected from HCV patients without hyperlipidemia. This study protocol was approved by Institutional Review Board and Ethics Committee of Isfahan University of Medical Sciences (research project number: 309079). Patients were included in the study if they were naïve for HCV treatment, more than 18 years old and patients who did not have any clinical sign for decompensate Cirrhosis. All cases who were affected by other genotypes of HCV and chronic liver disease such as hepatitis B virus, HIV, Wilson disease, hemochromathosis, autoimmune liver disease, primary biliary cirrhosis, secondary biliary cirrhosis, obstructive liver disease, viral infections that affect liver function, and who had the history of alcohol drinking and drugs related steatosis and hepatotoxicity were excluded from our study. Patients were excluded from the study if the level of their liver enzymes (alanine transaminase and aspartate aminotransferase) had risen >3 times during statin treatment. Patients were examined clinically before antiviral treatment. 
Fasting blood sugar, plasma lipids, liver enzymes tests and liver and abdomen sonography (Hitachi, Tokyo Japan) were done 2 times (with 1 month apart) before treatment for all of them. We also checked their blood pressure, waist size, and body mass index (BMI) according to related guideline.[19] All patients weekly received 180 μg Peg-IFN-a2a and 1000, 1200 and 1400 mg RBV for weights <75 kg, 75-100 kg, and more than 100 kg, respectively (Pegasys Co.). Statin group received 20 mg atorvastatin (Farabi Co., Tehran, Iran) nightly for 3 months and placebo was prescribed for the placebo group the same as atorvastatin in the statin group. Liver enzymes and complete blood count were checked monthly and thyroid stimulating hormone checked every 3 months. We also did quantitative HCV-RNA test to reveal patients with undetectable test in 12th week of therapy (early virological response [EVR]), after therapy (end of treatment virological response) and 6 months after therapy SVR for all samples. We analyzed our data using SPSS version 16 (Inc., Chicago, IL, USA) and considered P < 0.05 as valuable, Chi-square was applied for comparison of categorical parameters and Student's t-test for comparing the continuous parameters.\n\nBODY.RESULT:\nOur study included 40 HCV positive patients (20 hyperlipidemic and 20 normolipidemic patients). Our results showed that there are significant differences in the mean of BMI, triglyceride, LDL, and cholesterol between statin and placebo groups [Table 1]. Table 1 demographic and lipid character in statin and placebo group The evaluation of viral load before statin therapy did not show a significant difference between statin and placebo, also, we didn't find any significant differences in the mean of HCV-RNA numbers in statin and placebo groups in 12th week of Statins therapy (P > 0.05), at the end of treatment (P > 0.05) and 6 months after treatment (P > 0.05) [Table 2]. Our findings also implied that EVR in statin and placebo were 75 and 70% and SVR was 95% in both groups. ALT level was higher in statin group before treatment [Table 3]. Although both AST and ALT level rose after treatment, we did not find any difference between AST and ALT level after treatment. Table 2 Comparison between viral load before treatment and after start of treatment between statin and placebo groups Table 3 comparison between transaminases in statin and placebo group before and after treatment \n\nBODY.DISCUSSION:\nIn the current study, we didn't find any significant differences in the mean of viral load of hepatitis c in statin and placebo groups in 12th week of statin therapy, at the end of treatment and 6 months after treatment. Our findings also implied that early response to treatment (EVR) in statin and placebo were 70 and 75% and Sustain response to treatment (SVR) in statin and placebo was 95% in both groups. 
A pilot study of 31 HCV-infected veterans who were given fluvastatin 20-320 mg/day for 2-12 weeks with weekly monitoring of HCV-RNA and liver tests reported modest reductions of viral load.[20] Furthermore, a pharmacoepidemiologic study found that the use of lovastatin was associated with a 40-50% lower incidence of moderate as well as severe liver injury among patients with preexisting liver disease.[21] Even though prior individual studies examined important aspects of the association between statin use and lowering the severity of liver disease in HCV-infected patients, these studies either did not adjust for histological severity of liver disease, had generally short follow-up, or did not use placebo subjects. Clearly, more information is needed about the possible beneficial effect of statins in HCV-infected patients.[22] The results of other in vivo studies have been the same as our findings in this study. A study by O'Leary et al. (2007) on 10 samples showed that atorvastatin does not inhibit HCV-RNA replication in vivo.[17] Forde et al. in a cross-sectional study (2009) confirmed that statins don't have any apparent effect on HCV-RNA replication in vivo.[23] Unlike in vivo studies, in vitro research has shown that statins can decrease HCV-RNA replication.[3] Ikeda et al. evaluated the effect of statins on HCV-RNA replication in OR6 cells.[24] Fluvastatin exhibited the strongest anti-HCV activity, while atorvastatin and simvastatin showed moderate inhibitory effects. The combination of IFN and the statins exhibited strong inhibitory effects on HCV-RNA replication.[2526] There are some reasons for the differences between in vitro and in vivo studies. As noted before, various factors affect antiviral treatment response in the human body, such as obesity, insulin resistance and liver steatosis,[1516] so more studies are needed to examine the effect of statins on HCV in relation to confounding factors. On the other hand, statin concentrations in cell cultures are higher than their concentrations in plasma. Studies have shown that serum levels of such agents after prolonged therapy are significantly lower than the statin concentrations that were used in replicon systems.[27] In our study, the response rate of patients was higher than in western studies; there are similar reports of high response rates to combination therapy in Iranian patients, which may be due to lower viral loads in comparison to western patients.[2829] In summary, in our study statins did not have any significant effect on HCV viral load or virological response. Our main limitation was the small sample size, and further studies are necessary to examine the possible antiviral properties of statins and their potential role as adjuncts to standard HCV therapy.\n\nBODY.AUTHORS’ CONTRIBUTION:\nAll authors contributed to designing and conducting the study. Ash, AB, SS, SSh, and Nkh collected the data, and MM, MKh, MT, Ash, and BA did the analysis. All authors assisted in preparation of the first draft of the manuscript or revised it critically for important intellectual content. All authors have read and approved the content of the manuscript and are accountable for all aspects of the work.\n\n**Question:** Compared to placebo what was the result of 20 mg atorvastatin nightly for 3 months on the mean of HCV-RNA numbers at the end of treatment?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. 
Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
870
|
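The statin-trial methods in the record above compare groups with Student's t-test for continuous measures and chi-square for categorical ones (SPSS 16, α = 0.05). A small SciPy sketch of the same two tests follows; all numbers are invented placeholders, not the trial's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical viral-load values for 20 patients per arm (placeholders only).
viral_load_statin = rng.lognormal(mean=10.0, sigma=1.0, size=20)
viral_load_placebo = rng.lognormal(mean=10.0, sigma=1.0, size=20)

# Student's t-test on a continuous measure (here, HCV-RNA level).
t_stat, p_cont = stats.ttest_ind(viral_load_statin, viral_load_placebo)
print(f"t = {t_stat:.2f}, p = {p_cont:.3f}")        # compared against alpha = 0.05

# Chi-square test on a categorical outcome (e.g. EVR responder vs non-responder).
evr_table = np.array([[15, 5],                       # statin arm (illustrative counts)
                      [14, 6]])                      # placebo arm
chi2, p_cat, dof, expected = stats.chi2_contingency(evr_table)
print(f"chi2 = {chi2:.2f}, p = {p_cat:.3f}")
```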
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Short-term effects of physiotherapy combining repetitive facilitation\nexercises and orthotic treatment in chronic post-stroke patients\n\n ABSTRACT:\n[Purpose] This study investigated the short-term effects of a combination therapy\nconsisting of repetitive facilitative exercises and orthotic treatment. [Subjects and\nMethods] The subjects were chronic post-stroke patients (n=27; 24 males and 3 females;\n59.3 ± 12.4 years old; duration after onset: 35.7 ± 28.9 months) with limited mobility and\nmotor function. Each subject received combination therapy consisting of repetitive\nfacilitative exercises for the hemiplegic lower limb and gait training with an ankle-foot\northosis for 4 weeks. The Fugl-Meyer assessment of the lower extremity, the Stroke\nImpairment Assessment Set as a measure of motor performance, the Timed Up & Go test,\nand the 10-m walk test as a measure of functional ambulation were evaluated before and\nafter the combination therapy intervention. [Results] The findings of the Fugl-Meyer\nassessment, Stroke Impairment Assessment Set, Timed Up & Go test, and 10-m walk test\nsignificantly improved after the intervention. Moreover, the results of the 10-m walk test\nat a fast speed reached the minimal detectible change threshold (0.13 m/s). [Conclusion]\nShort-term physiotherapy combining repetitive facilitative exercises and orthotic\ntreatment may be more effective than the conventional neurofacilitation therapy, to\nimprove the lower-limb motor performance and functional ambulation of chronic post-stroke\npatients.\n\nBODY.INTRODUCTION:\nThe mobility of many stroke survivors is limited, and most identify walking as a top\npriority for rehabilitation1). One way to\nmanage ambulatory difficulties is with an ankle-foot orthosis (AFO) or a foot-drop splint,\nwhich aims to stabilize the foot and ankle while weight-bearing and lift the toes while\nstepping1). In stroke rehabilitation,\nvarious approaches, including robotic assistance, strength training, and\ntask-related/virtual reality techniques, have been shown to improve motor function2). The benefits of a high intensity stroke\nrehabilitation program are well established, and although no clear guidelines exist\nregarding the best levels of intensity in practice, the need for its incorporation into a\ntherapy program is widely acknowledged2). Repetitive facilitative exercises (RFE), which combine a high repetition rate and\nneurofacilitation, are a recently developed approach to rehabilitation of stroke-related\nlimb impairment2,3,4,5). In the RFE program, therapists use muscle spindle stretching and\nskin-generated reflexes to assist the patient's efforts to move an affected joint5). Previous studies have shown that an RFE\nprogram improved lower-limb motor performance (Brunnstrom Recovery Stage, foot tapping, and\nlower-limb strength) and the 10-m walk test in patients with brain damage3). 
An AFO is an assistive device to help stroke patients with hemiplegia walk and stand. A\nproperly prescribed AFO can improve gait performance and control abnormal kinematics arising\nfrom coordination deficits6). Gait training\nwith an AFO has been also reported to improve gait speed and balance in post-stroke\npatients7, 8). Therefore, we hypothesized that short-term physiotherapy combining RFE and orthotic\ntreatment would improve both lower-extremity motor performance and functional ambulation.\nThe present study aimed to confirm the efficacy of a combination therapy consisting of RFE\nfor the hemiplegic lower limb and gait training with AFO.\n\nBODY.SUBJECTS AND METHODS:\nThe subjects consisted of 27 inpatients (24 males and 3 females) diagnosed with cerebral\nhemorrhage (15 patients) or cerebral infarction (12 patients). The patients' average age was\n59.3 ± 12.4 years (33–73 years), the duration after onset was 35.7 ± 28.9 months (5–115\nmonths), and Brunnstrom Stage medians and quartiles of the hemiplegic lower limb were Stage\n4.0 and 4–4.5 (stage 3–6), respectively. Eleven patients had right hemiplegia, and 16\npatients had left hemiplegia. Among the 27 study participants, one used a rigid AFO with a\nmedial stainless steel upright9), two used\na posterior spring leaf10), and 24 used a\nhinged AFO10). The inclusion criteria were as follows: age, 30–80 years; hemiplegia of the lower limb\n(Brunnstrom stage 3–6); ability to walk without assistance using a T-cane and/or AFO;\ndiagnosis of hemiplegia due to stroke; morbidity period, 5 months or more; ability to\nunderstand the purpose of the study and follow instructions, and agreement to participate in\nthis study. The exclusion criteria were as follows: onset of stroke, <4 weeks previously;\nabnormal gait prior to the onset of stroke (such as joint disability or peripheral\nneuropathy); any medical condition that limited the study design (such as severe\ncardiopulmonary disease or severe sensory disturbance); severe aphasia and dementia that\nmade it impossible to follow verbal instructions; and lesions on both sides of the cerebral\nhemisphere. The procedures complied with the 1975 Declaration of Helsinki, as revised in 2013. The\nstudy was conducted after obtaining approval from the ethics committee of the Tarumizu Chuo\nHospital, and all participants provided written informed consent. The subjects were enrolled in a before-after study. Intervention was combination therapy\nconsisting of RFE for the hemiplegic lower limb and gait training with AFO. According to a\nprevious study, all subjects underwent an RFE program consisting of 7 specific exercise\npatterns3), which were used to elicit\nmovement of the hip, knee, and ankle in a manner designed to minimize synergistic movements.\nThis technique involved the use of rapid passive stretching of the muscles in conjunction\nwith tapping and rubbing the skin to assist in generating contractions of the targeted\nmuscles5). Exercises were performed as\ntwo sets of 50 repetitions with a 1–2 minute rest period in between sets5). In addition, all patients underwent gait\ntraining with a self-made AFO. This intervention was performed 40 minutes/day, 6 days/week for 4 weeks. Outcomes were\nmeasured before intervention and after 4 weeks of intervention. The outcome measures used to\nassess motor performance were the Fugl-Meyer Assessment of the lower extremity (FMA-LE)11) and the Stroke Impairment Assessment Set\n(SIAS)12). 
Functional ambulation was\nassessed with a Timed Up & Go Test (TUG)13) and a 10-m walk test (10MWT). To determine whether physiotherapy that combined RFE and orthotic treatment improved the\nlower-limb motor performance and functional ambulation, the Wilcoxon Signed-Rank Test was\nperformed, because the Shapiro-Wilk's test showed that the data were not normally\ndistributed. The analysis was performed with the statistical analysis program SPSS\nStatistics for Windows version 22.0 (IBM Corporation, Armonk, NY, USA) with a significance\nlevel of α=0.05.\n\nBODY.RESULTS:\nTable 1Table 1.Lower-limb motor performance and functional ambulation at the baseline and after\nthe combining trainingOutcome measurementsBaselineAfter the trainingDifference mean ± SDFMA-LE22.96 ± 4.0725.85 ± 4.032.89 ± 2.99**SIAS46.59 ± 8.3453.63 ± 7.637.04 ± 4.60**TUG (sec)17.35 ± 5.5714.02 ± 4.46−3.33 ± 3.52**10MWTCGS (m/sec)0.68 ± 0.220.81 ± 0.240.12 ± 0.09**FGS (m/sec)0.80 ± 0.280.96 ± 0.310.16 ± 0.16****Significant difference p<0.01. SD: standard deviation; FMA-LE: the Fugl-Meyer\nAssessment of the lower extremity; SIAS: Stroke Impairment Assessment Set; TUG: Timed\n\"Up & Go\" test; 10MWT: 10-m walk test; CGS: comfortable gait speed; FGS: fast gait\nspeed shows the changes in FMA-LE, SIAS, TUG, and 10MWT (comfortable gait speed and\nfast gait speed). In terms of lower-limb motor performance, FMA-LE increased significantly\nfrom 22.96 ± 4.07 to 25.85 ± 4.03 (p<0.01), and SIAS increased significantly from 46.59 ±\n8.34 to 53.63 ± 7.63 (p<0.01). In terms of functional ambulation, TUG decreased\nsignificantly from 17.35 ± 5.57 seconds to 14.02 ± 4.46 seconds (p<0.01), comfortable\ngait speed increased significantly from 0.68 ± 0.22 (m/sec) to 0.81 ± 0.24 (m/sec)\n(p<0.01), and fast gait speed increased significantly from 0.80 ± 0.28 (m/sec) to 0.96 ±\n0.31 (m/sec) (p<0.01).\n\nBODY.DISCUSSION:\nIn this study, short-term combination therapy consisting of RFE and orthotic treatment was\nconducted to improve the lower-limb motor performance and functional ambulation of chronic\npost-stroke patients. There were statistically significant improvements in FMA-LE, SIAS,\nTUG, and 10MWT after the intervention. Furthermore, the results of the 10MWT at a fast gait\nspeed reached the minimal detectible change threshold (0.13 m/s)14). Recently, some systemic reviews of AFO have reported that gait training with AFO can\nimprove walking ability and balance in people with stroke1, 15). However, few studies\nhave focused on the correlation between AFO and motor performance changes of the lower limb\nin post-stroke patients16,17,18). Changes seen in\nthis study were more marked in the lower-limb motor performance, with a substantial\nclinically meaningful change in fast walking speed (0.13 m/s) being achieved by all\nparticipants who completed the study protocol. The results of this study show that\nshort-term combination therapy consisting of RFE and gait training with AFO may enhance\nlower-limb motor function, thereby improving walking ability in patients with chronic\nstroke, which is beneficial for comprehensive stroke treatment. Several studies suggested that RFE might promote functional recovery of hemiplegia and\nactivities of daily living to a greater extent than conventional neurofacilitation therapy,\nusing a randomized controlled design2,3,4,5, 19, 20). 
Especially, RFE with other interventions\n(i.e., neuromuscular electrical stimulation, direct application of vibratory stimulation\nrepetitive transcranial magnetic stimulation, and pharmacological treatments) may be more\neffective than RFE only for the recovery of limb motor performance. In the present study,\npatients who received RFE with gait training with AFO showed significant functional and\nambulatory improvements. This study had some limitations because it involved a small number of subjects, and its\nintervention period of four weeks was short. In addition, this was not a randomized\ncontrolled trial, and the efficacy of RFE with orthotic treatment could not be compared;\ntherefore, it is difficult to generalize its results. Further, this study could not exclude\nobserver bias and subject bias because the same staff implemented assessment and\ntraining. In conclusion, the differences in FMA-LE, SIAS, TUG, and 10MWT demonstrated that\napplication of a short-term combination therapy consisting of RFE and orthotic treatment has\nbeneficial therapeutic effects on improving functional ambulation and motor performance of\nthe lower limb in chronic post-stroke patients.\n\nBODY.DISCUSSION.CONFLICT OF INTEREST:\nThe authors have no conflicts of interest to declare.\n\n**Question:** Compared to before repetitive facilitative exercises and orthotic treatment. what was the result of after 4 weeks of repetitive facilitative exercises and orthotic treatment. on comfortable gait speed and fast gait speed ?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
872
|
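The RFE study in the record above checks normality with the Shapiro–Wilk test and then compares pre- and post-intervention scores with the Wilcoxon signed-rank test at α = 0.05. A brief SciPy sketch of that sequence is shown below; the simulated gait-speed values are placeholders shaped only loosely on the reported summary statistics, not the trial's measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Placeholder paired data for n = 27 participants, loosely based on the
# reported comfortable gait speed (m/s); these are not the study's data.
baseline = rng.normal(0.68, 0.22, size=27)
follow_up = baseline + rng.normal(0.12, 0.09, size=27)

differences = follow_up - baseline
w_stat, p_normality = stats.shapiro(differences)       # normality of the paired differences
w_signed, p_wilcoxon = stats.wilcoxon(follow_up, baseline)

print(f"Shapiro-Wilk p = {p_normality:.3f}")
print(f"Wilcoxon signed-rank p = {p_wilcoxon:.3f}")     # significant if p < 0.05
```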
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Expanding access to parasite-based malaria diagnosis through retail drug shops in Tanzania: evidence from a randomized trial and implications for treatment\n\n ABSTRACT.BACKGROUND:\nTanzania has seen a reduction in the fraction of fevers caused by malaria, likely due in part to scale-up of control measures. While national guidelines require parasite-based diagnosis prior to treatment, it is estimated that more than half of suspected malaria treatment-seeking in Tanzania initiates in the private retail sector, where diagnosis by malaria rapid diagnostic test (RDT) or microscopy is illegal. This pilot study investigated whether the introduction of RDTs into Accredited Drug Dispensing Outlets (ADDOs) under realistic market conditions would improve case management practices.\n\nABSTRACT.METHODS:\nDispensers from ADDOs in two intervention districts in Tanzania were trained to stock and perform RDTs and monitored quarterly. Each district was assigned a different recommended retail price to evaluate the need for a subsidy. Malaria RDT and artemisinin-based combination therapy (ACT) uptake and availability were measured pre-intervention and 1 year post-intervention through structured surveys of ADDO owners and exiting customers in both intervention districts and one contiguous control district. Descriptive analysis and logistic regression were used to compare the three districts and identify predictive variables for testing.\n\nABSTRACT.RESULTS AND DISCUSSION:\nA total of 310 dispensers from 262 ADDOs were trained to stock and perform RDTs. RDT availability in intervention ADDOs increased from 1% (n = 172) to 73% (n = 163) during the study; ACT medicines were available in 75% of 260 pre-intervention and 68% of 254 post-intervention ADDOs. Pre-treatment testing performed within the ADDO increased from 0 to 65% of suspected malaria patients who visited a shop (95% CI 60.8–69.6%) with no difference between intervention districts. Overall parasite-based diagnosis increased from 19 to 74% in intervention districts and from 3 to 18% in the control district. Prior knowledge of RDT availability (aOR = 1.9, p = 0.03) and RDT experience (aOR = 1.9, p = 0.01) were predictors for testing. Adherence data indicated that 75% of malaria positives received ACT, while 3% of negatives received ACT.\n\nABSTRACT.CONCLUSIONS:\nTrained and supervised ADDO dispensers in rural Tanzania performed and sold RDTs under real market conditions to two-thirds of suspected malaria patients during this one-year pilot. These results support the hypothesis that introducing RDTs into regulated private retail sector settings can improve malaria testing and treatment practices without an RDT subsidy. 
\nTrial registration ISRCTN ISRCTN14115509\n\nBODY.BACKGROUND:\nMalaria prevalence among children under 5 years old in Tanzania declined from 18% in 2007 to 9% in 2011, whereas 2-week fever prevalence remained approximately 20% over the same time period, from 19 to 20% [1, 2]. This reduction in the proportion of fevers caused by malaria is likely due at least in part to Tanzania's recent scale-up of prevention and treatment measures [3–5]. Presumptive treatment of fevers with artemisinin-based combination therapy (ACT) thus will increasingly result in incorrect malaria diagnoses, prescription and wastage of inappropriate medications, and subsequent delays in obtaining effective treatment for the true cause of illness [6–9]. The Tanzania National Malaria Control Programme (NMCP) case management guidelines require suspected malaria cases to receive parasite-based diagnosis by microscopy or rapid diagnostic test (RDT) prior to treatment with anti-malarial drugs [10, 11]. In practice, however, up to 54% of those seeking treatment for suspected malaria in Tanzania first visit the private retail sector, where malaria diagnostics are currently prohibited from being sold and administered [12]. As a result, parasite-based testing is not received by many suspected malaria patients. The Tanzanian private retail sector includes both unregistered outlets and a network of more than 6000 registered shops or Accredited Drug Dispensing Outlets (ADDOs) (duka la dawa muhimu or DLDM in Kiswahili) regulated by the Pharmacy Council. Unlike unregistered outlets, ADDOs are permitted to stock and sell both over-the-counter medicines and certain classes of prescription medications, including ACT [13–15]. To obtain a Pharmacy Council permit each year, ADDOs owners must meet certain conditions related to the premises, training and certification of a dispenser who may or may not be the owner, and the products stocked. Like all other private retail outlets, RDTs may not be sold or performed at ADDOs, however. Malaria RDTs have been safely administered by non-medical personnel in several previous settings [16–18]. The introduction of RDTs into ADDOs has the potential to improve fever case management by increasing availability, access, and use of parasite-based diagnosis. It is unclear, however, whether customers will be willing to pay the extra cost for diagnosis, whether ADDO owners would encourage RDT use, whether RDTs might require subsidization to encourage uptake in the private sector, and whether treatment choices would adhere to test results. To investigate these questions, this pilot evaluated the operational feasibility of selling RDTs through ADDOs and measured changes in suspected malaria patient case management that occurred as a result of making RDTs available at two different prices.\n\nBODY.METHODS.STUDY AREA AND POPULATION:\nThis pilot was conceived and designed in partnership with the NMCP and Pharmacy Council, with a primary objective of informing national policymaking on legalizing the stocking and performance of RDTs through ADDOs. The study was conducted in three districts in Morogoro Region: Kilombero, Kilosa, and Mvomero. These three districts were selected due to their high density of ADDOs, moderate Plasmodium falciparum prevalence compared with national data (13% in Morogoro Region [2]), and convenient proximity to Dar es Salaam. The three districts are mostly rural, with a total estimated population of 1.7 million people in 2012 [19]. 
Peak malaria incidence corresponds to the two rainy seasons, one between March and May and the other between September and December. Kilombero and Kilosa were assigned via random number generator as the two intervention areas where ADDO dispensers were trained to stock, sell, and administer RDTs, and Mvomero served as the control area (Fig. 1).Fig. 1Map of the study area\n\n\nBODY.METHODS.INTERVENTION DESIGN:\nAll licensed ADDO dispensers currently working in an ADDO in the two intervention areas were invited to participate in the study through the District Malaria Focal Person, who is responsible for overseeing local malaria-related activities under the leadership of the District Medical Officer. In April–May 2013, six two-day trainings were held in each intervention district, each led by a national-level trainer and the district malaria focal person. Trainings covered signs and symptoms of uncomplicated and severe malaria, stocking, use, and disposal of RDTs, and appropriate case management based on RDT results. Dispensers were instructed to prescribe Tanzania's first-line treatment, artemether + lumefantrine (commonly referred to as ALu in Tanzania), to test-positive customers based on an ALu dosing reference chart provided to the dispensers. Dispensers were trained to refer customers with signs and symptoms of severe illness, suspected malaria patients who tested negative for malaria, and suspected malaria patients whose illness did not improve within 48 h to the nearest public health facility along with the results of their malaria test. Upon passing both practical and written evaluations, certified RDT dispensers were provided with an ID badge granting permission to perform RDTs in their ADDO, a sharps box, an RDT performance and case management job aid, and a weather-proof storefront sign advertising that malaria testing was available. Permission was also granted for the ADDO dispenser to prescribe an ALu to all patients who tested positive for malaria. To mimic real market conditions, existing supply chains were utilized: ADDOs in the two districts were instructed to purchase ParaHIT® Ag Pf RDTs from seven wholesalers. The wholesalers, in turn, were instructed to purchase RDTs from an importer in Dar es Salaam, who purchased the RDTs directly from the manufacturer for a fixed price pre-negotiated by the research team. Importers, wholesalers and ADDOs all agreed to fixed mark ups. ADDOs in Kilosa were asked to sell RDTs for a recommended retail price (RRP) of 1100 Tanzanian Shillings (USD $0.67 in May 2013) each, based on willingness-to-pay responses from pre-intervention exit interviews, analysis of markups in analogous commodities, and price negotiation. In Kilombero, ADDOs were asked to sell RDTs for 50% less or 500 Tanzanian Shillings (USD $0.32 in May 2013) [20]. To enable the ADDOs in Kilombero to charge the lower price, the research team asked the importer to sell RDTs to wholesalers in Kilombero at a 50% discount. On a monthly basis, the importer received financial compensation from the research team equal to the total discount provided to wholesalers located in Kilombero. The research team monitored stock levels at the importer level to prevent stock outs. Gloves were included in the RDT boxes. A unique blue-and-white \"mRDT\" checkmark logo was placed on ParaHIT® boxes, ID badges, job aids, and storefront signs in order to build brand recognition. Trained study staff provided supportive supervision during quarterly monitoring visits. 
At each visit, the certified dispenser at each participating ADDO was observed performing an RDT on a patient and evaluated according to a 17-point checklist adapted from the WHO \"Checklist for direct observation of health workers performing Rapid Diagnostic Tests (RDT) for malaria\" [21]. The supervisors also reported on stocking safety, shop hygiene, and waste disposal practices [22].\n\nBODY.METHODS.STUDY DESIGN:\nThe impact of the intervention was assessed by comparing the change in availability and use of RDTs at ADDOs in the intervention districts with those in the control district, Mvomero, where ADDOs were not given access to the RDTs, training, or supervision.\n\nBODY.METHODS.STUDY DESIGN.OUTLET SURVEYS:\nRDT and ACT availability were measured through two cross-sectional ADDO surveys, one conducted prior to the dispenser training in March 2013 and the second a year after the 2013 training, in May 2014. All ADDOs were eligible for selection for the pre-intervention survey, while only ADDOs with certified dispensers were eligible to participate in the post-intervention survey of the intervention districts. Ninety-one ADDOs per district were selected using random number generation based on sample-size calculations to ensure 80% power. The sample size is assumed to be sufficient to detect a five percentage-point difference in ADDO RDT availability. A Kiswahili or English-version structured questionnaire was used during the face-to-face interviews with ADDO dispensers depending on the language preference of the respondent. The primary outcome was RDT availability, defined as the proportion of ADDOs with RDTs in stock on the day of the survey and a trained, certified dispenser present to administer the test. A secondary outcome was ACT availability, defined as the proportion of ADDOs that reported having ACT medicines in stock for the 30 days prior to the survey.\n\nBODY.METHODS.STUDY DESIGN.CUSTOMER EXIT INTERVIEWS:\nFace-to-face exit interviews were conducted prior to the dispenser training (March 2013) and a year after the intervention began (May 2014) using a structured questionnaire in Kiswahili. Customers eligible for the exit interview were at least 18 years old and either seeking treatment in the ADDO for fever or suspected malaria or attempting to purchase an anti-malarial for themselves or someone else (the \"patient\"). A sample size of 400 eligible customers per district was estimated based on two-tailed sample size calculations designed to detect a 5%-point difference in RDT availability or 80% power. Pre-intervention, 1–3 customers were interviewed at each sampled ADDO. The sample size was halved for the pre-intervention survey based on very low predicted availability of RDTs and a primary goal of comparing similarity between districts. Post-intervention, enumerators attempted to survey all eligible customers during one full day at the ADDO in order to self-weight traffic variation between shops. If the customer and the patient were not the same person, demographic data were collected on both the customer and the patient, and illness data were collected about the patient. 
Customers were asked whether patients (either themselves or others) had received diagnostic testing at the ADDO, had been previously tested elsewhere, how much they had paid for RDTs at the ADDO, and whether they had purchased anti-malarial treatment.\n\nBODY.METHODS.STUDY DESIGN.STATISTICAL ANALYSIS:\nChi square tests were used to compare differences before and after intervention and between intervention and control districts in (a) the proportion of patients seeking treatment at ADDOs who received a parasite-based diagnostic test, (b) the median price paid for an RDT in an ADDO, (c) the proportion of patients who received a parasite-based diagnosis, (d) adherence to test results in terms of receiving ACT when testing positive and not receiving one when testing negative, and (e) the proportion of test-negative patients who received an antibiotic. Logistic regression models were then used to identify factors associated with receiving an RDT in an ADDO or receiving a parasite-based diagnosis anywhere, with shop included as a random effect to account for repeated sampling of visits to the same ADDOs. Tested covariates included study district and survey, demographic variables (customer and patient gender, age, completed education, and wealth quintile) as well as RDT knowledge and practice indicators regarding previous experience and attitudes about testing. Wealth quintiles were derived from a wealth index based on a set of household asset questions estimated using principal components analysis using similar methodology to Demographic and Health Surveys [23]. Covariates with p < 0.2 in bivariable models were included in multivariable models. Logistic regression models were also used to compare odds of treatment outcomes controlling for study district, survey, parasite-based diagnosis, and test result.\n\nBODY.RESULTS:\nNinety-five percent of ADDO dispensers passed the 2-day training. In total, 310 dispensers from 262 ADDOs, 164 (147 ADDOs) from Kilombero in the lower-priced or \"subsidized\" district and 146 (115 ADDOs) from Kilosa the higher priced or \"unsubsidized\" district, were certified and allowed to sell and perform RDTs. The number of outlet surveys and exit interviews completed are presented in Table 1.Table 1Total shop and exit interviews conducted and included in analysisDistrictPre-intervention (March 2013)Post-intervention (May 2014)ADDOs surveyedADDOs with ACT in Stocka\nTotal Exit InterviewsInterviews where patient present and ACT in Stockb\nADDOs surveyedADDOs with ACT in Stocka\nTotal Exit InterviewsInterviews where patient present and ACT in Stockb\nSubsidized (Kilombero)875991547652266217Unsubsidized (Kilosa)856682468763330247Control (Mvomero)886986599157359244Total260194259159254172955708\na'ACT in stock' defined as no stock-outs in the past 30 days\nbOnly customer interviews from shops with ACT medicines in stock and the patient present were included in analysis (i.e., the patient had the opportunity to be tested and purchase an ACT medicine)\n\n\nBODY.RESULTS.DIAGNOSTIC AND TREATMENT AVAILABILITY IN ADDOS:\nPre-intervention outlet surveys found that RDTs were not available in any surveyed ADDOs. Post-intervention, 73% of ADDOs in intervention districts had RDTs in stock and an RDT-certified dispenser present. The proportion of ADDOs with a certified dispenser and RDTs available was higher in the \"unsubsidized\" district Kilosa (85%, 95% CI 78–92%) than the \"subsidized\" district Kilombero (64%, 95% CI 55–72%, p < 0.001). 
Across the study districts, the proportion of ADDOs with ACTs available was 75% pre-intervention and 68% post-intervention. ACT availability did not differ significantly between districts or pre- or post-intervention (p > 0.05 for all).\n\nBODY.RESULTS.CUSTOMER AND PATIENT CHARACTERISTICS:\nA total of 1214 customers seeking treatment for suspected malaria, either for themselves or someone else, were interviewed: 259 pre-intervention and 955 post-intervention (Table 1). Demographic characteristics of customers were generally similar between districts during each survey (Table 2). The proportion of exit interviews in which the patient was present at the ADDO was significantly higher in intervention districts post-intervention. ADDOs were the first place treatment was sought for 75% of patient visits across surveys and districts (Table 3).Table 2Characteristics of anti-malarial customersPre-interventionPost-interventionSubsidized (n = 91)Unsubsidized (n = 82)Control (n = 86)\nχ\n2 p valueSubsidized (n = 266)Unsubsidized (n = 330)Control (n = 359)\nχ\n2 p valueCustomer age (median, range)30 (18–67)34 (18–68)33 (18–83)30 (6–79)32 (12–92)30 (14–82)Male (%, 95% CI)44.0 (33.7–54.3)47.6 (36.6–58.5)52.3 (41.7–63.0)0.53647.4 (41.3–53.4)49.4 (44.0–54.8)51.5 (46.3–56.7)0.585Completed > Primary school (%, 95% CI)15.4 (7.9–22.9)24.4 (15.0–33.8)20.9 (12.2–29.6)0.32721.1 (16.1–26.0)13.9 (10.2–17.7)21.2 (16.9–25.4)0.025Wealth quintile (%, 95% CI) Lowest14.3 (7.0–21.5)30.5 (20.4–40.6)17.4 (9.3–25.5)0.33717.7 (13.1–22.3)18.2 (14.1–22.4)24.8 (20.3–29.3)0.248 Lower20.9 (12.4–29.3)18.3 (9.8–26.8)20.9 (12.2–29.6)21.8 (16.8–26.8)21.9 (17.4–26.4)18.1 (14.1–22.1) Middle23.1 (14.3–31.8)13.4 (6.0–20.9)22.1 (13.2–31.0)22.9 (17.9–28.0)18.2 (14.1–22.4)20.6 (16.4–24.8) Higher20.9 (12.4–29.3) 20.7 (11.9–29.6)20.9 (12.2–29.6) 20.3 (15.5–25.1)21.3 (16.8–25.7)17.5 (13.6–21.5) Highest20.9 (12.4–29.3)17.1 (8.8–25.3)18.6 (10.3–26.9)17.3 (12.7–21.9)20.4 (16.0–24.7)18.9 (14.9–23.0)Patient is customer or is with customer (present at ADDO) (%, 95% CI)59.3 (49.1–69.5)59.1 (45.2–67.0)68.6 (58.7–78.5)0.22181.6 (76.9–86.3)74.8 (70.2–79.5)68.0 (63.1–72.8)0.001\nTable 3Characteristics of patients present at the ADDOPre-interventionPost-interventionSubsidized (n = 54)Unsubsidized (n = 46)Control (n = 59)\nχ\n2 p valueSubsidized (n = 217)Unsubsidized (n = 247)Control (n = 244)\nχ\n2 p valuePatient age (median, range)24.5 (1–66)22 (<1–62)27 (<1–83)23 (<1–79)22 (1–92)24 (1–74)Male (%, 95% CI)48.1 (34.6–61.7)47.8 (33.1–62.5)50.8 (37.8–63.8)0.94147.9 (41.2–54.6)53.0 (46.8–59.3)50.8 (44.5–57.1)0.546Days Ill (median, range)3 (<1, 14)3 (<1, 30)3 (<1, 14)2 (<1, 60)3 (<1, 30)2 (<1, 60)ADDO first place treatment sought (%, 95% CI)70.4 (58.0–82.8)65.2 (51.2–79.2)80.0 (69.2–90.1)0.24077.4 (71.8–83.0)74.8 (69.4–80.2)75.4 (70.0–80.8)0.792Malaria test prior to ADDO Visit (%, 95% CI)16.7 (6.6–26.8)21.7 (9.6–33.9)3.4 (0.0–8.1)0.01511.1 (6.8–15.4)13.4 (9.0–17.8)14.9 (10.2–19.7)0.502\n\n\nBODY.RESULTS.DIAGNOSTIC UPTAKE:\nPre-intervention, no patients present in the ADDOs reported receiving a parasite-based test in an ADDO in any district. Post-intervention, 65% of patients present at ADDOs in the intervention districts (95% CI 60.8–69.6%) and 3% in the control district received an RDT (Fig. 2a). There was no difference in the fraction of eligible patients who were tested between the subsidized (66%, 95% CI 59–72%) and unsubsidized districts (67%, 95% CI 58–71%), (χ\n2 < 0.1, p = 0.8).Fig. 
2\na Proportion of patients present at the ADDO who received a parasite-based diagnosis before and after intervention (p < 0.001 for changes in intervention districts; p = 0.159 for the control district). b Proportion of patients present in the ADDO reporting receiving parasite-based diagnostics at any location (p < 0.05 for pre- to post-intervention changes within each district)\n Overall diagnostic uptake (in ADDOs or elsewhere prior to the ADDO visit) improved significantly in the intervention and control groups during the study (Fig. 2b). The proportion of patients present at the ADDO who received a diagnostic test increased from 19% pre-intervention to 74% post-intervention in the intervention districts and from 3 to 18% in the control district (p < 0.01 for both groups). The relative changes in testing were not significantly different between the intervention and control districts (OR = 2.5, p = 0.271). Median prices for RDTs post-intervention reported by customers were 1100 Tanzanian Shillings in Kilosa, the unsubsidized district, and 500 Tanzanian Shillings in Kilombero, the subsidized district. There was no variance in median price from the recommended retail price in either district. No difference in RDT uptake was observed between the unsubsidized and subsidized districts (OR = 1.1, p = 0.868). Patient knowledge and practices were associated with RDT purchasing (Table 4). The odds of purchasing an RDT were significantly higher for patients who had previous experience taking an RDT or were already aware that RDTs were available before coming to the ADDO, while patients who had sought treatment elsewhere before visiting the ADDO were significantly less likely to purchase an RDT.Table 4Factors associated with purchasing an RDT in the ADDO when the patient was presentOutcome: receiving an RDT in an ADDO if present to be tested (n = 464)BivariableMultivariableOR95% CIp valueOR95% CIp valueIntervention No subsidyReference Subsidy1.1(0.6–1.9)0.868Customer gender (if not patient) FemaleReference Male1.2(0.5–2.7)0.654Patient gender FemaleReference Male1.0(0.7–1.4)0.920Customer age (if not patient)1.0(1.0–1.0)0.198Patient age <5 yearsReference 5− <14 years1.5(0.7–3.5)0.332 14+ years0.7(0.3–1.3)0.254Customer education PrimaryReference Secondary and above1.0(0.6–1.6)0.909Wealth index LowestReference Lower1.0(0.5–2.0)0.925 Middle0.9(0.4–1.8)0.723 Higher1.2(0.6–2.5)0.559 Highest1.0(0.5–2.2)0.943Has heard of RDTs3.1(2.0–4.8)<0.0011.7(0.9–3.1)0.088Has taken an RDT before3.1(2.1–4.7)<0.0011.9(1.2–3.0)0.005Sought treatment prior to ADDO visit0.3(0.2–0.5)<0.0010.3(0.2–0.5)<0.001Knew before coming that testing was available in ADDOs3.0(2.0–4.7)<0.0011.9(1.1–3.3)0.032Believes febrile patients should be blood tested before treating NeverReference Sometimes1.1(0.1–8.9)0.925 Always2.0(0.3–13.9)0.474\n\n\nBODY.RESULTS.TREATMENT DECISIONS:\nThe overall proportion of patients who received an ACT increased from 40 to 43% in the control district and decreased from 40 to 32% in the intervention districts, although these changes were not significantly different after controlling for other factors (OR = 0.6, p = 0.294). Forty-one percent of patients who received an RDT tested positive for malaria across the intervention districts (95% CI 35.7–46.8%). Among patients with a positive test, 90% (95% CI 81.9–94.9%) received an anti-malarial, and 75% (95% CI 65.5–83.5%) received ACT. Among customers testing negative, 7% (95% CI 3.6–12.3%) received an anti-malarial (Fig. 3a). 
There was no significant difference in the proportion of those who did not receive an RDT who purchased an anti-malarial (35% in intervention districts, 41% in control) (Fig. 3b).Fig. 3\na Medications purchased by patients tested in ADDOs (intervention districts combined). b Medications purchased by untested ADDO patients. Percentages are not mutually exclusive; patients may have received more than one treatment type\n The proportion of patients who received an antibiotic did not change significantly comparing intervention and control groups from pre- to post-intervention (OR = 0.9, p = 0.837). Post-intervention, the proportion of patients purchasing antibiotics also did not vary significantly comparing those who tested positive with those who tested negative (10% vs 9%, p = 0.73) (Fig. 3a).\n\nBODY.DISCUSSION:\nThe results from this study suggest that ADDOs offer an important opportunity for improving malaria case management in Tanzania towards the national goal of 80% parasite-based diagnosis of suspected malaria patients. Two-thirds of suspected malaria patients who sought treatment at ADDOs with RDT-certified dispensers purchased RDTs. Improved testing rates through ADDOs in this pilot project resulted in improved targeting of ACTs to patients with malaria, with 90% of malaria-positive and 93% of malaria-negative ADDO-tested patients making appropriate treatment decisions according to their test results. These results add to a growing body of evidence that malaria case management can be performed well outside of the formal public health sector. A pilot in Ghana that measured vendor-reported adherence to test results after the introduction of RDTs to 28 Licensed Chemical Sellers (private retail sector shops) found 89% of positives and 3% of negatives received ACT [24]. Vendors of 59 licensed drug shops in Uganda also reported high uptake (98%) and favorable adherence to test results (ALu was prescribed to 98% of positives and 1% of negatives) in a small-scale pilot study [25]. The higher uptake observed in these studies compared with this one may be attributable to self-reporting by drug shop vendors compared with customer interviews, a lower $0.20 price point (Uganda) compared to $0.31–0.67 in Tanzania, and/or variance in length of training and frequency of supervision. Not all studies have found such positive outcomes, however: a different study in Uganda with 92 shops selling RDTs that measured adherence through household surveys found only 32% of positives purchased an ACT medicine (7% of negatives) at a median $0.40 per test [16]. This pilot was successful in improving case management practices for several reasons. First, ADDOs are a highly utilized source for anti-malarials in Tanzania: ADDOs were the first place treatment was sought for 75% of interviewed patients, emphasizing the need for access to parasite-based diagnosis at this point of care. Second, nearly all ADDOs in the study area participated in the trainings and RDT program. ADDOs were kept engaged in the study through quarterly supervision and monitoring visits, which also allowed for frequent quality assessments on safety protocol. Third, proximity of the region to Dar es Salaam and relatively good road infrastructure allowed for existing supply chains to be utilized, creating a more realistic market scenario. Finally, negotiations with RDT suppliers and agreements with wholesalers and ADDOs on mark ups, ensured affordable recommended retail prices and enabled comparison of price points. 
These results provided data to help guide several open questions surrounding the introduction of RDTs to the private retail market. A similar proportion of patients were willing to pay the higher 'unsubsidized' price of 1100 Tanzanian Shillings or USD $0.67 for an RDT compared to the lower price of 500 Tanzanian Shillings or USD $0.32 for the subsidized RDT, indicating that a subsidy may not be necessary to encourage testing uptake. Additionally, similar to the Ghana pilot [24], an increase in the purchase of antibiotics among those receiving a negative test result was not observed in this study. The study was also subject to several limitations. A significant pre-intervention difference in parasite-based diagnostic rates between intervention (19%) and control (3%) districts suggests inherent differences in health facility testing uptake that may confound the effect of the intervention. A second limitation was the unforeseen mobility of ADDO dispensers, who periodically changed jobs to work in different ADDOs during the study period. While dispensers were certified in this study, it was unclear if the certification extended to the ADDO when the dispenser was not present. Dispenser movement complicated follow-up and suggests future consideration for specifying that dispensers and ADDOs be certified, and requiring ADDOs that sell RDTs to have a certified dispenser present. Additional follow-up challenges arose when ADDOs closed temporarily or permanently, or were not reachable for stock delivery or enumeration due to flooded roads and bridges. Also, although dispensers were trained to refer severe and negative cases to the nearest health facility, this study was not resourced to follow the outcomes of these patients. In practice, it is unlikely that all negative cases will complete the referral pathway, underscoring the importance of including diagnosis and treatment of non-malaria febrile illness and recognition of danger signs in ADDO dispenser training. Lastly, the post-intervention survey sampled from ADDOs that participated in the training in the intervention areas, and while these were the majority of the ADDOs in these districts, they are not representative of all ADDOs in the study area. As Tanzania uses the results of this study to inform policy decisions, it will be important to consider how results might vary in different endemic settings and in more urban areas. Expansion of this program will also require evaluation of how best to ensure long-term routine supervision and training, as well as waste management for sharps and cassettes. Finally, a critical challenge to scale-up will be integration of the private retail sector into the routine surveillance system for capturing malaria case data. This information will be imperative for strategic decision-making as malaria prevalence declines across the country.\n\nBODY.CONCLUSIONS:\nThis pilot study contributes to a growing evidence base that introducing RDTs to the private retail sector in low-resource settings can increase parasite-based diagnostic rates for malaria and adherence to test results when dispensers are trained and supervised. The pilot study also showed that RDTs can be introduced under real market conditions, utilizing existing supply chains without subsidizing the cost of the RDT. 
While the intervention piloted here was limited to a rural setting in Tanzania's specialized ADDO network, it provides a basis for policy decisions on scaling up RDT access across Tanzania to increase parasite-based diagnosis and rational use of ACT through heavily used existing channels. The results presented here support the goals for Tanzania's National Strategic Plan and may have applications in similar settings.\n\n**Question:** Compared to the period before the trainings (which covered signs and symptoms of uncomplicated and severe malaria; stocking, use, and disposal of RDTs; and appropriate case management based on RDT results), what was the effect of the period after the trainings on the proportion of patients present at the ADDO who received a diagnostic test?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
874
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Potential benefit of bosentan therapy in borderline or less severe pulmonary hypertension secondary to idiopathic pulmonary fibrosis—an interim analysis of results from a prospective, single-center, randomized, parallel-group study\n\n ABSTRACT.BACKGROUND:\nNo drugs have been approved for the treatment of patients with pulmonary hypertension (PH) secondary to idiopathic pulmonary fibrosis (IPF), particularly those with idiopathic honeycomb lung. This study was conducted to investigate the long-term efficacy and safety of bosentan for PH based on changes in prognosis and respiratory failure.\n\nABSTRACT.METHODS:\nIPF patients with borderline or less severe PH and completely organized honeycomb lung were randomized (1:1) to bosentan or no treatment for PH for 2 years and assessed at baseline and every 6 months for respiratory failure, activities of daily living (ADL), lung and heart functions by right cardiac catheterization, and other parameters. An interim analysis was performed, however, following detection of a significant survival benefit favoring bosentan therapy.\n\nABSTRACT.RESULTS:\nSignificant differences were noted for the bosentan-treated (n = 12) vs. untreated (n = 12) groups in hospital-free survival (603.44 ± 50.074 days vs. 358.87 ± 68.65 days; hazard ratio [HR], 0.19; P = 0.017) and overall survival (671 days vs. 433.78 ± 66.98 days; HR, 0.10; P = 0.0082). Again, significant improvements were noted for the bosentan-treated group from baseline to month 6 or 12 in several indices in ADL, pulmonary circulation, and %DLCO. Without requiring O2 inhalation, bosentan was associated with no increase but a trend toward a decrease in adverse events and an improvement in respiratory status.\n\nABSTRACT.CONCLUSIONS:\nBosentan tended to improve prognosis and ADL without worsening respiratory failure in IPF patients with borderline or less severe PH and completely organized honeycomb lung alone.\n\nABSTRACT.TRIAL REGISTRATION:\nThis study was registered on December 18, 2010 with UMIN-CTR Clinical Trial as UMIN000004749 to investigate the long-term influence of bosentan on cardiac function, as well as its cardioprotective efficacy and safety, in patients with pulmonary hypertension secondary to concurrent COPD and IPF, respectively.\n\nABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL:\nThe online version of this article (doi: 10.1186/s12890-017-0523-2) contains supplementary material, which is available to authorized users.\n\nBODY.BACKGROUND:\nIdiopathic pulmonary fibrosis (IPF) is a disorder associated with poor prognosis. Pulmonary arterial hypertension (PAH), which is likely to lead to right heart overload, is also associated with poor prognosis. Patients with IPF are at risk of developing pulmonary hypertension (PH) as the underlying condition worsens or becomes severer, thus further compromising their prognosis [1–8]. 
However, it remains unclear how long it may typically take for patients with IPF to start developing PH or for its associated influence on cardiac function to become manifest [1]. Given that, once elevated, pulmonary arterial pressure (PAP) becomes irreversible because of established vascular remodeling [1, 9], it appears that efficacious treatment should be started before its onset. Furthermore, it is assumed that cardiac overload starts even before the onset of PH. However, to date, no drugs have been approved for the treatment of PH secondary to respiratory diseases, such as IPF [1]. Drugs specific for PAH, such as bosentan (Tracleer Tablets®), have been reported in some studies to effectively improve PH in patients with respiratory diseases, such as COPD and IPF [1, 10–13]. While the results of randomized controlled studies conducted to date, such as BUILD-1 and BUILD-3 studies [11], appear to argue against the use of bosentan in patients with pulmonary hypertension (PH), these studies involved a wide range of patients from those with fibrotic idiopathic interstitial pneumonia (f-IIP) and fibrotic nonspecific interstitial pneumonia (f-NSIP) to those with highly elevated pulmonary arterial pressure (PAP) and decreased cardiac index (CI), where the presence of multiple risk factors for IPF in these patients may have additively or synergistically contributed to the therapeutic outcomes reported in these studies. Again, the benefit of early intervention with bosentan may not have been sufficiently explored in those with mildly elevated PAP in these trials, while bosentan was indeed shown to be efficacious against IP in a subset of patients in the BUILD-1 study, despite the observation that many patients with IPF are associated with rapidly elevated PAP as well as progression of IPF and that elevated PAP is associated with poor prognosis [7]. Against this background, the present bosentan study focused on IPF patients with completely organized honeycomb lung without any pulmonary (including IP) lesions who chiefly complained of symptoms suggestive of progressive respiratory failure, i.e., progressive dyspnea with minimal IPF activity. Thus, the IPF patients with completely organized honeycomb lung alone were enrolled in this study to ensure that the subjects in this study had pathologically stable IPF and required regular hospital visits for treatment. 
An interim analysis was performed, however, in patients with borderline or less severe PH (25 mmHg ≤ mean pulmonary arterial pressure [mPAP] at rest <35 mmHg and/or mPAP on effort [mPAPOE] ≥ 30 mmHg), following detection of a greater-than-expected significant survival benefit in patients with borderline or less severe PH treated with bosentan at an early phase of the trial when the number of patients enrolled was still small.\n\nBODY.METHODS.STUDY DESIGN AND METHODS:\nThis was a prospective, single-center, interventional, parallel, randomized, open-label study.\n\nBODY.METHODS.TARGET PATIENT POPULATION:\nThis study was conducted in patients with IPF (WHO functional class II, III or IV) who showed no signs of hypoxia during 6-min walk test (6MWT) and therefore had enough functional capacity for ADL and who gave written informed consent to participate in the study.\n\nBODY.METHODS.ELIGIBILITY CRITERIA:\nTo be included in this study, patients had to fulfill all of the following inclusion criteria but none of the following exclusion criteria:\n\nBODY.METHODS.INCLUSION CRITERIA:\n\n- Patients aged 20 years old or older (both sexes)\n- Patients diagnosed at this hospital as having IPF (WHO functional class II, III or IV) without hypoxia at rest or during 6MWT (to exclude those with decreased ADL and dyspnea in daily living associated with hypoxia and to minimize the influence of hypoxic pulmonary vasoconstriction [HPV] as a potential cause of PH associated with decreased partial pressure of oxygen in arterial blood [PaO2]) (PaO2 < 60 mmHg)*.\n*Including those whose hypoxia (PaO2 < 60 mmHg) had been corrected with long-term oxygen therapy (LTOT)\n- Patients with stable IPF who had not required any change of treatment within 3 months prior to study entry, i.e., those confirmed to have completely organized honeycomb lung and no active inflammatory lesion, such as ground glass opacity (GGO) (chronic IIP based on high-resolution computed tomography (CT) findings for which no effective therapy exists); and who presented to our hospital for the first time with symptoms of progressive respiratory failure and had not received any medical treatment for IPF within 3 months prior to their visit.*\n*Excluding those whose progressive respiratory failure required no treatment for IPF itself and those who had an increased LTOT dose as a minimum requirement for progressive respiratory failure.\n- Inpatients and outpatients\n- Patients who provided written informed consent to participate in this study\n\n\nBODY.METHODS.EXCLUSION CRITERIA:\n\n- Patients who had received bosentan or any other drug specific for PAH (e.g., phosphodiesterase type 5 [PDE-5] inhibitors, endothelin receptor antagonists, or prostaglandin analogs) prior to their enrollment\n- Patients with any disease that could cause right heart overload\n- Patients with hypoxia during 6MWT (PaO2 < 60 mmHg)*.\n*Excluded were those whose hypoxia (PaO2 < 60 mmHg) had been corrected with LTOT (i.e., those in whom LTOT is in place to ensure PaO2 > 60 mmHg both at rest and during 6MWT, who were deemed equivalent to IPF patients receiving routine therapy in clinical practice to allow them to be monitored for changes in their condition, prognosis and functional capacity for ADL).\n- Women who were pregnant or might have been pregnant, and who were lactating\n- Patients with moderate or severe liver disorder\n- Patients receiving treatment with cyclosporine, tacrolimus, or glibenclamide\n- Other patients judged by the investigator to be ineligible for this study (e.g., those with any disease or condition other 
than IPF that might affect their ADL, such as arrhythmia, LV failure, pulmonary thromboembolism, connective tissue diseases, intervertebral disc herniation, as they were confirmed by history taking, physical examination, chest x-ray, echocardiography [ECG], lung perfusion scintigraphy, and measurements of various parameters conducted during the run-in period).\n\n\nBODY.METHODS.GROUPING OF PATIENTS:\nIn order to include those with minimal IPF activity alone, of all patients first diagnosed with IPF at our hospital based on the presence of chronic f-IIP as confirmed by high-resolution CT findings, those whose chief complaints suggested progressive respiratory failure and who were suffering from progressive dyspnea were evaluated for PAP by right heart catheterization (RHC) and ECG, as well as for right heart function by ECG. According to the current diagnostic criteria, if mPAP at rest is <25 mmHg, the patient is not diagnosed as having PH even if the mPAPOE is ≥30 mmHg. In our study, however, this state was defined as borderline PH representing a very mild form of PH; and besides, 25 mmHg ≤ mPAP at rest <35 mmHg was defined as less severe PH. Since the aim of this study was to evaluate the efficacy and safety of early therapeutic intervention with bosentan in PH, borderline PH or PH was diagnosed if mPAP at rest was ≥25 mmHg and/or mPAPOE was ≥30 mmHg and mPAWP was ≤15 mmHg. Moreover, borderline PH or less severe PH was defined as mPAP <25 mmHg and mPAPOE ≥30 mmHg or 25 mmHg ≤ mPAP <35 mmHg; severe PH was defined as mPAP ≥35 mmHg; and non-borderline PH or PH was defined as non-PH (mPAP <25 mmHg and mPAPOE <30 mmHg).\n\nBODY.METHODS.GROUPING OF PATIENTS.DRUG-TREATED AND UNTREATED PATIENTS:\nAll patients who met the eligibility criteria and gave informed consent to participate in this study were evaluated for PAP and right heart function and those with mPAP at rest ≥25 mmHg and/or mPAPOE ≥30 mmHg were stratified by mPAP: mPAP at rest <35 mmHg vs. ≥ 35 mmHg. Patients in each subgroup were randomly allocated to either bosentan (drug-treated group) or no treatment (untreated group) by the envelope method. Patients diagnosed with non-PH (mPAP at rest <25 mmHg with mPAPOE <30 mmHg) were assigned to the untreated group. The drug-treated group comprised those who were diagnosed at this hospital as having IPF without hypoxia (PaO2 > 60 mmHg) and who gave informed consent to participate in this study after PAP and right heart function assessments. The untreated group comprised those diagnosed at this hospital as having IPF without hypoxia (PaO2 > 60 mmHg) and who gave their informed consent to participate in this study after PAP and right heart function assessments. This group included those with severe PH (mPAP at rest ≥35 mmHg), borderline or less severe PH (25 mmHg ≤ mPAP at rest <35 mmHg and/or mPAPOE ≥30 mmHg), and non-PH (without borderline PH or PH). (Fig. 1; see also Additional file 1).Fig. 1Patient flowchart\n An interim analysis was performed, however, in patients with borderline or less severe PH (25 mmHg ≤ mPAP at rest <35 mmHg and/or mPAPOE ≥30 mmHg), following detection of a greater-than-expected significant survival benefit in patients with borderline or less severe PH treated with bosentan, at an early phase of the trial when the number of patients enrolled was still small. 
The study required that IPF patients be randomized to drug-treated and untreated groups to investigate their clinical course in real-world settings, with no change of treatment allowed including bosentan for 2 years or until their death as a rule, except for minimal symptomatic therapy (including oxygen volume adjustments required to ensure similar oxygen conditions among the patients), which met none of the exclusion criteria.\n\nBODY.METHODS.TARGET SAMPLE SIZE:\nSee Additional file 2.\n\nBODY.METHODS.OUTCOME MEASURES:\nECG examinations were carried out during the run-in period* and every 6 months thereafter**. Complete two-dimensional, pulsed-wave, color-flow echocardiography was performed using the Toshiba ultrasound system Xario (TOSHIBA MEDICAL SYSTEMS CORPORATION, Tochigi, Japan) as previously described [14–25]. (See also Additional file 3). Doppler measurements were carried out during the run-in period* and every 6 months**. (See also Additional file 3 and Additional file 4: Fig. S1). RHC was carried out during the run-in period* and every 6 months thereafter**. Hemodynamic parameters (systolic PAP [SPAP]; diastolic PAP [DPAP]; mean PAP [mPAP]; systolic PAWP [SPAWP]; diastolic pulmonary capillary wedge pressure [DPAWP]; mean PAWP [mPAWP]; systolic right ventricular pressure [SRVP]; diastolic RVP [DRVP]; mean RVP [mRVP]; systolic right atrial pressure [SRAP]; diastolic RAP [DRAP]; mean RAP [mRAP]; and cardiac output [CO]) and pulmonary vascular resistance (PVR) were measured, with the patient in the supine position, via the internal jugular vein and using a Swan-Ganz continuous cardiac output (CCO) thermodilution flow-directed pulmonary artery catheter (Edwards Lifesciences LLC, USA). Cardiac output was measured by the thermodilution method using a Vigilance hemodynamic monitor (Edwards Lifesciences LLC, USA). Systolic PAP on effort (SPAPOE), diastolic PAP on effort (DPAPOE) and mean PAP on effort (mPAPOE) were measured while patients were clasping and opening both hands repeatedly by putting a full strain on the body. Furthermore, mixed venous blood gas analysis was performed.\n\nBODY.METHODS.SURVIVAL ANALYSIS:\nHospital-free survival and overall survival were determined by the duration of survival from week 0 (start of assessment), i.e., as the date treatment started for the drug-treated group and 2 weeks after RHC for the untreated group. Even for those unable to undergo the periodic assessments due to change of their attending physician, etc., this survival analysis was continued by contacting the patient's current physician to have his/her survival status confirmed. Patients were censored from hospital-free survival if they could no longer continue ambulatory treatment and were admitted to another hospital or if they could no longer present to our hospital for progression of respiratory failure.\n\nBODY.METHODS.ADVERSE EVENTS:\nAll patients were assessed for adverse events during the run-in period* and every 4 weeks thereafter**, as well as based on their medical records on unscheduled visits to our outpatient clinic. Even for those unable to undergo the periodic assessments due to change of their attending physician, adverse events were assessed by contacting their current physician to have these events confirmed.\n\nBODY.METHODS.OTHER PARAMETERS:\nPulmonary function test (PFT) was carried out during the run-in period* and every 6 months thereafter**. (See also Additional file 3). 
ADL assessments including exercise tolerance test [26–28] were performed during the run-in period* and every 6 months thereafter**. (See also Additional file 3 and Additional file 5: Figure S2). Those in whom LTOT was in place to ensure adequate oxygen inhalation during 6MWT (deemed equivalent to IPF patients receiving routine therapy in clinical practice to allow them to be monitored for changes in their condition, prognosis and functional capacity for ADL) were assessed for treadmill exercise test (TMET) with LTOT in place. Arterial blood gas (ABG), arterial plasma lactate, brain natriuretic peptide (BNP) and N-terminal (NT)-proBNP were determined during the run-in period and every 6 months thereafter. [29] (See also Additional file 3). Hematology, biochemistry and urinalysis were performed during the run-in period* and every 4 weeks thereafter**. *Run-in period: Within 2 weeks after written informed consent was obtained from each patient. **Every 6 months: Consecutive 6 months with a ± 1-week window counting from week 0 (start of assessment) defined as the date drug treatment started for the drug-treated group and 2 weeks after RHC for the untreated group. However, the periodic assessments not conducted in patients as planned based on the attending physician's judgement were deemed acceptable, unless they met any of the criteria for discontinuation of the study (e.g., pneumonia, etc.) (Fig. 1 and Additional file 6: Figure S3).\n\nBODY.METHODS.STUDY DRUG:\nBosentan was administered, as a rule, according to the approved dosage and administration. Bosentan is to be usually initiated in adults at a dose of 62.5 mg twice daily after breakfast and dinner for 4 weeks and increased to a dose of 125 mg twice daily after breakfast and dinner from week 5 of treatment onwards with the dosage adjusted according to the patient's symptoms and tolerability, but not exceeding 250 mg per day. In this study conducted in routine clinical settings, however, it was acceptable to continue treatment at the initial dosage if deemed by the investigator to be appropriate based on the patient's condition (see also Additional file 7).\n\nBODY.METHODS.CONCOMITANT DRUGS AND THERAPIES:\nDrugs allowed for use in the study included drugs intended for the treatment of the underlying disease (IPF) and drugs, other than drugs specific for PAH, for the treatment of PH as required for aggravation of PH. Drugs prohibited for use included cyclosporine, tacrolimus, glibenclamide and other drugs specific for PAH (e.g., PDE-5 inhibitors, endothelin receptor antagonists and prostaglandins) as well as any other investigational drug.\n\nBODY.METHODS.STUDY PERIOD:\nThe study was conducted for 24 months between September 2010 and September 2022 with patient enrollment lasting until January 2020 (see Additional file 8).\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nData are expressed as the mean ± standard deviation (SD). Changes from baseline in individual outcome measures were compared between drug-treated and untreated patients, and analyzed for statistical significance. Analysis on paired data was performed using Mann-Whitney U test. Changes in trend over time were analyzed using the Residual Maximum Likelihood (REML) or least squares method. All statistical analyses were performed using JMP version 11.2.1 (SAS Institute Inc., Cary, NC). 
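To make the group comparisons described in this section concrete, a minimal sketch follows; it assumes Python with scipy rather than the JMP software actually used, and the values are fabricated placeholders rather than trial data.

```python
# Minimal sketch of an unpaired between-group comparison (Mann-Whitney U test),
# as used for the untreated vs. drug-treated comparisons; values are fabricated.
from scipy.stats import mannwhitneyu

untreated_change = [4.0, 6.5, 3.0, 5.5, 7.0, 2.5]          # hypothetical changes in mPAP (mmHg)
drug_treated_change = [-3.0, -1.5, -4.0, 0.5, -2.5, -3.5]  # hypothetical changes in mPAP (mmHg)

u_stat, p_value = mannwhitneyu(untreated_change, drug_treated_change,
                               alternative='two-sided')
print(f'U = {u_stat:.1f}, two-sided P = {p_value:.4f}')
```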
A two-sided P value of <0.05 was considered to indicate a statistically significant change.\n\nBODY.RESULTS.PATIENTS:\nThis report presents the results of an interim analysis of the IPF patients in this study. A total of 32 IPF patients were enrolled in this study between February 2011 and June 2016, who comprised all the outpatients who had met the study entry criteria. At the time of their initial presentation to our hospital, all patients were confirmed to have chronic fibrotic idiopathic interstitial pneumonia (IIP) based on high-resolution CT findings of completely organized honeycomb lung with basal predominance in bilateral subpleural regions for which no effective therapy exists. Patients chiefly complained of symptoms of progressive respiratory failure. While all patients confirmed to have no progressive pulmonary fibrosis on CT were given detailed explanations as to the potential adverse effects associated with the use of antifibrotics, such as pirfenidone or nintedanib, which had only recently been launched in Japan and was indicated for very few patients, as well as the costs due under current health insurance, none were confirmed to have received any treatment specific for IPF (e.g., pirfenidone or nintedanib) within 3 months prior to their enrollment or wished to receive any antifibrotic drug after the first 3 months or later, and none dropped out because of any treatment given, other than symptomatic treatment, for PH as the underlying disease, such as calcium channel blockers. Of these 32 patients, the following 3 patients were excluded from the study: 1 who was found to have cancer during the study, which had probably existed at the time of enrollment (untreated, borderline or less severe PH group), 1 who developed symptoms of disc hernia during the run-in period (non-PH group), and 1 who died from aspiration pneumonia during the run-in period before the start of bosentan therapy (drug-treated, borderline or less severe PH group). The remaining 29 patients who had completed the study or were still on the study treatment were included in the present analyses. Of these 29 patients, 3 (including 1 female) had no borderline PH or PH (non-borderline PH/PH) and the remaining 26 patients with borderline or less severe PH, or severe PH were randomized to receive or not to receive bosentan therapy. Of these, 13 were in the drug-treated group and the other 13 were in the untreated group, including 1 in each group confirmed to have mPAP at rest ≥35 mmHg (severe PH), and 12 in the drug-treated group (age range, 56–76 years old) and 12 in the untreated group (age range, 51–80 years old) confirmed to have mPAP at rest <35 mmHg (borderline or less severe PH). Patient demographics and characteristics were similar between the untreated, borderline or less severe PH group and the drug-treated, borderline or less severe PH group (Table 1).Table 1Clinical characteristics of subjects with borderline or less severe PH (mPAP <35 mmHg)Untreated borderline or less severe PHDrug-treated borderline or less severe PH\nP\n*\nNo. (male/female)12(8/4)12(9/3)0.66Age (y.o.)70.50 ± 7.9766.92 ± 6.450.11Height (cm)160.04 ± 10.11160.87 ± 10.070.84Weight (kg)62.067 ± 12.1754.95 ± 12.720.25No. 
of patients with LTOT870.67ADL including exercise tolerance testWHO functional class2.67 ± 0.782.83 ± 0.830.78mMRC score2.42 ± 1.0842.33 ± 1.440.98SGRQ score Symptoms56.10 ± 22.8745.78 ± 28.960.52 Activity61.60 ± 22.3555.18 ± 33.210.98 Impact37.53 ± 23.1134.38 ± 22.180.91 Total49.42 ± 21.2743.93 ± 26.410.77SF36 Physical functioning (PF)45.83 ± 21.4160.42 ± 26.410.11 Role physical (RP)38.58 ± 21.9651.058 ± 39.160.56 Bodily pain (BP)72.17 ± 26.3080.00 ± 25.230.45 General health (GH)40.67 ± 18.3446.75 ± 20.0670.49 Vitality (VT)49.34 ± 20.5558.36 ± 29.230.40 Social functioning (SF)56.25 ± 26.3868.75 ± 33.500.35 Role emotional (RE)62.51 ± 30.04867.36 ± 37.690.52 Mental health (MH)63.75 ± 19.6765. 00 ± 27.880.62Right heart cardiography mPAP (mmHg)20.83 ± 5.7521.17 ± 7.730.93 mPAPOE (mmHg)42.67 ± 12.7842.58 ± 8.870.45 mPAWP (mmHg)6.83 ± 3.796.28 ± 3.570.76 mRVP (mmHg)14.42 ± 3.5814.083 ± 6.570.31 mRAP (mmHg)2.50 ± 1.683.00 ± 2.130.45 CO (L/min)4.80 ± 1.125.10 ± 1.270.82 CI (L/min/m2)2.90 ± 0.563.21 ± 0.630.38 PVR (wood)3.12 ± 1.653.022 ± 2.00310.95 PVRI5.073 ± 2.674.58 ± 2.611.00Mixed venous PHv7.39 ± 0.0297.40 ± 0.0260.45 PvCO2 (mmHg)49.34 ± 6.03648.15 ± 4.220.82 PvO2 (mmHg)36.69 ± 3.8937.55 ± 3.930.60 SVO2 (%)68.77 ± 5.4770.53 ± 5.710.66PFT %VC (%)68.34 ± 16.9269.55 ± 22.620.98 FVC (L)2.0033 ± 0.572.087 ± 0.800.86 %DLCO (%)30.72 ± 16.001927.37 ± 23.760.25TTE ET (msec)299.71 ± 55.95263.83 ± 36.160.18 PAAcT (msec)98.67 ± 32.6594.75 ± 11.650.58 AcT/ET0.33 ± 0.0870.37 ± 0.0600.23 PEP (msec)92.71 ± 12.6587.42 ± 18.620.12 ICT (msec)17.083 ± 19.9621.75 ± 21.730.70 IRT (msec)55.71 ± 45.002352.42 ± 36.490.95 ICT + IRT (msec)87.79 ± 64.5472.82 ± 46.340.69 TEI index0.32 ± 0.270.30 ± 0.250.98 TAPSE(cm)2.32 ± 0.472.27 ± 0.550.75 Diastolic RA area (cm2)8.20 ± 3.2110.64 ± 4.910.29 Diastolic RA major axis (cm)4.35 ± 2.00374.20 ± 2.0180.66 Systolic RA area (cm2)4.74 ± 2.105.46 ± 2.600.60 Systolic RA major axis (cm)2.95 ± 0.992.65 ± 0.550.25 Diastolic RV area (cm2)16.090 ± 6.5715.39 ± 7.960.33 Diastolic RV major axis (cm)6.38 ± 1.286.22 ± 1.0980.49 Systolic RV area (cm2)9.27 ± 3.479.38 ± 4.380.64 Systolic RV major axis (cm)4.98 ± 1.424.95 ± 0.890.60 RVEF (%)58.91 ± 12.4551.98 ± 13.290.13Aortic Blood data at rest pH7.41 ± 0.0277.42 ± 0.0220.25 PO2 (mmHg)76.84 ± 10.09182.46 ± 7.930.11 Aortic oxygen saturation (%)95.02 ± 1.5595.85 ± 1.200.12 BNP (pg/ml)29.42 ± 20.2620.76 ± 13.100.34 NT-proBNP (pg/ml)93.33 ± 60.1569.67 ± 48.210.45 LA (mg/dl)11.82 ± 4.08210.00 ± 3.530.33TMET METS3.55 ± 1.893.96 ± 2.540.81Post-TMET Aortic Blood data Post-TMET pH7.34 ± 0.0617.36 ± 0.0690.45 Post-TMET PCO2 (mmHg)46.94 ± 12.2243.017 ± 6.750.66 Post-TMET PO2 (mmHg)54.075 ± 15.9367.23 ± 14.710.18 Post-TMET oxygen-Sat (%)80.35 ± 18.07790.85 ± 4.420.14 Post-TMET BNP (pg/ml)40.20 ± 34.8835.62 ± 46.660.27 Post-TMETNT-proBNP (pg/ml)102.83 ± 67.48108.67 ± 124.950.64 LA post TMET – LA at rest (mg/dl)24.68 ± 20.01222.82 ± 18.880.98 6MWD246.18 ± 104.27296.63 ± 128.00900.31Post-6 MW Aortic Blood data Post-6MWT pH7.39 ± 0.0217.40 ± 0.0390.15 Post-6 MW-PCO2 (mmHg)41.042 ± 8.3242.64 ± 5.420.53 Post-6 MW-PO2 (mmHg)77.20 ± 30.9872.067 ± 15.790.91 Post-TMET Oxygen-Sat (%)92.58 ± 4.1390.00 ± 8.350.69 Post-6 MW-BNP (pg/ml)34.52 ± 25.6625.080 ± 23.950.33 Post-6 MW-NT-proBNP (pg/ml)98.50 ± 75.1380.67 ± 72.410.47 LA post-6 MW – LA at rest (mg/dl)8.60 ± 8.315.42 ± 8.340.14Data presented as mean ± SD\n*\nP value for Mann-Whitney U test to assess the difference between the untreated and drug-treated patients with borderline or less severe 
PH\n\n\nBODY.RESULTS.ADVERSE EVENTS (TABLE .EXACERBATION OF SUBJECTIVE SYMPTOMS OF DYSPNEA (TABLE :\nOf the 12 untreated patients with borderline or less severe PH, 7 were confirmed to have experienced exacerbation of subjective symptoms of dyspnea based on the data obtained at the cut-off date, with the time to exacerbation of dyspnea being 152.00 ± 89.94 days (mean ± SD). Of 12 the drug-treated patients with borderline or less severe PH, 3 were confirmed to have experienced exacerbation of dyspnea based on the data obtained at the cut-off date, with the time to exacerbation being 259.00 ± 49.87 days (mean ± SD). Proportional hazard analysis showed that the risk ratio of the drug-treated group to the untreated group was 0.32, but with no significant difference.Table 2Adverse events observed in untreated and drug-treated patients with borderline or less severe PHUntreated borderline or less severe PHDrug-treated borderline or less severe PHExacerbation of dyspnea73Time to exacerbation of dyspnea (mean ± SD) (days)152.00 ± 89.94259.00 ± 49.37Increase of the O2 dose52Time to O2 dose increase (mean ± SE) (days)199.00 ± 132.90,335.00 ± 182.43Decrease of the O2 dose01Hospitalization (hospital-free survival)8 (241.50 ± 192.24)2 (239.002 ± 169.00)Death (survival)7 (309.29 ± 195.13)1 (671)Other adverse events3a\n6b\n\na Gastrointestinal hemorrhage (n = 1), pneumonia (n = 1), and ileus (n = 1)\nb Pneumothorax (n = 3), CHF (n = 2), and liver dysfunction (n = 1)\nFig. 2Analysis of survival by adverse event. a Analysis of the time to exacerbation of subjective dyspnea. Among the untreated patients with borderline or less severe PH, the time to exacerbation of dyspnea was 152.00 ± 89.94 days (mean ± SD) in 7 of 12 patients confirmed to have experienced exacerbation of subjective symptoms of dyspnea by the data obtained on the cut-off date. Among the drug-treated patients with borderline or less severe PH, the time to exacerbation of dyspnea was 259.00 ± 49.87 days (mean ± SD) in 3 of 12 patients confirmed to have experienced exacerbation of dyspnea by the data obtained on the cut-off date. Proportional hazard analysis showed that the risk ratio of the drug-treated to untreated groups was 0.58, but with no significant difference noted. The time to exacerbation of dyspnea at the time of analysis was 218.17 ± 35.62 days (mean ± SE) in the untreated group and 290.71 ± 12.036 days in the drug-treated group, but with no significant difference noted. b Analysis of the time to an increase in the dose of O2 (event). Increase of the O2 dose: In the untreated patients with borderline or less severe PH, the time to the dose increase was 199.00 ± 132.90 days (mean ± SD) in 5 of 12 patients confirmed to have required an increase of the dose of O2 by the data obtained on the cutoff date. In the drug-treated patients with borderline or less severe PH, the time to the dose increase was 335.00 ± 182.43 days (mean ± SD) in 3 of 12 patients confirmed to have required an increase of the O2 dose based on the data obtained on the cut-off date. The risk ratio analysis showed that the hazard ratio of the drug-treated to untreated groups was 0.58. The time to O2 dose increase at the time of analysis was 357.71 ± 50.83 days in the untreated group and 438.20 ± 34.61 days in the drug-treated group with no significant difference between the groups, despite the results favoring the drug-treated group. 
In addition, only 1 drug-treated patient with borderline or less severe PH achieved a decrease of the O2 dose on day 243 due to an improvement of respiratory function. c Hospital-free survival. Of the 12 untreated patients with borderline or less severe PH, 8 were confirmed to have been hospitalized (event) by the data obtained on the cut-off date with the time to hospitalization being 241.50 ± 192.24 days (mean ± SD). Of the 12 drug-treated patients with borderline or less severe PH, 2 was confirmed to have been hospitalized by the data obtained on the cut-off date with the time to hospitalization being 239.002 ± 169.00 days. At the time of survival time analysis, hospital-free survival in the untreated group was 358.87 ± 68.65 days (mean ± SE) (median, 331 days), which was shown to be significantly different from that in the drug-treated group (603.44 ± 50.074) by proportional hazard analysis (hazard ratio [HR] of the drug-treated to untreated groups, 0.10; P = 0.017). d Overall survival. Of the 12 untreated patients with borderline or less severe PH, 7 were confirmed dead (event) by the data obtained on the cut-off date with the time to event being 309.29 ± 195.13 days (mean ± SD); of the drug-treated patients with borderline or less severe PH, 1 was confirmed dead by the data on the cut-off date with the time to event being 671 days. At the time of survival analysis, the time to event in the untreated group was 433.78 ± 66.98 days (mean ± SE), which was significantly different from that in the drug-treated group by proportional hazard analysis (HR of the drug-treated to untreated groups, 0.10; P = 0.0082)\n\n\nBODY.RESULTS.ADVERSE EVENTS (TABLE .INCREASE OF O:\nOf the 12 untreated patients with borderline or less severe PH, 5 were confirmed to have required an increase of the O2 dose based on the data obtained on the cut-off date. Of the 12 drug-treated patients with borderline or less severe PH, 3 were confirmed to have required an increase of the O2 dose based on the data obtained on the cutoff date. The risk ratio analysis showed that the hazard ratio of the drug-treated group to the untreated group was 0.58, with the time to O2 dose increase at the time of analysis being 357.71 ± 50.83 days in the untreated group versus 438.20 ± 34.61 days in the drug-treated group, which was not significantly different despite the fact that the results favored the drug-treated group. Only 1 patient with borderline or less severe PH in the drug-treated group achieved a decrease of the O2 dose on day 243 because of improved respiratory function.\n\nBODY.RESULTS.ADVERSE EVENTS (TABLE .HOSPITAL-FREE SURVIVAL (TABLE :\nOf the 12 untreated patients with borderline or less severe PH, 8 were confirmed to have been hospitalized based on the data obtained on the cut-off date. In contrast, of the 12 drug-treated patients with borderline or less severe PH, 2 were confirmed to have been hospitalized based on the data obtained on the cutoff date. 
At the time of survival time analysis, hospital-free survival was 358.87 ± 68.65 days (mean ± SE) (median, 331 days) in the untreated group, which was significantly different from that in the drug-treated group (603.44 ± 50.074 days) as assessed by proportional hazard analysis (hazard ratio of the drug-treated group to the untreated group, 0.19, P = 0.017; log-rank test, P = 0.019; and Wilcoxon test, P = 0.014).\n\nBODY.RESULTS.ADVERSE EVENTS (TABLE .OVERALL SURVIVAL (TABLE :\nOf the 12 untreated patients with borderline or less severe PH, 7 were confirmed dead (event) based on the data obtained on the cut-off date. Of the 12 drug-treated patients with borderline or less severe PH, 1 was confirmed dead with the time to event being 671 days. At the time of survival analysis, the time to event was 433.78 ± 66.98 days (mean ± SE) in the untreated group, which was shown to be significantly different from that in the drug-treated group as assessed by proportional hazard analysis (hazard ratio of the drug-treated group to the untreated group, 0.10, P = 0.0082; log-rank test, P = 0.011; and Wilcoxon test, P = 0.011).\n\nBODY.RESULTS.ADVERSE EVENTS (TABLE .CLINICAL COURSE:\nEight of the 12 untreated patients received LTOT. Of the 12 patients, 1 completed the 2-year treatment period, 2 were still on the study (with one having completed regular examinations up to month 12 and the other up to month 18), and 1 was not available for the periodic assessments from month 18 onwards due to change of the attending physician after change of address. Of the remaining 8 patients, 7 were censored from hospital-free survival analysis due to progression of respiratory failure, including 5 and 1 who were not available for the periodic assessments other than mMRC, 6MWD and TMET from months 6 and 18 onwards, respectively, and were later confirmed dead. One patient was confirmed alive at the time of analysis but was not available for the periodic assessments other than mMRC, 6MWD and TMET from month 12 onwards. The remaining 1 patient developed ileus and was confirmed to have died due to a disease other than lung disease at another hospital and was completely excluded from the periodic assessments from month 6 onwards. As for the periodic assessments with mMRC, TMET and 6MWT, of the 11 patients assessed by mMRC at month 6, 9 each were further assessed at month 12 and 8 were further assessed at month 18, and 7 completed the assessments at month 24, including 1 patient who was assessed at months 18 and 24 by contacting the patient's current physician after change of address. Of the 11 patients assessed by TMET and 6MWT at month 6, 8 were further assessed at month 12, and 7 completed the assessments at months 18 and 24. Seven of the 12 drug-treated patients included in the analysis received LTOT. Four patients completed the study after finishing the assessments at month 48 and 1 of the remaining 8 patients withdrew from the study before month 6 due to hepatic dysfunction. Another patient withdrew from the study due to lung cancer detected on day 518. The last two patients were censored from hospital-free survival analysis due to exacerbation of respiratory failure on days 641 and 303, respectively, with the former confirmed dead on day 671. 
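As an illustration of the kind of survival comparison reported in the two preceding subsections (hazard ratios from a proportional-hazards model plus log-rank testing), the following is a hedged sketch using the lifelines package on fabricated event times; it is not the authors' analysis, which was run in JMP, and the numbers are not trial data.

```python
# Hedged illustration of a survival comparison; event times below are fabricated.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    'days':    [120, 250, 331, 400, 520, 600, 640, 660, 671, 700],
    'event':   [1,   1,   1,   1,   1,   0,   0,   0,   1,   0],   # 1 = event observed
    'treated': [0,   0,   0,   0,   0,   1,   1,   1,   1,   1],   # 1 = bosentan group
})

# Cox proportional-hazards model: exp(coef) is the hazard ratio for 'treated'
cph = CoxPHFitter()
cph.fit(df, duration_col='days', event_col='event')
print(cph.summary[['exp(coef)', 'p']])

# Log-rank test between the two groups
treated = df[df['treated'] == 1]
untreated = df[df['treated'] == 0]
result = logrank_test(treated['days'], untreated['days'],
                      event_observed_A=treated['event'],
                      event_observed_B=untreated['event'])
print(f'log-rank P = {result.p_value:.4f}')
```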
The remaining 4 patients were still on the study treatment (with 1 having completed regular examinations at baseline alone, 1 up to month 6, 2 up to month 12, and 1 up to month 18).\n\nBODY.RESULTS.LUNG FUNCTION AND RHC.DRUG-TREATED PATIENTS WITH BORDERLINE OR LESS SEVERE PH:\nCompared with baseline (Table 1b), significant changes were noted in lung function %DLCO at months 6 and 12 (month 6, +7.011, P = 0.010; month 12, +12.18, P = 0.0025) (See Additional file 9: Figure %DLCO). Compared with baseline (Table 2b), there was a decreasing trend in mPAP at months 6 and 12 although no significant difference was noted (month 6, −2.60, P = 0.098, R = 0.84; month 12, −1.71, P = 0.38, R = 0.83). A similar trend was observed for PVR (month 6, −0.69, P = 0.11, R = 0.88; month 12, −0.41, P = 0.41, R = 0.87). Compared with baseline, there was a significant improvement in mixed venous saturation of oxygen at month 6 (+4.78, P = 0.037, R = 0.45), but no significant change was noted from baseline to month 12. Moreover, significant differences were observed in the drug-treated patients with borderline or less severe PH with regard to changes in mPAP, PVR and PVRI from baseline to month 6 (untreated vs. drug-treated: mPAP, +4.71 vs. -2.60 mmHg, P = 0.0035; PVR, +1.60 vs. -0.69 woods, P = 0.0020) (Fig. 3).Fig. 3Results of RHC. a Comparison of changes in mPAP from baseline to month 6 between untreated and drug-treated patients with borderline or less severe PH. b Comparison of change in PVR from baseline to month 6 between untreated and drug-treated patients with borderline or less severe PH. Significant differences were seen between the untreated and drug-treated patients with borderline or less severe PH with regard to changes in mPAP, and PVR from baseline to month 6 (untreated vs. drug-treated: mean difference in mPAP, 4.71 vs. -2.60 mmHg, P = 0.0035; PVR, +1.60 vs. -0.69 woods, P = 0.0020)\n It will be a long time, however, before comparisons of data can be made between the groups for month 12 onwards, with many untreated patients with borderline or less severe PH having been censored from hospital-free survival analysis with an even smaller number of patients available for the periodic assessments. Results for other assessment parameters (See Additional file 10, Additional file 11: Figure ADL, Additional file 12: Figure TTE, and Additional file 13: Figure Arterial blood analysis). Overall, while drug-treated patients with borderline or less severe PH tended to fare better than untreated patients with borderline or less severe PH, it was difficult to draw any conclusion due to the small number of patients currently available for analysis, especially in untreated patients with borderline or less severe PH. Thus, while the study appears to provide potentially valuable findings at this stage, their relevance and/or validity require to be closely examined when the final data of this trial become available.\n\nBODY.DISCUSSION:\nMany patients with IPF experience a rapid elevation of PAP as well as progression of IPF [7] and elevated PAP is shown to be associated with poor prognosis [7]. Therapies currently available for slowing the progression of fibrosis, such as pirfenidone, cannot be expected to improve IPF [30]. Besides, for any honeycomb lung that has become completely organized, no realistic treatment options are available, other than symptomatic relief with LTOT, to neutralize the progression of respiratory failure. 
In addition, it remains largely unclear how IPF and associated PH may interact to influence each other. Again, in agreement with the ATS/ERS/JRS/ALAT Clinical Practice Guideline 2015 that remains inconclusive with regard to the effect of drugs for PAH on IPH except in a subset of cases, recommending dual ERAs in patients with IPF recommended as \"worthwhile considerations\" and bosentan as \"a conditional recommendation against use\" [31], our study provided no definite clue as to how IPF and associated PH may interact. Furthermore, while our study did not allow PH and lung fibrosis to be examined for any relationship due to its small sample size, it did show no significant correlation between FVC and mPAP, suggesting that how PH and lung fibrosis interact may not readily lend itself to clarification. Against this background, bosentan was shown to be efficacious in a subset of IIP patients in the BULD-1 study, and this is in contrast to the results of a number of randomized controlled trials [10, 32, 33], including the BUILD-3 and ARTEMIS trials, conducted in a wide range of biopsy-proven IPF patients (where the pathology of IPF studied, including the pathologic activity, varied widely, i.e., those with fibrotic IIP, f-NSIP, elevated PAP and decreased CI), which argued against the use of bosentan in patients with PH and f-IIP. Of note, these studies failed to evaluate bosentan in patients perfectly matched for background IPF, suggesting that early intervention with bosentan in those with mildly elevated PAP may not have been sufficiently explored, while the patient background factors may have additively or synergistically contributed to the unfavorable outcomes reported in these studies. With these considerations in mind, we enrolled patients with completely organized honeycomb lung in this study to ensure that all study subjects were as nearly matched for IPF background as possible and that they had pathologically inactive IPF (i.e., they had no active inflammatory lesion, such as GGO) that would require no change of treatment during the study. Since the subjects of this study were required to regularly visit our hospital, patients who presented to our hospital for the first time with symptoms of progressive respiratory failure were enrolled (those without symptoms of progressive respiratory failure were not eligible for treatment at regular intervals at our hospital and were followed up at some other nearby hospital). The present study included IPF patients with completely organized pulmonary fibrosis alone in an attempt to rule out the potential influence of IPF-associated interstitial inflammation and inter-individual differences in IPF status. As a result, the majority of patients enrolled in the study were found to have borderline to less severe PH. While it remains unclear why these patients comprised the majority, in retrospect, ambulatory patient enrollment may have led to those IPF patients who required no emergency or acute intensive care, i.e., those with only honeycomb lesions accompanied by gradually progressive dyspnea, being included in the study, who could therefore represent a selected segment of the entire IPF population. 
Thus, the present bosentan study was conducted in IPF patients with progressive dyspnea despite minimal IPF pulmonary lesion activity, thus ruling out the additive or synergistic influence of IPF lesions on progression of dyspnea to unequivocally demonstrate the impact of therapeutic intervention for PH in progressive dyspnea and to investigate treatment-associated pathophysiological changes in cardiac function using ECG, as well as the long-term efficacy and safety of early therapeutic intervention in PH with bosentan, with borderline PH defined as mean PAP on effort (mPAPOE) ≥ 30 mmHg (including mPAP at rest <25 mmHg). At its early phase involving a small number of patients, this study demonstrated a greater-than-expected significant difference in prognosis between bosentan-treated and untreated patients with progressive respiratory failure who were confirmed to have completely organized honeycomb lung. Based on this finding, we performed an ad hoc interim analysis, which demonstrated that bosentan therapy led to a clearly better prognosis in patients with honeycomb lung suffering from symptoms of progressive respiratory failure than that in untreated patients, which was very poor. At the same time, study results reconfirmed that patients with progressive respiratory failure with completely organized honeycomb lung have a really poor prognosis. Although the possible imbalance in patient characteristics between the two groups may have affected the study results, this study adhered to randomization with the envelope method, thus making such possibility rather unlikely. The main limitation of this study is the small number of patients included in the analysis, but this interim report was prepared following detection of a significant difference in prognosis in patients with borderline or less severe PH treated with bosentan when only half the number of patients targeted had completed the study. Again, with this study still at an exploratory stage, we have no sufficient information to determine the sample size, but an earlier bosentan repeated-dose study (AC-052-111 trial) of patients with PAH (WHO functional class III or above) conducted in Japan provided the rationale for the sample size required (i.e., 11 patients required to conduct a two-sided t-test for AUC with two-sided significance level of 5% and 90% power). Given the current state of clinical trials in this field, the sample size of this study appears to be never too small. It is highly likely that the favorable prognosis seen in the bosentan-treated patients with borderline or less severe PH might have been affected by inclusion of those with IPF experiencing a rapid elevation of PAP (not always seen among the untreated patients with borderline or less severe PH undergoing the periodic assessments). It is also possible, however, that rapid PAP elevation may have led to many patients being excluded from periodic assessments, while slowly progressive PAP elevation may have allowed patients to undergo the periodic assessments, although it is difficult to prove one way or the other. Despite this limitation, however, our study has raised the following possibilities. First, targeting patients with completely organized honeycomb lung and symptoms of progressive respiratory failure might help select patients with rapid PAP elevation. Second, the use of bosentan may be associated with improved prognosis in selected patients similar to those included in this study. 
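The sample-size rationale quoted above (a two-sided t-test at a 5% significance level with 90% power, yielding roughly 11 patients) can be reproduced with a standard power calculation. The sketch below uses statsmodels and an assumed standardized effect size of about 1.45, which is not reported in the text; it is meant only to illustrate how figures of this kind are typically derived, not to restate the AC-052-111 calculation.

```python
# Illustrative two-sample t-test power calculation; the effect size is an assumption,
# not a value reported for the AC-052-111 trial or for this study.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=1.45,        # assumed standardized difference (Cohen's d)
    alpha=0.05,              # two-sided significance level of 5%
    power=0.90,              # 90% power
    alternative="two-sided",
)
print(round(n_per_group))    # roughly 11 per group under these assumptions
```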
Moreover, although previous reports have failed to demonstrate a significant difference in prognosis in the entire IPF patient population with any of the drugs tested, as therapeutic options capable of suppressing the pathology of IPF become available in the future, the use of bosentan in combination with any such option might contribute further to improvements in prognosis. In addition, while the study data remain yet to mature at present, the available data demonstrate improvements in exercise tolerance over time in bosentan-treated patients compared with untreated patients thus favoring bosentan therapy, although many untreated patients were censored from hospital-free survival analysis and excluded from the periodic assessments. If these effects on exercise tolerance can be replicated through accumulation of data from continuation of this study, bosentan may have a role to play in protecting against declines in exercise tolerance by working at various levels. Moreover, changes in respiratory conditions among the drug-treated patients suggest that bosentan therapy may have corrected abnormal breath patterns that tended toward hyperventilation. These results suggest a potential role for bosentan in delaying the progression of respiratory failure and that a decrease in respiratory rate from relief of dyspnea may be linked to a trend toward increased PO2, decreased PCO2 and increased PH after exercise tolerance and stress testing, which corresponded to improvements seen in breathing efficiency after stress testing, despite no significant change in 6MWD in which the patients were assessed at the pace each patient felt comfortable. Otherwise, while the study data also suggest improvements in RV function on TTE in patients with borderline or less severe PH treated with bosentan, this finding requires to be examined at the completion of the study (see Additional file 10). Again, while the drug-treated group did not require LTOT or increasing the O2 inhalation dose, the untreated group reached the endpoint of hospital survival in a very short time with progressive dyspnea. Another limitation of the study is that, among the exercises leading to increased PAP during RHC, it included clasping and opening both hands repeatedly with a full strain exerted on the body, which allowed PAP to be monitored but made it rather difficult to continue the exercise during CO measurement thus making PVR, the most critical of all parameters, less amenable to measurement. While the reasons for the observed increases in PAP during exercise in the study can only be surmised, they may include changes in PAP or increases in pulmonary blood flow associated with PVR during exercise [34–36] and intrathoracic pressure associated with straining on the part of the patients being evaluated. A still further limitation of the study is that all patients with lower values of PAP (mPAP <25 or mPAPOE <30) were included in the untreated group, while all patients should have been randomized. Health insurance in Japan made it hardly feasible, however, to design this study as a randomized trial in which patients without PH would also be randomized to bosentan or any other drug specific for PAH. Again, given that patients with IPF might show variable outcomes, e.g., rapid declines, frequent exacerbations, or slow declines, thus representing a heterogeneous population, patients may have had to be enrolled only after their lung function has been shown to be stable for many years. 
However, it was simply unfeasible to enroll patients only after their lung function had been shown to be stable for some time. Instead, we have further refined the definition of eligible patients as those confirmed to have completely organized honeycomb lung and to have no active inflammatory lesion, such as GGO. In addition, the inclusion criteria initially defined the patient age as ranging between 20 and 40 years old to exclude familial pulmonary fibrosis. Given that this could also exclude IPF, however, the patient age may have been better defined as 40 years old or older and may have led to different outcomes. Indeed, a retrospective analysis of all patients enrolled confirmed that they ranged in age between 51 and 80 years old, which has led us to redefine the patient age as 40 years old or older. While all patients confirmed to have no progressive pulmonary fibrosis on CT were given detailed explanations as to the potential adverse effects associated with the use of antifibrotics, such as pirfenidone or nintedanib, which has only recently been launched in Japan and is indicated for very few patients, as well as the costs due under current health insurance, none of these patients had previously received any medical treatment for IPF within 3 months prior to their visit and none wished to receive any antifibrotic after the first 3 months or later. Thus, further study is warranted to investigate whether various treatment options, including combination therapy with bosentan and an antifibrotic, may lead to further improvements in prognosis in these patients.\n\nBODY.CONCLUSIONS:\nThis was an interim report of our ongoing long-term study conducted to evaluate the effects of bosentan, as a PAH-specific drug, on IPF-associated PH based on detailed data analysis. Despite its limitations, the study appears to suggest that the bosentan-treated group fared remarkably better than the untreated group, while it was thought likely that those without borderline PH or PH, who received no treatment, were associated with a poor prognosis, and those with borderline PH or PH who received bosentan therapy were associated with a better prognosis. Again, study findings suggest that there exists a subset of IPF patients who might benefit from bosentan therapy with regard to improvements in IPF and prognosis. The authors plan to prepare a final report after accrual of further patients required to complete the study.\n\nBODY.ADDITIONAL FILES:\n\nAdditional file 1:Supplementary document on subgroup analysis. Supplementary document on patient grouping. (DOCX 12 kb)\nAdditional file 2:Supplementary data on determination of sample size. Data included as a basis for sample size determination in this study. (DOCX 17 kb)\nAdditional file 3:Supplementary document on parameters. Parameters included for evaluation in this study. (DOCX 19 kb)\nAdditional file 4:Figure S1 Methods for and results of Doppler measurements performed in this study. (PPT 154 kb)\nAdditional file 5:Figure S2 TMET (Treadmill exercise test) protocol as part of the supplementary document on parameters. (PPTX 54 kb)\nAdditional file 6:Figure S3 Schedule for evaluation of parameters in this study. (PDF 1240 kb)\nAdditional file 7:Supplementary information on Guidance for Tracleer Tablets® dosage modification. Guidelines for bosentan dose modification as applied in Japan and used in this study. (DOCX 14 kb)\nAdditional file 8:Supplementary data on the criteria for discontinuation of the study in individual patients. 
Criteria for study discontinuation in individual patients used in this study. (DOCX 16 kb)\nAdditional file 9:Assessment of time-course changes in %DLCO in Drug-treated patients with borderline or less severe PH. A summary of results for %DLCO. (PPTX 117 kb)\nAdditional file 10:Supplementary results for other parameters. A summary of results for other parameters in this study. (DOCX 18 kb)\nAdditional file 11:Figure ADL. Comparison of changes in mMRC between drug-treated and untreated patients with borderline or less severe PH; Comparison of changes in TMET between drug-treated and untreated patients with borderline PH or less severe PH. (PPTX 190 kb)\nAdditional file 12:Figure TTE. Change in PA AcT from baseline to month 12 in drug-treated patients with borderline or less severe PH. (PPTX 61 kb)\nAdditional file 13:Figure Arterial blood analysis. a. Time-course change in PaO2 at rest in drug-treated patients with borderline or less severe PH; b. Change in post 6MWT aortic pH in drug-treated patients with borderline PH or less severe PH. (PPTX 225 kb)\nAdditional file 14:Supplementary data on procedures regarding informed consent. Procedures for informed consent. (DOCX 14 kb)\nAdditional file 15:Supplementary information regarding compensation in case of trial-related injury or death. Compensation or indemnity for injury or death. Scheme for compensation or indemnity for injury or death in this study. (DOCX 14 kb)\nAdditional file 16:Supplementary information regarding medical expenses. Medical expenses anticipated in this study including those to be borne by subjects. (DOCX 14 kb)\n\n**Question:** Compared to no treatment what was the result of bosentan on overall survival of patients with pulmonary hypertension (PH) secondary to idiopathic pulmonary fibrosis (IPF)?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
876
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: How to avoid discontinuation of antihypertensive treatment. The experience in São Paulo, Brazil\n\n ABSTRACT.OBJECTIVES:\nTo evaluate the importance of providing guidelines to patients via active telephone calls for blood pressure control and for preventing the discontinuation of treatment among hypertensive patients.\n\nABSTRACT.INTRODUCTION:\nMany reasons exist for non-adherence to medical regimens, and one of the strategies employed to improve treatment compliance is the use of active telephone calls.\n\nABSTRACT.METHODS:\nHypertensive patients (n = 354) who could receive telephone calls to remind them of their medical appointments and receive instruction about hypertension were distributed into two groups: a) \"uncomplicated\" – hypertensive patients with no other concurrent diseases and b) \"complicated\" - severe hypertensive patients (mean diastolic ≥110 mmHg with or without medication) or patients with comorbidities. All patients, except those excluded (n = 44), were open-block randomized to follow two treatment regimens (\"traditional\" or \"current\") and to receive or not receive telephone calls (\"phone calls\" and \"no phone calls\" groups, respectively).\n\nABSTRACT.RESULTS:\nSignificantly fewer patients in the \"phone calls\" group discontinued treatment compared to those in the \"no phone calls\" group (4 vs. 30; p<0.0094). There was no difference in the percentage of patients with controlled blood pressure in the \"phone calls\" group and \"no phone calls\" group or in the \"traditional\" and \"current\" groups. 
The percentage of patients with controlled blood pressure (<140/90 mmHg) was increased at the end of the treatment (74%), reaching 80% in the \"uncomplicated\" group and 67% in the \"complicated\" group (p<0.000001).\n\nABSTRACT.CONCLUSION:\nGuidance to patients via active telephone calls is an efficient strategy for preventing the discontinuation of antihypertensive treatment.\n\nBODY.INTRODUCTION:\nMost patients with hypertension do not benefit from treatment.1 Data from NHANES/NCHS 1999–2004 showed that 71.8% of individuals were aware of their condition, 61.4% were under current treatment and 35.1% had it under control.2 Although the percentage of individuals with controlled blood pressure in Brazil is unknown, we estimate it to be low because the percentage of patients with their high blood pressure under control is around 30-35% at outpatient clinics that specialize in hypertension, and these clinics do not represent the national reality.3 Many reasons exist for non-adherence to medical regimens, including adverse drug effects, poor instructions, poor provider-patient relationship, poor memory, a patient's disagreement with the need for treatment or inability to afford medication.4,5 One of the strategies employed to improve treatment compliance is the use of active telephone calls6 to patients through which they are given guidance and treatment questions are answered. Using this strategy, several studies have shown an increase in treatment compliance, a higher percentage of blood pressure control6-8 and a decrease in mortality.9-11 Another strategy that aims at improving treatment compliance is the employment of two low-dose medications. Evidence suggests that the combination of two antihypertensive agents provides a higher percentage of blood pressure control due to complementary mechanisms of action. In addition, this results in better tolerability and consequently, better treatment compliance.12-14 The ASCOT study15 (Anglo-Scandinavian Cardiac Outcomes Trial) has indicated a significant reduction in all causes of mortality with the use of current medications for treating hypertension: calcium channel antagonists and angiotensin-converting enzyme inhibitors. It has also been demonstrated that a high incidence of adverse events may decrease treatment compliance. However, in the LIFE16 study, angiotensin receptor antagonists proved to be better at reducing the risks related to cardiovascular events and they resulted in a lower rate of treatment discontinuation when compared to a beta-blocker. In view of these data, we evaluated the importance of providing guidelines to patients via active telephone calls for blood pressure management and preventing treatment discontinuation in hypertensive patients. We used two treatment regimens with low-dose medications that were offered for free to avoid the influence of financial factors. We opted for one regimen called \"traditional\" based on diuretics and beta-blockers and another called \"current\" based on angiotensin II antagonists and calcium channel blockers.\n\nBODY.METHODS:\nThe study participants were selected at the Hypertension Unit, University of São Paulo General Hospital, Nephrology Division, University of São Paulo School of Medicine. The patients had essential hypertension and were able to receive telephone calls to be reminded of their medical appointments and be given guidance about hypertension. Patients were of both genders, from diverse ethnic backgrounds, over 18 years of age, and had body mass indices below 40 kg/m2. 
The patients were enrolled in the study after signing a free and informed consent form. The study was approved by the Ethics Committee of the Board of Directors of the University of São Paulo School of Medicine. Patients were excluded on several bases, including blood pressure <140/90 mmHg without antihypertensive medication, secondary hypertension, white-coat hypertension with systolic pressures ≥140 mmHg and/or diastolic pressures ≥90 mmHg at the doctor's office and awake mean systolic pressures <135 mmHg or awake mean diastolic pressures <85 mmHg without antihypertensive medication, malignant hypertension and patients with a previous history of hypersensitivity reaction to the study medications. Other exclusion criteria were as follows: pregnant women or nursing mothers, the presence of liver dysfunction evidenced by the patient's clinical history or by one of the liver function tests (levels twice the normal values for alkaline phosphatase, total bilirubin, aspartate aminotransferase), patients with clinical conditions that might interfere with the total conformity with the study or those who might have an increased risk for participating in the study, patients with a history of alcoholism, drug abuse or mental disorders that might invalidate the free and informed consent or limit the patient's ability to meet the protocol rules and patients who had participated in any other studies involving investigational drugs or drugs already marketed within the previous month before enrollment in this study or concomitantly with this study.\n\nBODY.METHODS.BLOOD PRESSURE MEASUREMENT:\nBlood pressure and heart rate measurements were performed on the right upper limb on sitting patients five times by the nursing staff using an appropriately sized cuff with a validated automatic oscillometric device (Dixtal, DX2710, São Paulo, Brazil).17 The mean of the last two measurements was calculated and recorded if the difference between these measurements was less than 4 mmHg. If after the five measurements the difference between the last two was greater than 4 mmHg, the measurement was repeated until the difference between the two measurements was less than 4 mmHg. During the study, blood pressure measurements were always performed in the afternoon by the same nursing staff using the same device. The diagnosis of hypertension was made when the mean values of the last two measurements were as follows: systolic pressure ≥140 mmHg and/or diastolic pressure ≥90 mmHg with or without medication at the initial visit (Visit 0). Patients who were receiving antihypertensive medication at the initial visit and had a systolic pressure <140 mmHg or a diastolic pressure <90 mmHg were re-evaluated eight weeks after discontinuation of their medication and introduction of placebo; these patients were included in the study when the mean values included a systolic pressure ≥140 mmHg and/or a diastolic pressure ≥90 mmHg. Controlled blood pressure was defined in two levels as (with the patient in the seated position) a systolic/diastolic pressure less than 140/90 or 120/80 mmHg. All of the patients underwent Ambulatory Blood Pressure Monitoring (ABPM), which was performed with a validated oscillometric device (SpaceLabs 90207, SpaceLabs Inc, Richmond, WA, USA),18,19 five weeks after the start of the placebo to eliminate cases of patients with white-coat hypertension and, in patients who did not receive placebo, to identify the white-coat effect. 
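Because the office blood-pressure value used throughout this trial is itself the output of a small decision rule (five seated readings, then the mean of the last two once they differ by less than 4 mmHg, with extra readings otherwise), a minimal sketch of that rule is given below; `take_reading` is a hypothetical stand-in for a single reading from the oscillometric device, and the handling of a difference of exactly 4 mmHg is an assumption, since the text only specifies the <4 and >4 cases.

```python
# Minimal sketch of the measurement rule described above; not the study's actual software.

def office_blood_pressure(take_reading, initial_readings=5, tolerance_mmhg=4.0):
    """Record the mean of the last two readings once they agree to within the tolerance."""
    readings = [take_reading() for _ in range(initial_readings)]
    # Repeat the measurement until the last two readings differ by less than 4 mmHg
    # (a difference of exactly 4 mmHg is treated here as triggering a repeat).
    while abs(readings[-1] - readings[-2]) >= tolerance_mmhg:
        readings.append(take_reading())
    return (readings[-1] + readings[-2]) / 2.0
```

Applied to systolic and diastolic pressure in turn, this recorded value is what the 140/90 and 120/80 mmHg thresholds in the trial refer to.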
According to clinical characteristics, the patients were assigned to two groups: a) \"uncomplicated\"– hypertensive patients without complications and without other concurrent diseases and b) \"complicated\"- patients with severe hypertension (mean diastolic pressure ≥110 mmHg with or without medication) or comorbidities such as diabetes mellitus, renal failure (serum creatinine >1.4 mg/dL), coronary insufficiency, congestive heart failure or prior history of cerebrovascular accident.\n\nBODY.METHODS.PATIENT RANDOMIZATION:\nAll patients, from both the complicated and the uncomplicated groups, were open-block randomized to receive active telephone calls (\"phone calls\" group) or not to receive telephone calls (\"no phone calls\" group) and to follow two treatment regimens, \"traditional\" or \"current\". Thus, after the first visit and randomization, patients from the \"phone calls\" group were invited to enroll by telephone in a program called \"Biosintética Assistance\", supported by Biosintética Laboratory. The patients who subscribed started receiving active telephone calls from appropriately trained operators and also started receiving magazines with health-related information, which were sent periodically by mail. There were six contacts by telephone during the study. During phone calls, the patient was reminded to attend the next visit, and he/she was educated about hypertension and any necessary clarifications about his/her treatment. All patients randomized to the \"phone calls\" group were invited to attend occasional informative lectures with the participation of a multidisciplinary team. At the initial stage of treatment (following eight weeks of treatment with placebo), the \"uncomplicated\" group received one of the following treatment regimens: a) \"traditional\" treatment with 6.25 mg 2x/day hydrochlorothiazide and 25 mg 2x/day atenolol; b) \"current\" treatment with 25 mg 2x/day losartan and 2.5 mg 2x/day amlodipine. If the blood pressure could not be controlled during the visits, the medication doses were doubled or another antihypertensive was added. The \"complicated\" group did not undergo the treatment period with placebo and was randomized to receive either \"traditional\" or \"current\" drug regimens similar to the ones administered to the \"uncomplicated\" group, though the specifics of each condition were carefully considered. The addition of other antihypertensive agents in the \"uncomplicated\" group and the specificities of the patient regimens in the \"complicated\" group were performed according to the guidelines from the V Brazilian Guidelines on Arterial Hypertension.3 All patients were instructed to take their medication every day at 7:00 AM and 7:00 PM, with a variation of up to one hour. A 12-month supply of the necessary medication was supplied to the patients by the physician at the end of the visit at no cost to eliminate the financial factor in this analysis; patients were given enough medication to last between visits. Patients were instructed to bring the remaining pills to their subsequent visits, at which time they were counted by the nursing staff without the patients' knowledge of this procedure. Doctors' visits, preceded by the nursing staff visit, took place every eight weeks for 56 weeks and included measurements of blood pressure, heart rate and weight. The weight was checked with the patient wearing light clothes standing barefoot on a scale (model 2096PP, Toledo do Brasil, São Paulo, Brazil). 
Study withdrawal was characterized by non-attendance to appointments for up to three months after the scheduled date. Patients who returned to the medical clinic within three months after the scheduled date were allowed to continue in the study and be evaluated in an unscheduled visit. The tests performed during the placebo treatment and after 40 weeks of active treatments included the following: fasting glucose, urea, creatinine, total cholesterol, fractions of cholesterol, triglycerides, uric acid, total bilirubin, creatine phosphokinase (CPK), sodium (Na+), potassium (K+), hemoglobin, thyroid-stimulating hormone (TSH), alkaline phosphatase, aspartate aminotransferase (AST), alanine aminotranferase (ALT) and urinary excretion of sodium in 24 h.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nData were analyzed using an analysis of variance (ANOVA) according to the factor of controlled or uncontrolled hypertension. P<0.05 was considered statistically significant.\n\nBODY.RESULTS:\nThe study included 398 patients. Of those, 44 were excluded for the following reasons: a) lack of enrollment in the Biosintética Assistance program (n = 17); b) body mass index >40 kg/m2 (n = 6); c) secondary hypertension (n = 4); d) white-coat hypertension (n = 3); e) white-coat normotension (\"masked hypertension\") (n = 3); f) alcohol use (n = 2); g) classification error (n = 4); h) refusal to do ABPM (n = 2); i) loss of blood pressure measurements from the first visit (n = 1); j) pregnancy (n = 1); k) death (n = 1). A total of 354 patients were evaluated. There were no statistically significant differences in age, gender, skin color, body mass index, blood pressure or heart rate between the groups classified as: \"complicated\" (n = 175) and \"uncomplicated\" (n = 179); \"traditional\" (n = 176) and \"current\" (n = 178); \"phone calls\" (n = 108) and \"no phone calls\" (n = 246) (Table 1). In addition, no differences were seen in marital status, education or occupation.\n\nBODY.RESULTS.A) CONTROL OF BLOOD PRESSURE:\nA marked reduction in blood pressure was seen in patients from both the \"complicated\" and \"uncomplicated\" groups (Table 2). The blood pressure was reduced by 19±10/20±14 mmHg in the \"uncomplicated\" group and by 18±14/22±17 mmHg (systolic/diastolic) in the \"complicated\" group (p>0.05; n = 354). The \"phone calls\" and \"no phone calls\" groups showed significantly lower blood pressures at the end of treatment (p< 0.00001). The \"traditional\" and \"current\" groups showed significantly lower blood pressures at the end of treatment (Figure 1). The percentage of patients with controlled blood pressure (<140/90 mmHg) was also high at the end of treatment (74%). On the other hand, the percentage of patients with blood pressure reduced to <120/80 mmHg was only 29%. There was no difference in the percentage of patients with controlled blood pressure between the \"phone calls\" and \"no phone calls\" groups or in the reduction of blood pressure and the percentage of patients with controlled blood pressure among the \"traditional\" and \"current\" treatments. However, patients in the \"phone calls\" group (80%) had better blood pressure control than those in the \"no phone calls\" group (71%), though the difference was not statistically significant. The \"uncomplicated\" and \"complicated\" groups had statistically significant differences in the percentage of patients with controlled blood pressure (<140/90 mmHg) (80% vs. 67%, respectively; p<0.000001). 
At the next to last study visit (visit 7), 90% and 66% of patients had blood pressure measurements <140/90 mmHg in the \"uncomplicated\" and \"complicated\" groups, respectively. However, there was no difference in the percentage of patients with blood pressure <120/80 mmHg (31% on the final visit in both groups, Table 3). Among patients with a blood pressure <140/90 mmHg at the final visit, only 3% had received only one type of antihypertensive medication; most patients (56%) received 2 (34%) or 3 (22%) types of antihypertensive medication (Table 4).\n\nBODY.RESULTS.B) TREATMENT DISCONTINUATION:\nA significantly lower number of patients in the \"phone calls\" group quit the treatment compared to the \"no phone calls\" group (4 vs. 30, respectively; Figure 2). There was no difference in the percentage discontinuation in the \"complicated\" and \"uncomplicated\" groups or in the \"current\" and \"traditional\" treatment regimens (p>0.05).\n\nBODY.RESULTS.C) TABLET COUNT:\nPatients in the groups: \"complicated\" and \"uncomplicated\"; \"traditional\" and \"current\"; \"phone calls\" and \"no phone calls\" did not show statistically significant differences in medication intake, as verified by counting the tablets returned at the visits. Compliance was over 85% at all visits. However, patients in the current + uncomplicated + phone call group had the highest rate of compliance (93%). The lowest rate of compliance occurred in the traditional + uncomplicated + no phone call group (85%) (p>0.05).\n\nBODY.RESULTS.D) BODY WEIGHT:\nBlood pressure reductions occurred in the presence of increased body weight (baseline visit: 73±14 kg; final visit: 74±14 kg, p = 0.0008, Table 5).\n\nBODY.RESULTS.E) ADVERSE EVENTS AND LABORATORY TESTS:\nMost of the adverse events that occurred during the study were of light or moderate intensity. Patients in the \"traditional\" group showed significantly more depression symptoms (5 vs. 0) and coughing (30 vs. 4) when compared with patients in the \"current\" treatment group. On the other hand, patients in the \"current\" group had a greater number of complaints of dizziness when compared to patients in the \"traditional\" treatment group (63 vs. 42, respectively). Compared to patients in the \"uncomplicated\" group, patients in the \"complicated\" group presented a larger number the following symptoms: abdominal pain (19 vs. 8), sleepiness (32 vs. 10), cough (23 vs. 11), weakness (23 vs. 11) and blurred vision (6 vs. 0). Conversely, compared to patients in the \"complicated\" group, patients in the \"uncomplicated\" group presented a larger number of complaints such as pain (74 vs. 53) and headache (80 vs. 59). Patients in the \"phone calls\" group reported a 34% rate of symptoms, while a 66% rate of symptoms was reported by patients in the \"no phone calls\" group. The most frequently (>10%) found adverse events were as follows: headache (62.1%), unspecific pain (58.2%), dizziness (45.8%), edema (40.1%), fatigue (15.3%), sleepiness (14.4%), cough (13.3%), precordial pain (13.3%), weakness (11.9%), tachycardia (11.3%), insomnia (10.5%) and paresthesia (10.5%). There were 26 (8%) severe adverse events: death (5), unstable angina (4), acute myocardial infarction (1), transient ischemic attack (1), hypertensive encephalopathy (1), breast cancer (3), bronchospastic crisis (1), cholecystopathy (1), diabetic decompensation (1), syncope (1), nephrectomy (2), trauma (2), peritoneal bypass (1), diarrhea (1) and gastrectomy (1). 
The results of the laboratory tests performed in the beginning and end of the study are shown in Table 6. There was an increase in the 24-hour urinary sodium excretion from the beginning to the end of the study (120±46 vs. 129±45 mEq/L, respectively; p = 0.000036)\n\nBODY.DISCUSSION:\nAdherence to a medication regimen is generally defined as the extent to which patients take medications as prescribed by their health care providers.20 This can be classified into three aspects of the individual's behavior regarding his/her health: 1) take the medication correctly, 2) follow the professionals' instructions related to diets and lifestyle changes and 3) attend medical visits.21 The discontinuation of medication is considered the most serious type of noncompliance with treatment. In our study, the \"phone calls\" group showed a significantly lower percentage of patients who discontinued treatment compared with the \"no phone calls\" group. A meta-analysis of randomized studies22 suggested that treatment discontinuation is effectively avoided by contacting the patient using letters, telephone calls or e-mails. Márques Contreras et al.6 evaluated the efficacy of telephone and e-mail intervention for therapeutic compliance among 538 patients with mild to moderate hypertension and verified that the group that received phone calls showed higher compliance rates (96.2%) than the group that was only contacted by e-mail (91.3%) or the group that did not receive any intervention (69.2%). Similar data were found in a controlled study recently conducted by Bosworth et al.7 in which the self-reported medication adherence was shown to increase by 9% in the group that had received behavioral and educational intervention (319 hypertensive subjects) through telephone calls versus 1% in the group that had not received intervention (n = 317). Friedman et al.23, through telephone calls associated with usual medical care for six months, compared hypertensive patients who received their usual medical treatment with those who used a monitoring and counseling system. Participants with compliance below 80% of the prescribed medication prior to enrollment in the study showed better compliance with the treatment when the telephone monitoring system was used than did patients who did not use the system (36% vs. 26%, respectively). On the other hand, participants with a compliance level above 80% prior to enrollment in the study did not show any change in compliance, and compliance was comparable in users and non-users of the telephone monitoring system. In our study, regardless of the group to which they were randomized, patients showed compliance rates above 85% at all visits. However, we do not have data regarding medication compliance before enrollment in the study. It is important to point out that treatment compliance is directly related to mortality. A study conducted by Wu et al.9 randomized 442 non-compliant patients who took five or more medications for chronic illnesses to either receive or not receive telephone counseling. The objective was to investigate the influence of this advice for treatment compliance and patient mortality. The study verified that telephone counseling was associated with a 41% reduction of death risk after two years. These authors used scores to classify the levels of compliance of 1011 patients, and they showed that the lower the level of compliance, the higher the risk of mortality in the long term. 
The influence of the initial selection of antihypertensive medication in regards to compliance with treatment was evaluated by Monane et al.24 The authors verified that the use of newer antihypertensive agents, such as angiotensin converting enzyme inhibitors and calcium channel antagonists, was associated with a compliance ≥80% when compared with thiazide diuretics in patients with cardiac morbidities and multiple visits to physicians. This may have been the result of better tolerability to newer classes of antihypertensive agents. However, in our study, there was no difference in tolerability between the \"traditional\" and \"current\" groups, which could explain the similar compliance rates in both groups. On the other hand, just like in Monane et al.24 we found that fewer patients in the \"complicated\" group had their blood pressure under control in the final visit (67%), whereas in the next to last study visit, 90% of patients in the \"uncomplicated\" group and only 66% in the \"complicated\" group had blood pressure levels below 140/90 mmHg. This may be due to comorbidities and the use of other concurrent treatments. The groups: \"complicated\" and \"uncomplicated\"; \"traditional\" and \"current\"; \"phone calls\" and \"no phone calls\"showed significant reductions in blood pressure from the randomization visit to the final visit, as a high percentage of patients with good blood pressure control (BP <140/90 mmHg) in the total group (74%), regardless of an increase in body weight throughout the study. In our study, the high percentage patients with controlled blood pressure at the end of treatment can be explained by the simple drug regimen, the simple drug acquisition provided by the physician at no cost at the end of each visit, a fixed team of physicians and nurses throughout the study period and the easy access to team members in cases of unexpected occurrences by means of unscheduled visits. We observed that in the next-to-last study visit, the \"uncomplicated\", \"current\", \"traditional\", \"phone calls\" and \"no phone calls\" groups had more patients with controlled blood pressure when compared to the last visit. It is possible that during the last visit the patients were worried about continuing the treatment and receiving the medications. Therefore, guidance provided to patients by means of active telephone calls, brochures and group workshops with healthcare professionals is an efficient strategy for reducing treatment discontinuation, the most severe type of noncompliance with treatment.\n\nBODY.DISCUSSION.PERSPECTIVES:\nIn an attempt to reduce non-adherence to antihypertensive treatment determinants in a developing country, we demonstrated that better blood pressure control was obtained through doctor's visits every two months with the same physician preceded by nursing staff visits with patients receiving all medication necessary for their treatment; this benefit was independent of comorbidities and the type of treatment used. 
Guidance provided to the patients through active telephone calls, brochures, group workshops with healthcare professionals and donation of all the required medications significantly reduced the treatment discontinuation rates.\n\nBODY.DISCUSSION.SOURCES OF FUNDING:\nThis study received support from \"Biosintetica Assistance\" and Biosintética Laboratory, which donated all of the required medication.\n\n**Question:** Compared to \"no phone calls\" group what was the result of phone calls\" group on discontinued treatment?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decreased
|
878
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: How quick is soon? Early response to continuous positive airway pressure: a randomized controlled trial\n\n ABSTRACT:\n\n\nBODY.INTRODUCTION:\nNumerous studies have confirmed that using non-invasive continuous positive airway pressure (nCPAP) for chronic obstructive pulmonary disease and congestive heart failure improves the respiratory rate, heart rate (HR), and work of breathing. We hypothesize that early application of nCPAP with concomitant medical therapy to patients with acute undifferentiated shortness of breath (SOB) will improve objective measures of respiratory distress. Specifically, early application of nCPAP can improve the tidal volume (TV), end-tidal carbon dioxide (EtCO2) and Rapid Shallow Breathing Index (RSBI), and reduce intubations over the standard of treatment alone in 15 minutes or less.\n\nBODY.METHODS:\nFifty-two patients were randomized equally to either CPAP + standard of care (nCPAP group) or to standard of care (standard group) for acute undifferentiated SOB. nCPAP was applied for 15 minutes. Subject enrollment was randomized and demographic data were recorded upon enrollment. Volumetric measures were obtained by breathing through the EtCO2/Flow sensor for 30 seconds at 5-minute intervals along with vital signs. Inclusion criteria were adults >18 years, with acute respiratory distress, admitted to the resuscitation room of the emergency department, respiratory rate > 25 bpm, SpO2 > 75%, Glasgow Coma Score > 8, HR > 60/min, and systolic blood pressure > 90 mmHg. Exclusion criteria were respiratory arrest and/or cardiac arrest, suspected pulmonary embolism, pneumothorax, myocardial infarction, temperature >38.5°C, or refusal to participate.\n\nBODY.RESULTS:\nAll tests were two-sided and assessed at the 0.05 type-I error rate. The gender distribution was equal for both groups. There was no difference in baseline characteristics except for age, HR and diastolic blood pressure (P < 0.05). Subjects in the nCPAP group had a greater improvement for various parameters compared with the standard group including TV (0.8 l, 0.3 l), EtCO2 (30 mmHg, 38 mmHg) and RSBI (39, 150), respectively. The nCPAP group also had a shorter hospital and ICU length of stay compared with the standard group (4 vs 5 days, and 2 vs 3 days, respectively). Finally, the rate of intubations was higher in the standard group (n = 8, n = 3) than the nCPAP group (P < 0.01).\n\nBODY.CONCLUSION:\nThe early application of nCPAP in patients with acute undifferentiated SOB improves their volumetric parameters and vital signs in as early as 5 minutes. 
This pilot study provides objective support for the notion that early application of nCPAP can lead to measurable improvement in TV, EtCO2, RSBI and reductions in intubations.\n\n**Question:** Compared to standard group what was the result of CPAP + standard of care (nCPAP group) on rate of intubations?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decreased
|
888
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: The Effects of Vitamin D Supplementation on Signaling Pathway of Inflammation and Oxidative Stress in Diabetic Hemodialysis: A Randomized, Double-Blind, Placebo-Controlled Trial\n\n ABSTRACT:\nObjective: This study was carried out to determine the effects of vitamin D supplementation on signaling pathway of inflammation and oxidative stress in diabetic hemodialysis (HD) patients. Methods: This randomized double-blind placebo-controlled clinical trial was conducted among 60 diabetic HD patients. Subjects were randomly allocated into two groups to intake either vitamin D supplements at a dosage of 50,000 IU (n = 30) or placebo (n = 30) every 2 weeks for 12 weeks. Gene expression of inflammatory cytokines and biomarkers of oxidative stress were assessed in peripheral blood mononuclear cells (PBMCs) of diabetic HD patients with RT-PCR method. Results: Results of RT-PCR indicated that after the 12-week intervention, compared to the placebo, vitamin D supplementation downregulated gene expression of interleukin (IL)-1β (P = 0.02), tumor necrosis factor alpha (TNF-α) (P = 0.02) and interferon gamma (IFN-γ) (P = 0.03) in PBMCs of diabetic HD patients. Additionally, vitamin D supplementation, compared to the placebo, downregulated gene expression of transforming growth factor beta (TGF-β) (P = 0.04), protein kinase C (PKC) (P = 0.001), and mitogen-activated protein kinases 1 (MAPK1) (P = 0.02) in PBMCs of diabetic HD patients. Although not significant, vitamin D supplementation let to a reduction of nuclear factor kappa B (NF-kB) (p = 0.75) expression in PBMCs isolated from diabetic patients compared to the placebo group. There was no statistically significant change following supplementation with vitamin D on gene expression of interleukin (IL)-4, IL-6, and vascular endothelial growth factor (VEGF) in PBMCs of diabetic HD patients. Conclusions: Overall, we found that vitamin D supplementation for 12 weeks among diabetic HD patients had beneficial effects on few gene expression related to inflammation and oxidative stress. Clinical trial registration: IRCT201701035623N101. Registered on January 8, 2017.\n\nBODY.INTRODUCTION:\nIncreased inflammatory cytokines as a necessary part of chronic kidney disease (CKD) has been identified and associated with cardiovascular disease (CVD), protein-energy malnutrition, all-cause mortality, and decreased glomerular filtration rate (GFR) (Stenvinkel et al., 1999; Akchurin and Kaskel, 2015). Increased biomarkers of inflammation, such as interleukin-1beta (IL-1β) and tumor necrosis factor alpha (TNF-α) were negatively linked to the measures of kidney action and directly with albuminuria (Gupta et al., 2012). 
Furthermore, various factors, including increased oxygen metabolism, uremic toxicity, increased inflammatory factors, lack of antioxidant vitamins and microelements, and dialysis procedure in patients with CKD would result in oxidative stress (Stepniewska et al., 2015). Prior studies have documented that hypovitaminosis D is very prevalent in individuals with CKD (de Boer, 2008; Cupisti et al., 2015). In a study by Bhan et al. (2010) hypovitaminosis D was reported 50–90% in dialysis subjects. On the other hand, beneficial effects of vitamin D on biochemical variables have reported in hemodialysis (HD) patients (Ibrahim et al., 2015), subjects with type 2 diabetes mellitus (T2DM) (Calvo-Romero and Ramiro-Lozano, 2016) and polycystic ovary syndrome (Maktabi et al., 2017). Few studies have evaluated the effects of vitamin D on gene expression related to inflammation and oxidative stress. Choi et al. (2013) shown significant reductions in gene expression of interleukin 6 (IL-6) and tumor necrosis factor alpha (TNF-α) in vitamin D-treated rats. In addition, in vitro studies have shown that the release of TNF-α can be inhibited by calcitriol in a dose-dependent fashion (Müller et al., 1992). Another study indicated that vitamin D through vitamin D receptor (VDR) suppressed TNF-α-induced nuclear factor kappa B (NF-κB) activation and IL-6 up-regulation in epithelial cell isolated from transgenic mice and colitis animal model (Liu et al., 2013). However, another study supported no significant changes in TNF-α and IL-6 levels in high-dose vitamin D administration in healthy overweight individuals following a 12-week progressive resistance exercise training program (Carrillo et al., 2012). The molecular mechanism of vitamin D in modulating inflammatory cytokines and oxidative stress are still unclear. Mitogen-activated protein kinases (MAPKs) participate in signaling mechanisms in responses to pro-inflammatory factors as well as NF-κB are thought to play an important function in the regulation of pro-inflammatory cytokines in cellular responses (Baldassare et al., 1999). To our knowledge, data on the effects of vitamin D supplementation on signaling pathway of inflammation and oxidative stress in diabetic HD patients are limited and controversial. The purpose of this study was, therefore, conducted to evaluate the effects of vitamin D supplementation on signaling pathway of inflammation and oxidative stress in diabetic HD patients.\n\nBODY.METHODS.TRIAL DESIGN AND PARTICIPANTS:\nThis randomized double-blind placebo-controlled clinical trial, registered in the Iranian website for registration of clinical trials as http://www.irct.ir:IRCT201701035623N101, was carried out among 60 diabetic HD subjects aged 18-80 years who were referred to the Akhavan Clinic in Kashan, Iran, from January 2017 to April 2017. This research was carried out in accordance with the Declaration of Helsinki and the Research Ethics Committee of Kashan University of Medical Sciences (KAUMS) approved the study protocol, and informed consent was taken from all subjects. The main exclusion criteria from the study were as follows: taking vitamin D, antioxidant and/or anti-inflammatory supplements such as vitamins E and C, omega-3 fatty acids, and taking immunosuppressive medications within 3 months prior to the enrollment in the study.\n\nBODY.METHODS.STUDY DESIGN:\nAt first, all patients were randomized into two groups to receive either 50,000 IU of vitamin D or placebo (n = 30 in each group) every 2 weeks for 12 weeks. 
Vitamin D supplements and placebos were produced by Zahravi Pharmaceutical Company, Tabriz, Iran, and Barij Essence Pharmaceutical Company, Kashan, Iran, respectively. In the current study, we used the above-mentioned dose of vitamin D based on the tolerable upper intake level (4,000 IU/day) of vitamin D (Ross et al., 2011), as well as dosage recommended in previous studies among HD patients (Armas et al., 2013; Mose et al., 2014). Compliance to the vitamin D consumption was evaluated through determination of serum 25 (OH) vitamin D values and asking subjects to return the medication containers. Both dietary 3-day food records [analyzed by nutritionist IV software (First Databank, San Bruno, CA)] and physical activity records were taken at baseline, weeks 3, 6, 9, and 12 of the intervention.\n\nBODY.METHODS.ASSESSMENT OF ANTHROPOMETRIC MEASURES:\nBody weight and height were quantified in a fasting status using a digital scale (Seca, Hamburg, Germany) at baseline and after the 12-week intervention. Body mass index (BMI) was calculated by weight and height measurements [weight (kg)/height (m2)].\n\nBODY.METHODS.ASSESSMENT OF OUTCOMES:\nPrimary outcomes were inflammatory cytokines expression and secondary outcomes were biomarkers of oxidative stress expression. We selected the following inflammatory cytokines and proteins [IL-1β, IL-4, IL-6,TNF-α, interferon gamma (IFN-γ), transforming growth factor beta (TGF-β), protein kinase C (PKC), MAPK1, vascular endothelial growth factor (VEGF) and NF-κB] as they play a very important role in signaling pathways of oxidative stress (Geraldes and King, 2010; Baker et al., 2011; Tanti et al., 2012; Chen et al., 2015).\n\nBODY.METHODS.ISOLATION OF PERIPHERAL BLOOD MONONUCLEAR CELLS (PBMCS):\nAt baseline and endpoint of the intervention, 15 mL samples of fasting blood were taken at reference laboratory of KAUMS, Kashan, Iran. PBMCs were isolated from blood by centrifugation through a Ficoll-Hypaque (Denholm and Wolber, 1991) followed by Percoll. In summary, 10 mL of blood with an equal volume of phosphate-buffered saline (PBS) was mixed (Dagur and McCoy, 2015). Layer diluted blood over Ficoll-Hypaque solution, using 3 parts diluted blood to 2 parts Ficoll-Hypaque and centrifuged 30 min at 500 × g, 25°C. Then, using a sterile Pasteur pipet, carefully collected PBMCs, and located at the interface between the plasma (upper layer) and Ficoll-Hypaque solution (Dagur and McCoy, 2015). Cells to a 50 mL centrifuge tube were transferred, then after adding 10 mL PBS, were centrifuged 10 min at 400 × g, 4°C. The supernatant was discarded and was resuspended cells in 4 mL PBS. Following previous step, 1.65 mL of 10× Hanks balanced salt solution with 10 mL Percoll was mixed. Eight mL Percoll solution with 4 mL mononuclear cells in a silanized 10 × 1.5 cm round-bottom polypropylene tube was mixed (Dagur and McCoy, 2015). Mixing thoroughly by inverting tube three or four times was done. Then, 25 min at 370 × g, 25°C was centrifuged. PBMCs were aspirated by gentle pipetting into a clean test tube. After centrifugation, monocytes appear as a cloudy layer in the top 5 mm of the gradient (Dagur and McCoy, 2015).\n\nBODY.METHODS.RNA EXTRACTION AND REAL-TIME PCR:\nTo RNA extraction, we used the RNX-plus kit (Cinnacolon, Tehran, Iran). RNA were extracted just after lymphocyte extraction from fresh 5 mL peripheral blood and less than 1 million cells was used for RNA extraction with the trizol reagent (Invitrogen, USA). 
Following extraction of the total RNAs from each sample, RNA quantification were performed by UV spectrophotometer. Each samples OD 260/280 ratio between 1.7 and 2.1 was intended showing no contamination with both protein and DNA (Dunkley et al., 2008). The isolated RNA was reverse transcribed to cDNA library using moloney murine leukemia virus reverse transcriptase. Gene expression of IL-1β, IL-4, IL-6,TNF-α, IFN-γ, TGF-β, PKC, MAPK1, VEGF, and NF-κB were evaluated by quantitative RT-PCR, using the LightCycler technology (Roche Diagnostics, Rotkreuz, Switzerland) with SYBR green detection and Amplicon Kit (Table 1). Glyceraldehyde-3-phosphate dehydrogenase primers were used as housekeeping gene. To design primers, Primer Express Software (Applied Biosystems, Foster City) and Beacon designer software (Takaposizt, Tehran, Iran) were used. Relative transcription levels were calculated by the method of Pffafi or 2−ΔΔCT. In the current study, we quantified gene expression levels related to inflammation and oxidative stress in PBMCs from diabetic HD patients. PBMCs from venous blood samples are the most available tissue for analysis of gene expression (Mizuarai et al., 2010). In addition, gene expression levels related to inflammation and oxidative stress in PBMCs are more accurate and may provide more valuable information than plasma concentrations (Mizuarai et al., 2010). Few studies have previously assessed gene expression levels of inflammation and oxidative stress in PBMCs from diabetic patients (Xavier et al., 2015; Mazloom et al., 2016). Table 1 Specific primers used for real-time quantitative PCR. Gene Primer Product size (bp) Annealing temperature (C) GAPDH F: GAGTCAACGGATTTGGTCGT 223 58 R: TTGATTTTGGAGGGATCTCG IL1-β F: GATGGCTTATTACAGTGGCAATG 137 59 R: AGTGGTGGTCGGAGATTCG IL-4 F: CTCACAGAGCAGAAGAACAC 221 60 R: TGGTTGGCTTCCTTCACAG IL-6 F: GTAGTGAGGAACAAGCCAGAG 286 60 R: TGACCAGAAGAAGGAATGCC TNF-α F: GAGCCAGCTCCCTCTATTTATG 187 60 R: CTACATGGGAACAGCCTATTGT IFN-γ F: GGTTCTCTTGGCTGTTACTG 250 59.5 R: TGTCTTCCTTGATGGTCTCC VEGF F: TGCAGATTATGCGGATCAAACC 150 58.5 R:TGCATTCACATTTGTTGTGCTGTAG NFK-β F: GCTGAGTCCTGCTCCTTC 202 58 R: GTCTATTTGCTGCCTTGTGG TGF-β F: ACTACTACGCCAAGGAGGTC 250 59 R: CGGTTGCTGAGGTATCGC PKC F: CGTCCTCATTGTCCTCGTAAG 249 59 R: TCATTCCTGCTGGTCAAATCC MAPK1 F: GGAACAGCACCTCCACTATTT 226 60 R: GCCACAATGTCTGCGTATCT GAPDH, glyceraldehyde-3-Phosphate dehydrogenase; IL-1β, interleukin-1β; IFN-γ, interferon gamma; MAPK1, mitogen-activated protein kinases 1; NF-κB, nuclear factor kappa B; PBMCs, peripheral blood mononuclear cells; PKC, protein kinase C; TNF-α, tumor necrosis factor alpha; TGF-β, transforming growth factor beta; VEGF, vascular endothelial growth factor . \n\nBODY.METHODS.RANDOMIZATION:\nRandomization assignment was carried out using computer-generated random numbers. Randomization and allocation concealment were conducted by the researchers and participants and were carried out by a trained staff at the clinic.\n\nBODY.METHODS.STATISTICAL METHODS:\nThe Kolmogorov-Smirnov test was applied to control the normal distribution of variables. Independent sample t-test was used to establish changes in anthropometric measures and dietary intakes between the two groups. To determine the effects of vitamin D administration on inflammation and oxidative stress expression, we used independent sample t-test. P < 0.05 were considered statistically significant. 
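The relative-quantification step mentioned above (the Pfaffl model, or the simpler 2^-ΔΔCT form when primer efficiencies are taken as 100%) comes down to a short calculation. The sketch below is a generic illustration with made-up Ct values rather than the study's analysis code; GAPDH plays the role of the housekeeping reference, as in Table 1.

```python
# Generic 2^(-ddCt) illustration with made-up Ct values; not data from this trial.

def fold_change_ddct(ct_target_treated, ct_ref_treated, ct_target_baseline, ct_ref_baseline):
    """Fold change of a target gene after treatment relative to baseline,
    normalized to a housekeeping gene and assuming ~100% primer efficiency."""
    d_ct_treated = ct_target_treated - ct_ref_treated      # e.g. TNF-alpha minus GAPDH, week 12
    d_ct_baseline = ct_target_baseline - ct_ref_baseline   # e.g. TNF-alpha minus GAPDH, baseline
    dd_ct = d_ct_treated - d_ct_baseline
    return 2 ** (-dd_ct)

# Hypothetical Ct values: 26.0 vs 20.0 after treatment, 25.0 vs 20.0 at baseline
print(fold_change_ddct(26.0, 20.0, 25.0, 20.0))  # 0.5, i.e. the target appears downregulated
```

The Pfaffl model generalizes this by raising each primer pair's measured amplification efficiency, rather than a fixed 2, to the corresponding ΔCt.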
All statistical analyses conducted using the Statistical Package for Social Science version 18 (SPSS Inc., Chicago, Illinois, USA).\n\nBODY.RESULTS:\nSixty diabetic HD patients [vitamin D and placebo (n = 30 each group)] completed the trial (Figure 1). On average, the rate of compliance in our study was high, such that more than 100% of capsules were taken throughout the study in both groups. No side effects were reported following supplementation of vitamin D in diabetic HD patients throughout the study. Potential side effects of vitamin D supplementation monitored by evaluating the serum levels of calcium every month, as well as, monitoring clinical symptoms among participants including, anorexia, nausea, and vomiting. Figure 1Summary of patient flow diagram. Distribution of gender, participants' mean age, height, baseline weight and BMI as well as their change, and years of dialysis of study participants were not statistically different between the two groups (Table 2). Table 2 General characteristics of study participants. Placebo group ( n = 30) Vitamin D group ( n = 30) P a GENDER (%) Male 20 (66.7) 19 (63.4) 0.78 † Female 10 (33.3) 11 (36.6) TYPE OF DIABETES (%) Type 1 2 (6.7) 2 (6.7) 1.00 † Type 2 28 (93.3) 28 (93.3) Smoking (%) 3 (10.0) 3 (10.0) 1.00 † Duration of DM (year) 18.9 ± 6.0 19.1 ± 5.4 0.87 Insulin therapy (%) 30 (100.0) 30 (100.0) 1.00 † Age (y) 60.7 ± 14.3 57.1 ± 13.4 0.31 Height (cm) 162.5 ± 8.0 163.5 ± 8.1 0.65 Weight at study baseline (kg) 71.2 ± 15.7 69.5 ± 11.8 0.65 Weight at end-of-trial (kg) 71.2 ± 15.7 70.1 ± 11.9 0.75 Weight change (kg) 0.008 ± 1.3 0.5 ± 1.3 0.12 BMI at study baseline (kg/m 2 ) 27.0 ± 6.0 26.1 ± 4.5 0.51 BMI at end-of-trial (kg/m 2 ) 27.0 ± 5.9 26.3 ± 4.5 0.61 BMI change (kg/m 2 ) 0.002 ± 0.5 0.2 ± 0.5 0.11 Years on dialysis 3.7 ± 1.0 3.9 ± 1.2 0.56 Cancer (%) 2 (6.7) 2 (6.7) 1.00 † CAD (%) 22 (73.3) 21 (70.0) 0.77 † CVD (%) 5 (16.7) 6 (20.0) 0.73 † Hypertension (%) 28 (93.3) 29 (96.7) 0.55 † Data are means ± SDs . a Obtained from independent t-test . † Obtained from Pearson Chi-square test . CAD, coronary artery disease; CVD, cerebrovascular disease; DM, diabetes mellitus . Considering the 3-day dietary records obtained during the intervention, there was no significant difference in terms of dietary macro- and micro-nutrient intakes between vitamin D and placebo groups (Data not shown). After 12 weeks of intervention, compared to the placebo, vitamin D supplementation significantly increased serum 25-OH-vitamin D (+11.1 ± 4.2 vs. +0.5 ± 3.9 ng/mL, P < 0.001). Results of RT-PCR indicated that compared to the placebo, vitamin D supplementation downregulated gene expression of IL-1β (P = 0.02), TNF-α (P = 0.02), and IFN-γ (P = 0.03) in PBMCs of diabetic HD patients (Figure 2). There was no statistically significant change following supplementation with vitamin D on gene expression of IL-4 and IL-6 in PBMCs of diabetic HD patients. Figure 2Effect of 12-week supplementation with vitamin D or placebo on expression ratio of TNF-α, IFN-γ, IL-1β, IL-4, and IL-6 genes in PBMCs of diabetic HD subjects. P-value was obtained from independent sample t-test. Data are means ± standard deviation. Additionally, vitamin D supplementation, compared to the placebo, downregulated gene expression of TGF-β (P = 0.04), PKC (P = 0.001), and MAPK1 (P = 0.02) in PBMCs of diabetic HD patients (Figure 3). 
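As a quick plausibility check of the between-group comparison reported above for the change in serum 25-OH-vitamin D (+11.1 ± 4.2 vs. +0.5 ± 3.9 ng/mL, n = 30 per group), the independent-sample t-test can be recomputed from the summary statistics alone. This is only a sketch; it assumes the reported values are means ± SDs and that equal variances are acceptable.

```python
# Recompute the two-sample t-test for the change in serum 25(OH)D from summary statistics.
# Assumes the reported values are means ± SDs with n = 30 per group, as stated in the text.
from scipy.stats import ttest_ind_from_stats

t, p = ttest_ind_from_stats(mean1=11.1, std1=4.2, nobs1=30,
                            mean2=0.5,  std2=3.9, nobs2=30)
print(f"t = {t:.1f}, p = {p:.1e}")  # p is far below 0.001, consistent with the reported P < 0.001
```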
Although not significant, vitamin D supplementation let to a reduction of NF-kB (p = 0.75) expression in PBMCs isolated from diabetic patients compared to the placebo group. We did not observe any significant change following supplementation with vitamin D on gene expression of VEGF in PBMCs of diabetic HD patients. Figure 3Effect of 12-week supplementation with vitamin D or placebo on expression ratio of PKC, MAPK1, TGF-β, NF-κB, and VEGF genes in PBMCs of diabetic HD subjects. P-value was obtained from independent sample t-test. Data are means ± standard deviation.\n\nBODY.DISCUSSION:\nWe found that vitamin D supplementation for 12 weeks among diabetic HD patients had beneficial effects on few gene expression related to inflammation and oxidative stress. To our knowledge, this is the first report of the effects of vitamin D supplementation on gene expression related to inflammation and oxidative stress in diabetic HD patients. HD subjects are susceptible to several metabolic complications, including increased inflammation and oxidative stress (Kotur-Stevuljevic et al., 2012). Our data demonstrated that vitamin D supplementation for 12 weeks to diabetic HD subjects downregulated gene expression of IL-1β, TNF-α, and IFN-γ compared to the placebo, but did not affect gene expression of IL-4 and IL-6. A study in healthy endurance-trained runners has demonstrated the inverse relationship between vitamin D and TNF-α levels (Willis et al., 2012). Furthermore, in line with our findings, 50 mug/day of vitamin D supplementation for 9 months was able to suppress the production of TNF-α in subjects with congestive heart failure (Schleithoff et al., 2006). 1,25-(OH)2D3 dose-dependently also suppressed the production of IL-α, IL-6, and TNF-α by Escherichia coli lipopolysaccharide-stimulated monocytes (Müller et al., 1992). However, one study has documented no significant changes in TNF-α and IL-6 concentrations in high-dose vitamin D supplementation-healthy overweight populations after 12 weeks (Carrillo et al., 2012). Chronic inflammation in CKD is not only associated with macro- and micro-cardiovascular outcomes, such as atherosclerosis, but is also one of the main players in the progress of malnutrition/protein-energy wasting, which in turn resulted in the description of the malnutrition-inflammation-cachexia syndrome in CKD/ESRD patients (Kalantar-Zadeh, 2005). Hypoalbuminemia result from chronic inflammation and malnutrition is strongly related to mortality in dialysis populations (Kalantar-Zadeh et al., 2005; de Mutsert et al., 2009). Pro-inflammatory factors may directly lead to anorexia via the effect on the brain. Furthermore, inflammatory cytokines, particularly IL-6, may be related to depression in CKD/ESRD patients, which by itself is a predictor of morbidity and mortality (Taraz et al., 2015) and may result in decreasing nutrient intake. Therefore, vitamin D due to its useful effects on inflammatory cytokines may be benefit to decrease clinical and metabolic symptoms in diabetic HD patients. This study demonstrated that vitamin D supplementation for 12 weeks downregulated gene expression of TGF-β, PKC, and MAPK1 in PBMCs of diabetic HD patients, but did not influence gene expression of VEGF and NF-Kβ. Supporting our study, vitamin D administration at a dosage of 50,000 IU/week for 8 weeks in VD-deficient subjects with PCOS significantly decreased the bioavailability of TGF-β1 (Irani et al., 2015). 
In addition, 1,25-(OH)2D3 decreased gene expression of MAPK phosphorylation in macrophages (Xu et al., 2016). In another study, vitamin D protected human endothelial cells from ionizing radiation induced/oxidative stress by modulating the MAPKs/SirT1 axis (Marampon et al., 2016). Doxercalciferol also significantly decreased PKCα levels suggesting that PKC-mediated cardiac hypertrophy may be related to vitamin D deficiency (Choi et al., 2011). Excessive evidence exists suggesting that MAPK signaling pathways are critical for the synthesis and amplification of inflammatory factors (Carter et al., 1999). Especially, MAPK activation was closely associated with cytokine transcription and translation, which in turn make the target of anti-inflammatory treatment (Kaminska, 2005). However, calcitriol treatment had no significant effect on gene expression of TGF-β in microcardiovascular endothelial cells (Gonzalez-Curiel et al., 2016). In addition, both the low (1 nM) and the high (100 nM) doses of vitamin D3 used in vitro for 48 h did not able to restore the decreased gene expression of PKC isoenzymes in the T cells of systemic lupus erythematosus subjects (Czifra et al., 2014). Oxidative stress in patients with CKD may generate by uremic toxicity, persistent inflammatory state, deficiency of vitamins and microelements, and dialysis procedure itself (Stepniewska et al., 2015). Increased biomarkers of oxidative stress and free radicals in HD subjects are correlated with increased CVD risk and a large number of diseases related to uremia (Taccone-Gallucci et al., 2010).\n\nBODY.CONCLUSIONS:\nOverall, we found that vitamin D supplementation for 12 weeks among diabetic HD patients had beneficial effects on some gene expression related to inflammation and oxidative stress. This suggests that vitamin D supplementation may confer advantageous therapeutic potential for HD patients. Further research is needed in other participants and for longer periods to determine the efficacy of vitamin D supplementation.\n\nBODY.LIMITATIONS:\nThe limitations of our findings include short duration. Long-term interventions might result in better effects in other gene expression related to inflammation and oxidative stress. In addition, due to funding limitations in developing countries, we did not evaluate some gene expression related to insulin and lipid, and reactive oxygen species levels. Further investigations are required to evaluate this outcome. We did not know baseline levels of vitamin D among our participants, therefore, we were not able to comment on their commended dosage of vitamin D by the Kidney Disease Outcomes Quality Initiative (K/DOQI) guidelines. 
This limitation should be considered in the interpretation of our findings.\n\nBODY.AVAILABILITY OF DATA AND MATERIALS:\nThe primary data for this study is available from the authors on direct request.\n\nBODY.AUTHOR CONTRIBUTIONS:\nZA contributed in conception, data collection and manuscript drafting; HH, ES, HN, AS, MS, FK, and MT contributed in conception, data collection and manuscript drafting; All authors read and approved the final version of the paper.\n\nBODY.AUTHOR CONTRIBUTIONS.CONFLICT OF INTEREST STATEMENT:\nThe authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.\n\n**Question:** Compared to placebo (n = 30) every 2 weeks for 12 weeks what was the result of vitamin D supplements at a dosage of 50,000 IU (n = 30) every 2 weeks for 12 weeks on gene expression of transforming growth factor beta (TGF-β), protein kinase C (PKC), mitogen-activated protein kinases 1 (MAPK1)?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decreased
|
889
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: The Effects of Vitamin D Supplementation on Signaling Pathway of Inflammation and Oxidative Stress in Diabetic Hemodialysis: A Randomized, Double-Blind, Placebo-Controlled Trial\n\n ABSTRACT:\nObjective: This study was carried out to determine the effects of vitamin D supplementation on signaling pathway of inflammation and oxidative stress in diabetic hemodialysis (HD) patients. Methods: This randomized double-blind placebo-controlled clinical trial was conducted among 60 diabetic HD patients. Subjects were randomly allocated into two groups to intake either vitamin D supplements at a dosage of 50,000 IU (n = 30) or placebo (n = 30) every 2 weeks for 12 weeks. Gene expression of inflammatory cytokines and biomarkers of oxidative stress were assessed in peripheral blood mononuclear cells (PBMCs) of diabetic HD patients with RT-PCR method. Results: Results of RT-PCR indicated that after the 12-week intervention, compared to the placebo, vitamin D supplementation downregulated gene expression of interleukin (IL)-1β (P = 0.02), tumor necrosis factor alpha (TNF-α) (P = 0.02) and interferon gamma (IFN-γ) (P = 0.03) in PBMCs of diabetic HD patients. Additionally, vitamin D supplementation, compared to the placebo, downregulated gene expression of transforming growth factor beta (TGF-β) (P = 0.04), protein kinase C (PKC) (P = 0.001), and mitogen-activated protein kinases 1 (MAPK1) (P = 0.02) in PBMCs of diabetic HD patients. Although not significant, vitamin D supplementation let to a reduction of nuclear factor kappa B (NF-kB) (p = 0.75) expression in PBMCs isolated from diabetic patients compared to the placebo group. There was no statistically significant change following supplementation with vitamin D on gene expression of interleukin (IL)-4, IL-6, and vascular endothelial growth factor (VEGF) in PBMCs of diabetic HD patients. Conclusions: Overall, we found that vitamin D supplementation for 12 weeks among diabetic HD patients had beneficial effects on few gene expression related to inflammation and oxidative stress. Clinical trial registration: IRCT201701035623N101. Registered on January 8, 2017.\n\nBODY.INTRODUCTION:\nIncreased inflammatory cytokines as a necessary part of chronic kidney disease (CKD) has been identified and associated with cardiovascular disease (CVD), protein-energy malnutrition, all-cause mortality, and decreased glomerular filtration rate (GFR) (Stenvinkel et al., 1999; Akchurin and Kaskel, 2015). Increased biomarkers of inflammation, such as interleukin-1beta (IL-1β) and tumor necrosis factor alpha (TNF-α) were negatively linked to the measures of kidney action and directly with albuminuria (Gupta et al., 2012). 
Furthermore, various factors, including increased oxygen metabolism, uremic toxicity, increased inflammatory factors, lack of antioxidant vitamins and microelements, and dialysis procedure in patients with CKD would result in oxidative stress (Stepniewska et al., 2015). Prior studies have documented that hypovitaminosis D is very prevalent in individuals with CKD (de Boer, 2008; Cupisti et al., 2015). In a study by Bhan et al. (2010) hypovitaminosis D was reported 50–90% in dialysis subjects. On the other hand, beneficial effects of vitamin D on biochemical variables have reported in hemodialysis (HD) patients (Ibrahim et al., 2015), subjects with type 2 diabetes mellitus (T2DM) (Calvo-Romero and Ramiro-Lozano, 2016) and polycystic ovary syndrome (Maktabi et al., 2017). Few studies have evaluated the effects of vitamin D on gene expression related to inflammation and oxidative stress. Choi et al. (2013) shown significant reductions in gene expression of interleukin 6 (IL-6) and tumor necrosis factor alpha (TNF-α) in vitamin D-treated rats. In addition, in vitro studies have shown that the release of TNF-α can be inhibited by calcitriol in a dose-dependent fashion (Müller et al., 1992). Another study indicated that vitamin D through vitamin D receptor (VDR) suppressed TNF-α-induced nuclear factor kappa B (NF-κB) activation and IL-6 up-regulation in epithelial cell isolated from transgenic mice and colitis animal model (Liu et al., 2013). However, another study supported no significant changes in TNF-α and IL-6 levels in high-dose vitamin D administration in healthy overweight individuals following a 12-week progressive resistance exercise training program (Carrillo et al., 2012). The molecular mechanism of vitamin D in modulating inflammatory cytokines and oxidative stress are still unclear. Mitogen-activated protein kinases (MAPKs) participate in signaling mechanisms in responses to pro-inflammatory factors as well as NF-κB are thought to play an important function in the regulation of pro-inflammatory cytokines in cellular responses (Baldassare et al., 1999). To our knowledge, data on the effects of vitamin D supplementation on signaling pathway of inflammation and oxidative stress in diabetic HD patients are limited and controversial. The purpose of this study was, therefore, conducted to evaluate the effects of vitamin D supplementation on signaling pathway of inflammation and oxidative stress in diabetic HD patients.\n\nBODY.METHODS.TRIAL DESIGN AND PARTICIPANTS:\nThis randomized double-blind placebo-controlled clinical trial, registered in the Iranian website for registration of clinical trials as http://www.irct.ir:IRCT201701035623N101, was carried out among 60 diabetic HD subjects aged 18-80 years who were referred to the Akhavan Clinic in Kashan, Iran, from January 2017 to April 2017. This research was carried out in accordance with the Declaration of Helsinki and the Research Ethics Committee of Kashan University of Medical Sciences (KAUMS) approved the study protocol, and informed consent was taken from all subjects. The main exclusion criteria from the study were as follows: taking vitamin D, antioxidant and/or anti-inflammatory supplements such as vitamins E and C, omega-3 fatty acids, and taking immunosuppressive medications within 3 months prior to the enrollment in the study.\n\nBODY.METHODS.STUDY DESIGN:\nAt first, all patients were randomized into two groups to receive either 50,000 IU of vitamin D or placebo (n = 30 in each group) every 2 weeks for 12 weeks. 
Vitamin D supplements and placebos were produced by Zahravi Pharmaceutical Company, Tabriz, Iran, and Barij Essence Pharmaceutical Company, Kashan, Iran, respectively. In the current study, we used the above-mentioned dose of vitamin D based on the tolerable upper intake level (4,000 IU/day) of vitamin D (Ross et al., 2011), as well as dosage recommended in previous studies among HD patients (Armas et al., 2013; Mose et al., 2014). Compliance to the vitamin D consumption was evaluated through determination of serum 25 (OH) vitamin D values and asking subjects to return the medication containers. Both dietary 3-day food records [analyzed by nutritionist IV software (First Databank, San Bruno, CA)] and physical activity records were taken at baseline, weeks 3, 6, 9, and 12 of the intervention.\n\nBODY.METHODS.ASSESSMENT OF ANTHROPOMETRIC MEASURES:\nBody weight and height were quantified in a fasting status using a digital scale (Seca, Hamburg, Germany) at baseline and after the 12-week intervention. Body mass index (BMI) was calculated by weight and height measurements [weight (kg)/height (m2)].\n\nBODY.METHODS.ASSESSMENT OF OUTCOMES:\nPrimary outcomes were inflammatory cytokines expression and secondary outcomes were biomarkers of oxidative stress expression. We selected the following inflammatory cytokines and proteins [IL-1β, IL-4, IL-6,TNF-α, interferon gamma (IFN-γ), transforming growth factor beta (TGF-β), protein kinase C (PKC), MAPK1, vascular endothelial growth factor (VEGF) and NF-κB] as they play a very important role in signaling pathways of oxidative stress (Geraldes and King, 2010; Baker et al., 2011; Tanti et al., 2012; Chen et al., 2015).\n\nBODY.METHODS.ISOLATION OF PERIPHERAL BLOOD MONONUCLEAR CELLS (PBMCS):\nAt baseline and endpoint of the intervention, 15 mL samples of fasting blood were taken at reference laboratory of KAUMS, Kashan, Iran. PBMCs were isolated from blood by centrifugation through a Ficoll-Hypaque (Denholm and Wolber, 1991) followed by Percoll. In summary, 10 mL of blood with an equal volume of phosphate-buffered saline (PBS) was mixed (Dagur and McCoy, 2015). Layer diluted blood over Ficoll-Hypaque solution, using 3 parts diluted blood to 2 parts Ficoll-Hypaque and centrifuged 30 min at 500 × g, 25°C. Then, using a sterile Pasteur pipet, carefully collected PBMCs, and located at the interface between the plasma (upper layer) and Ficoll-Hypaque solution (Dagur and McCoy, 2015). Cells to a 50 mL centrifuge tube were transferred, then after adding 10 mL PBS, were centrifuged 10 min at 400 × g, 4°C. The supernatant was discarded and was resuspended cells in 4 mL PBS. Following previous step, 1.65 mL of 10× Hanks balanced salt solution with 10 mL Percoll was mixed. Eight mL Percoll solution with 4 mL mononuclear cells in a silanized 10 × 1.5 cm round-bottom polypropylene tube was mixed (Dagur and McCoy, 2015). Mixing thoroughly by inverting tube three or four times was done. Then, 25 min at 370 × g, 25°C was centrifuged. PBMCs were aspirated by gentle pipetting into a clean test tube. After centrifugation, monocytes appear as a cloudy layer in the top 5 mm of the gradient (Dagur and McCoy, 2015).\n\nBODY.METHODS.RNA EXTRACTION AND REAL-TIME PCR:\nTo RNA extraction, we used the RNX-plus kit (Cinnacolon, Tehran, Iran). RNA were extracted just after lymphocyte extraction from fresh 5 mL peripheral blood and less than 1 million cells was used for RNA extraction with the trizol reagent (Invitrogen, USA). 
Following extraction of the total RNAs from each sample, RNA quantification were performed by UV spectrophotometer. Each samples OD 260/280 ratio between 1.7 and 2.1 was intended showing no contamination with both protein and DNA (Dunkley et al., 2008). The isolated RNA was reverse transcribed to cDNA library using moloney murine leukemia virus reverse transcriptase. Gene expression of IL-1β, IL-4, IL-6,TNF-α, IFN-γ, TGF-β, PKC, MAPK1, VEGF, and NF-κB were evaluated by quantitative RT-PCR, using the LightCycler technology (Roche Diagnostics, Rotkreuz, Switzerland) with SYBR green detection and Amplicon Kit (Table 1). Glyceraldehyde-3-phosphate dehydrogenase primers were used as housekeeping gene. To design primers, Primer Express Software (Applied Biosystems, Foster City) and Beacon designer software (Takaposizt, Tehran, Iran) were used. Relative transcription levels were calculated by the method of Pffafi or 2−ΔΔCT. In the current study, we quantified gene expression levels related to inflammation and oxidative stress in PBMCs from diabetic HD patients. PBMCs from venous blood samples are the most available tissue for analysis of gene expression (Mizuarai et al., 2010). In addition, gene expression levels related to inflammation and oxidative stress in PBMCs are more accurate and may provide more valuable information than plasma concentrations (Mizuarai et al., 2010). Few studies have previously assessed gene expression levels of inflammation and oxidative stress in PBMCs from diabetic patients (Xavier et al., 2015; Mazloom et al., 2016). Table 1 Specific primers used for real-time quantitative PCR. Gene Primer Product size (bp) Annealing temperature (C) GAPDH F: GAGTCAACGGATTTGGTCGT 223 58 R: TTGATTTTGGAGGGATCTCG IL1-β F: GATGGCTTATTACAGTGGCAATG 137 59 R: AGTGGTGGTCGGAGATTCG IL-4 F: CTCACAGAGCAGAAGAACAC 221 60 R: TGGTTGGCTTCCTTCACAG IL-6 F: GTAGTGAGGAACAAGCCAGAG 286 60 R: TGACCAGAAGAAGGAATGCC TNF-α F: GAGCCAGCTCCCTCTATTTATG 187 60 R: CTACATGGGAACAGCCTATTGT IFN-γ F: GGTTCTCTTGGCTGTTACTG 250 59.5 R: TGTCTTCCTTGATGGTCTCC VEGF F: TGCAGATTATGCGGATCAAACC 150 58.5 R:TGCATTCACATTTGTTGTGCTGTAG NFK-β F: GCTGAGTCCTGCTCCTTC 202 58 R: GTCTATTTGCTGCCTTGTGG TGF-β F: ACTACTACGCCAAGGAGGTC 250 59 R: CGGTTGCTGAGGTATCGC PKC F: CGTCCTCATTGTCCTCGTAAG 249 59 R: TCATTCCTGCTGGTCAAATCC MAPK1 F: GGAACAGCACCTCCACTATTT 226 60 R: GCCACAATGTCTGCGTATCT GAPDH, glyceraldehyde-3-Phosphate dehydrogenase; IL-1β, interleukin-1β; IFN-γ, interferon gamma; MAPK1, mitogen-activated protein kinases 1; NF-κB, nuclear factor kappa B; PBMCs, peripheral blood mononuclear cells; PKC, protein kinase C; TNF-α, tumor necrosis factor alpha; TGF-β, transforming growth factor beta; VEGF, vascular endothelial growth factor . \n\nBODY.METHODS.RANDOMIZATION:\nRandomization assignment was carried out using computer-generated random numbers. Randomization and allocation concealment were conducted by the researchers and participants and were carried out by a trained staff at the clinic.\n\nBODY.METHODS.STATISTICAL METHODS:\nThe Kolmogorov-Smirnov test was applied to control the normal distribution of variables. Independent sample t-test was used to establish changes in anthropometric measures and dietary intakes between the two groups. To determine the effects of vitamin D administration on inflammation and oxidative stress expression, we used independent sample t-test. P < 0.05 were considered statistically significant. 
All statistical analyses conducted using the Statistical Package for Social Science version 18 (SPSS Inc., Chicago, Illinois, USA).\n\nBODY.RESULTS:\nSixty diabetic HD patients [vitamin D and placebo (n = 30 each group)] completed the trial (Figure 1). On average, the rate of compliance in our study was high, such that more than 100% of capsules were taken throughout the study in both groups. No side effects were reported following supplementation of vitamin D in diabetic HD patients throughout the study. Potential side effects of vitamin D supplementation monitored by evaluating the serum levels of calcium every month, as well as, monitoring clinical symptoms among participants including, anorexia, nausea, and vomiting. Figure 1Summary of patient flow diagram. Distribution of gender, participants' mean age, height, baseline weight and BMI as well as their change, and years of dialysis of study participants were not statistically different between the two groups (Table 2). Table 2 General characteristics of study participants. Placebo group ( n = 30) Vitamin D group ( n = 30) P a GENDER (%) Male 20 (66.7) 19 (63.4) 0.78 † Female 10 (33.3) 11 (36.6) TYPE OF DIABETES (%) Type 1 2 (6.7) 2 (6.7) 1.00 † Type 2 28 (93.3) 28 (93.3) Smoking (%) 3 (10.0) 3 (10.0) 1.00 † Duration of DM (year) 18.9 ± 6.0 19.1 ± 5.4 0.87 Insulin therapy (%) 30 (100.0) 30 (100.0) 1.00 † Age (y) 60.7 ± 14.3 57.1 ± 13.4 0.31 Height (cm) 162.5 ± 8.0 163.5 ± 8.1 0.65 Weight at study baseline (kg) 71.2 ± 15.7 69.5 ± 11.8 0.65 Weight at end-of-trial (kg) 71.2 ± 15.7 70.1 ± 11.9 0.75 Weight change (kg) 0.008 ± 1.3 0.5 ± 1.3 0.12 BMI at study baseline (kg/m 2 ) 27.0 ± 6.0 26.1 ± 4.5 0.51 BMI at end-of-trial (kg/m 2 ) 27.0 ± 5.9 26.3 ± 4.5 0.61 BMI change (kg/m 2 ) 0.002 ± 0.5 0.2 ± 0.5 0.11 Years on dialysis 3.7 ± 1.0 3.9 ± 1.2 0.56 Cancer (%) 2 (6.7) 2 (6.7) 1.00 † CAD (%) 22 (73.3) 21 (70.0) 0.77 † CVD (%) 5 (16.7) 6 (20.0) 0.73 † Hypertension (%) 28 (93.3) 29 (96.7) 0.55 † Data are means ± SDs . a Obtained from independent t-test . † Obtained from Pearson Chi-square test . CAD, coronary artery disease; CVD, cerebrovascular disease; DM, diabetes mellitus . Considering the 3-day dietary records obtained during the intervention, there was no significant difference in terms of dietary macro- and micro-nutrient intakes between vitamin D and placebo groups (Data not shown). After 12 weeks of intervention, compared to the placebo, vitamin D supplementation significantly increased serum 25-OH-vitamin D (+11.1 ± 4.2 vs. +0.5 ± 3.9 ng/mL, P < 0.001). Results of RT-PCR indicated that compared to the placebo, vitamin D supplementation downregulated gene expression of IL-1β (P = 0.02), TNF-α (P = 0.02), and IFN-γ (P = 0.03) in PBMCs of diabetic HD patients (Figure 2). There was no statistically significant change following supplementation with vitamin D on gene expression of IL-4 and IL-6 in PBMCs of diabetic HD patients. Figure 2Effect of 12-week supplementation with vitamin D or placebo on expression ratio of TNF-α, IFN-γ, IL-1β, IL-4, and IL-6 genes in PBMCs of diabetic HD subjects. P-value was obtained from independent sample t-test. Data are means ± standard deviation. Additionally, vitamin D supplementation, compared to the placebo, downregulated gene expression of TGF-β (P = 0.04), PKC (P = 0.001), and MAPK1 (P = 0.02) in PBMCs of diabetic HD patients (Figure 3). 
Although not significant, vitamin D supplementation let to a reduction of NF-kB (p = 0.75) expression in PBMCs isolated from diabetic patients compared to the placebo group. We did not observe any significant change following supplementation with vitamin D on gene expression of VEGF in PBMCs of diabetic HD patients. Figure 3Effect of 12-week supplementation with vitamin D or placebo on expression ratio of PKC, MAPK1, TGF-β, NF-κB, and VEGF genes in PBMCs of diabetic HD subjects. P-value was obtained from independent sample t-test. Data are means ± standard deviation.\n\nBODY.DISCUSSION:\nWe found that vitamin D supplementation for 12 weeks among diabetic HD patients had beneficial effects on few gene expression related to inflammation and oxidative stress. To our knowledge, this is the first report of the effects of vitamin D supplementation on gene expression related to inflammation and oxidative stress in diabetic HD patients. HD subjects are susceptible to several metabolic complications, including increased inflammation and oxidative stress (Kotur-Stevuljevic et al., 2012). Our data demonstrated that vitamin D supplementation for 12 weeks to diabetic HD subjects downregulated gene expression of IL-1β, TNF-α, and IFN-γ compared to the placebo, but did not affect gene expression of IL-4 and IL-6. A study in healthy endurance-trained runners has demonstrated the inverse relationship between vitamin D and TNF-α levels (Willis et al., 2012). Furthermore, in line with our findings, 50 mug/day of vitamin D supplementation for 9 months was able to suppress the production of TNF-α in subjects with congestive heart failure (Schleithoff et al., 2006). 1,25-(OH)2D3 dose-dependently also suppressed the production of IL-α, IL-6, and TNF-α by Escherichia coli lipopolysaccharide-stimulated monocytes (Müller et al., 1992). However, one study has documented no significant changes in TNF-α and IL-6 concentrations in high-dose vitamin D supplementation-healthy overweight populations after 12 weeks (Carrillo et al., 2012). Chronic inflammation in CKD is not only associated with macro- and micro-cardiovascular outcomes, such as atherosclerosis, but is also one of the main players in the progress of malnutrition/protein-energy wasting, which in turn resulted in the description of the malnutrition-inflammation-cachexia syndrome in CKD/ESRD patients (Kalantar-Zadeh, 2005). Hypoalbuminemia result from chronic inflammation and malnutrition is strongly related to mortality in dialysis populations (Kalantar-Zadeh et al., 2005; de Mutsert et al., 2009). Pro-inflammatory factors may directly lead to anorexia via the effect on the brain. Furthermore, inflammatory cytokines, particularly IL-6, may be related to depression in CKD/ESRD patients, which by itself is a predictor of morbidity and mortality (Taraz et al., 2015) and may result in decreasing nutrient intake. Therefore, vitamin D due to its useful effects on inflammatory cytokines may be benefit to decrease clinical and metabolic symptoms in diabetic HD patients. This study demonstrated that vitamin D supplementation for 12 weeks downregulated gene expression of TGF-β, PKC, and MAPK1 in PBMCs of diabetic HD patients, but did not influence gene expression of VEGF and NF-Kβ. Supporting our study, vitamin D administration at a dosage of 50,000 IU/week for 8 weeks in VD-deficient subjects with PCOS significantly decreased the bioavailability of TGF-β1 (Irani et al., 2015). 
In addition, 1,25-(OH)2D3 decreased gene expression of MAPK phosphorylation in macrophages (Xu et al., 2016). In another study, vitamin D protected human endothelial cells from ionizing radiation induced/oxidative stress by modulating the MAPKs/SirT1 axis (Marampon et al., 2016). Doxercalciferol also significantly decreased PKCα levels suggesting that PKC-mediated cardiac hypertrophy may be related to vitamin D deficiency (Choi et al., 2011). Excessive evidence exists suggesting that MAPK signaling pathways are critical for the synthesis and amplification of inflammatory factors (Carter et al., 1999). Especially, MAPK activation was closely associated with cytokine transcription and translation, which in turn make the target of anti-inflammatory treatment (Kaminska, 2005). However, calcitriol treatment had no significant effect on gene expression of TGF-β in microcardiovascular endothelial cells (Gonzalez-Curiel et al., 2016). In addition, both the low (1 nM) and the high (100 nM) doses of vitamin D3 used in vitro for 48 h did not able to restore the decreased gene expression of PKC isoenzymes in the T cells of systemic lupus erythematosus subjects (Czifra et al., 2014). Oxidative stress in patients with CKD may generate by uremic toxicity, persistent inflammatory state, deficiency of vitamins and microelements, and dialysis procedure itself (Stepniewska et al., 2015). Increased biomarkers of oxidative stress and free radicals in HD subjects are correlated with increased CVD risk and a large number of diseases related to uremia (Taccone-Gallucci et al., 2010).\n\nBODY.CONCLUSIONS:\nOverall, we found that vitamin D supplementation for 12 weeks among diabetic HD patients had beneficial effects on some gene expression related to inflammation and oxidative stress. This suggests that vitamin D supplementation may confer advantageous therapeutic potential for HD patients. Further research is needed in other participants and for longer periods to determine the efficacy of vitamin D supplementation.\n\nBODY.LIMITATIONS:\nThe limitations of our findings include short duration. Long-term interventions might result in better effects in other gene expression related to inflammation and oxidative stress. In addition, due to funding limitations in developing countries, we did not evaluate some gene expression related to insulin and lipid, and reactive oxygen species levels. Further investigations are required to evaluate this outcome. We did not know baseline levels of vitamin D among our participants, therefore, we were not able to comment on their commended dosage of vitamin D by the Kidney Disease Outcomes Quality Initiative (K/DOQI) guidelines. 
This limitation should be considered in the interpretation of our findings.\n\nBODY.AVAILABILITY OF DATA AND MATERIALS:\nThe primary data for this study is available from the authors on direct request.\n\nBODY.AUTHOR CONTRIBUTIONS:\nZA contributed in conception, data collection and manuscript drafting; HH, ES, HN, AS, MS, FK, and MT contributed in conception, data collection and manuscript drafting; All authors read and approved the final version of the paper.\n\nBODY.AUTHOR CONTRIBUTIONS.CONFLICT OF INTEREST STATEMENT:\nThe authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.\n\n**Question:** Compared to placebo (n = 30) every 2 weeks for 12 weeks what was the result of vitamin D supplements at a dosage of 50,000 IU (n = 30) every 2 weeks for 12 weeks on nuclear factor kappa B (NF-kB), interleukin (IL)-4, IL-6, and vascular endothelial growth factor (VEGF) ?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
895
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Belatacept-Based Immunosuppression in \n\n ABSTRACT:\nThis exploratory phase II study evaluated the safety and efficacy of belatacept in de novo adult liver transplant recipients. Patients were randomized (N = 260) to one of the following immunosuppressive regimens: (i) basiliximab + belatacept high dose [HD] + mycophenolate mofetil (MMF), (ii) belatacept HD + MMF, (iii) belatacept low dose [LD] + MMF, (iv) tacrolimus + MMF, or (v) tacrolimus alone. All received corticosteroids. Demographic characteristics were similar among groups. The proportion of patients who met the primary end point (composite of acute rejection, graft loss, death by month 6) was higher in the belatacept groups (42–48%) versus tacrolimus groups (15–38%), with the highest number of deaths and grafts losses in the belatacept LD group. By month 12, the proportion surviving with a functioning graft was higher with tacrolimus + MMF (93%) and lower with belatacept LD (67%) versus other groups (90%: basiliximab + belatacept HD; 83%: belatacept HD; 88%: tacrolimus). Mean calculated GFR was 15–34 mL/min higher in belatacept-treated patients at 1 year. Two cases of posttransplant lymphoproliferative disease and one case of progressive multifocal leukoencephalopathy occurred in belatacept-treated patients. Follow-up beyond month 12 revealed an increase in death and graft loss in another belatacept group (belatacept HD), after which the study was terminated.\n\nBODY.INTRODUCTION:\nLiver transplantation is a life-saving procedure for patients with end-stage liver disease (ESLD) that improves overall survival and quality of life (1,2). Liver transplant (LT) recipients are at increased risk for cardiovascular disease, and chronic kidney disease (CKD; (3–7)). Stage 4 or 5 CKD has been reported in ∼18% of LT recipients by 5 years posttransplant (4) and is associated with increased morbidity and mortality (4–7). Data from 1997 to 2008 in the Organ Procurement Transplant Network/United Network for Organ Sharing (OPTN/UNOS) database showed a threefold increased rate of kidney transplantation in patients who previously received LT (8). While the causes of CKD in LT patients are multifactorial, including pretransplant and peritransplant factors like hepatitis C virus (HCV), diabetes and hepatorenal syndrome, calcineurin inhibitors (CNIs) appear to be significant contributing factors (4,7,9–11). In the OPTN/UNOS analysis, ∼50% of LT patients who received a kidney transplant had a diagnosis consistent with CNI toxicity (8). Thus, there is a need for immunosuppressive regimens that provide efficacy while avoiding the nephrotoxic, cardiovascular and metabolic risks of CNIs in LT recipients. Belatacept, a selective costimulation blocker (12), is designed to provide effective immunosuppression and avoid both renal and nonrenal toxicities associated with CNIs. 
Results from two phase III clinical trials in kidney transplant recipients found that belatacept-based immunosuppression was associated with similar rates of patient and graft survival, significantly better renal function and an improved cardiovascular/metabolic risk profile versus cyclosporine-based therapy ≤3 years after transplantation (13–17). Higher rates and grades of acute rejection (AR) were observed in belatacept-treated patients receiving standard-criteria donor kidneys (13,15). Posttransplant lymphoproliferative disease (PTLD) with central nervous system (CNS) involvement, primarily in patients seronegative for Epstein–Barr virus (EBV) at the time of transplant and progressive multifocal leukoencephalopathy (PML) were the most serious adverse events (AEs) reported in belatacept-treated patients (13,14,18). The objective of this exploratory, phase II study was to evaluate the efficacy and safety of belatacept in adult recipients of first LTs from a deceased donor. To identify an optimal immunosuppressive regimen in LT recipients, three belatacept regimens were studied and compared with two tacrolimus regimens. In addition, a follow-up long-term extension (LTE) study (i.e. ≥12 months) was conducted to assess longer-term safety and tolerability. The LTE study was terminated in 2011 (when patients were ∼2 years posttransplant) because of worse patient and graft survival in two of the three belatacept treatment groups. Both the 12-month results and those from the LTE are reported herein.\n\nBODY.MATERIALS AND METHODS.STUDY DESIGN:\nThis was a randomized, partially blinded, active-controlled, parallel group, multicenter, phase II clinical trial in adult recipients of first LTs (ClinicalTrials.gov: NCT00555321). Three belatacept regimens were studied, representing a stepwise decrease in the level of overall immunosuppression, and compared with the approved immunosuppressive regimen (tacrolimus alone) and the most widely used immunosuppressive regimen (tacrolimus + mycophenolate mofetil [MMF]) in LT. This exploratory study intended to assess the comparability of a belatacept-based regimen with a tacrolimus-based regimen in terms of AR, graft loss and death. Immediately before transplantation, patients were randomized to one of the following treatment groups: (i) basiliximab + belatacept high dose (HD) + MMF; (ii) belatacept HD + MMF; (iii) belatacept low dose (LD) + MMF; (iv) tacrolimus + MMF or (v) tacrolimus alone. Subjects were randomized in a 1:1:1:1:1 ratio using an interactive voice response system with centralized randomization and stratified by HCV status (yes or no) in blocks of five. All patients received corticosteroid therapy for the first 3 months. The trial was fully blinded to patients and study personnel with respect to belatacept dosing regimen (HD or LD) and basiliximab assignment (through the use of placebo infusions), open-label to belatacept or tacrolimus treatment and open-label between the two tacrolimus groups. The study duration was 12 months with an LTE (study was initiated January 22, 2008 and completed May 2, 2011). An external Data Monitoring Committee (DMC) comprising a chair and four members (specialty physicians and one statistician) reviewed emerging safety and efficacy data on a regular basis. The study was conducted in accordance with the Declaration of Helsinki and was consistent with International Conference on Harmonisation Good Clinical Practice and applicable regulatory requirements. 
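The randomization just described (a 1:1:1:1:1 ratio across the five regimens, stratified by HCV status in blocks of five) can be illustrated with a generic stratified permuted-block allocator. This is only a sketch of the general scheme; the group labels and the `make_allocator` helper are assumptions for illustration, and the trial itself used a centralized interactive voice response system.

```python
# Generic sketch of stratified permuted-block randomization (1:1:1:1:1, block size 5).
import random

GROUPS = ["basiliximab + belatacept HD + MMF", "belatacept HD + MMF",
          "belatacept LD + MMF", "tacrolimus + MMF", "tacrolimus"]

def make_allocator(seed=None):
    rng = random.Random(seed)
    pending = {"HCV+": [], "HCV-": []}            # one open block per stratum

    def allocate(hcv_positive: bool) -> str:
        stratum = "HCV+" if hcv_positive else "HCV-"
        if not pending[stratum]:                  # start a new permuted block of five
            block = GROUPS[:]
            rng.shuffle(block)
            pending[stratum] = block
        return pending[stratum].pop()

    return allocate

allocate = make_allocator(seed=2008)
print(allocate(hcv_positive=True))
print(allocate(hcv_positive=False))
```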
The study protocol and amendments were reviewed and approved by the institutional review board/independent ethics committee for each site before initiation of the study (see Table S1 for enrollment by site).\n\nBODY.MATERIALS AND METHODS.PATIENTS AND INTERVENTIONS:\nThe study population included adults of 18–70 years of age, who were recipients of first LTs from a deceased donor (see Table 1 for additional inclusion and exclusion criteria). Belatacept was administered via intravenous (IV) infusion. Table 1 Study inclusion and exclusion criteria Inclusion criteria Exclusion criteria Adults of 18–70 years of age, who were recipients of first LTs from a deceased donor Donation after cardiac death Informed consent from all patients Living-donor recipients Additional inclusion criteria Split liver recipients Reliable IV access Recipients of organs ABO compatible donor-recipient pairs From donors <12 years or >65 years of age Patients with hepatocellular carcinoma meeting Milan criteria (one nodule ≤5 cm in diameter or three or fewer nodules, with none >3 cm in diameter) With anticipated cold ischemia time >14 h From donors who were positive for HBV or HCV when recipients were negative for HBV or HCV, respectively From donors with known human immunodeficiency virus infection Patients receiving dialysis before LT for ≥2 consecutive weeks before enrollment or who were anticipated to have prolonged dialysis posttransplant Patients with known intrinsic kidney disease (e.g. a urine protein/creatinine ratio >150 mg/g or presence of an abnormal number of red blood cells or granular casts in urine) AND a calculated GFR (cGFR) <40 mL/min/1.73 m 2 body surface area (modified MDRD) within 1 month of enrollment Patients with acute liver failure, hypercoagulable state or malignancy within the previous 5 years (except for nonmelanoma skin cancer cured by local resection or hepatocellular carcinoma as defined above) Patients who were seronegative for Epstein–Barr virus (subsequent study amendment) HBV, hepatitis B virus; HCV, hepatitis C virus; IV, intravenous; LT, liver transplant. \n\nBODY.MATERIALS AND METHODS.PATIENTS AND INTERVENTIONS.HD REGIMENS (GROUPS 1 AND 2):\nBelatacept 10 mg/kg was given on days 1 (day of transplant), 3 and 5, and weeks 2, 4, 6, 8, 10, 12, 16, 20 and 24 for the first 6 months and then 5 mg/kg every 4 weeks from months 7–12.\n\nBODY.MATERIALS AND METHODS.PATIENTS AND INTERVENTIONS.LD REGIMEN (GROUP 3):\nBelatacept 10 mg/kg was given on days 1, 3 and 5, and weeks 2, 4, 8 and 12 for the first 3 months and then 5 mg/kg every 4 weeks from months 3–12. The belatacept doses in the present study were different from those studied in renal transplant recipients in that an additional belatacept dose was administered on day 3 in all belatacept treatment groups, and one additional dose was allowed for excessive bleeding (>3L) or ascites loss (>4L) during the first 2 weeks following LT. Eight patients in the basiliximab + belatacept HD + MMF group, 9 patients in the belatacept HD + MMF group and 10 patients in the belatacept LD + MMF group received an additional dose of belatacept for bleeding or ascites. Tacrolimus (groups 4 and 5) was administered orally at an initial dose of 0.10–0.15 mg/kg/day after transplantation; doses were adjusted to achieve target trough concentrations between 6 and 12 ng/mL. Basiliximab induction (20 mg IV) was given to patients in group 1 only (belatacept HD regimen) on days 1 and 5. 
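Purely as a compact restatement of the belatacept dosing described above, the HD and LD schedules can be captured in a small data structure; the field names below are illustrative assumptions, and the doses and time points are transcribed from the protocol text (mg/kg IV).

```python
# Illustrative representation of the belatacept HD and LD dosing schedules described above.
BELATACEPT_SCHEDULES = {
    "HD (groups 1 and 2)": {
        "10 mg/kg": {"days": [1, 3, 5], "weeks": [2, 4, 6, 8, 10, 12, 16, 20, 24]},
        "5 mg/kg":  {"every_4_weeks_during_months": (7, 12)},
    },
    "LD (group 3)": {
        "10 mg/kg": {"days": [1, 3, 5], "weeks": [2, 4, 8, 12]},
        "5 mg/kg":  {"every_4_weeks_during_months": (3, 12)},
    },
}

# Example: count the scheduled 10 mg/kg infusions in each regimen
for name, sched in BELATACEPT_SCHEDULES.items():
    n = len(sched["10 mg/kg"]["days"]) + len(sched["10 mg/kg"]["weeks"])
    print(f"{name}: {n} scheduled 10 mg/kg infusions")
```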
Patients in groups 1 through 4 received MMF at 2 g/day orally, which was later amended to 1 g/day after a case of PML was reported in a patient receiving belatacept HD + MMF. At that time, all but two had received ≥6 months of study treatment. All patients received tapered corticosteroids (Figure 1). Figure 1Patient disposition and dosing. All patients received corticosteroids on days 1–5, which was tapered to ≤10 mg/day by day 30 and ≤5 mg/day by day 90. Thereafter, withdrawal of corticosteroids was at the discretion of the investigator. HD, high dose; LD, low dose; MMF, mycophenolate mofetil. All patients received antiviral prophylaxis for cytomegalovirus (CMV) and herpes simplex virus for ≥3 months posttransplant and for 3 months upon administration of T cell–depleting agents. Additionally, all patients were to receive a 6-month course of prophylaxis for Pneumocystis jiroveci pneumonia.\n\nBODY.MATERIALS AND METHODS.OUTCOMES:\nThe primary end point was the composite incidence of AR, graft loss and death at 6 months after transplantation. AR was included in the composite end point if it was clinically suspected and proven via biopsy. All biopsies for suspected AR were assessed by a central histopathologist blinded to treatment assignment using the Banff schema for grading of LT rejection and rejection activity index for staging (19). Graft loss was defined as impairment of liver function that resulted in patient death or re-transplantation. Secondary end points included the incidence, severity, treatment and outcome of AR by 12 months; graft loss and death by 12 months; change in renal function over time as determined by measured GFR (mGFR) and calculated GFR (cGFR; MDRD methodology); incidence of HCV recurrence by 12 months, defined as histologic confirmation on liver biopsy scored by the Ishak (modified Knodell) system, a score of ≥5 of 18 on modified histological activity index grading, and a fibrosis score of ≥2 of 6 (20). Incidence rates of cardiovascular and metabolic comorbidities (i.e. posttransplant diabetes mellitus, dyslipidemia and hypertension) and the overall safety of the belatacept-based regimens were also evaluated. Liver allograft biopsies (confirmed by a central pathology laboratory) were obtained at baseline and at 12 months to assess histology. Biopsies were also performed in patients who met protocol-specified criteria for clinical suspicion of AR or HCV recurrence.\n\nBODY.MATERIALS AND METHODS.STATISTICAL ANALYSES:\nA statistical analysis plan was prepared prior to unblinding this study. The purpose of the study was to provide initial clinical experience regarding the efficacy and safety of belatacept in LT recipients and identify a belatacept-based regimen with an acceptable composite rate of AR, graft loss and death. Therefore, statistical testing was not prespecified and the study was not powered to demonstrate a difference between treatment groups. A calculated sample size of 50 patients per treatment group was determined to provide initial data regarding safety and efficacy. If the observed primary end point (composite incidence of AR, graft loss and death by 6 months) for a belatacept-based group was 40%, it was estimated that the two-sided 95% confidence interval (CI) of this incidence would extend from 26.4% to 53.6%, which would be within the general range observed in large LT studies that used CNI-based immunosuppression (21,22). Analyses were performed on the intent-to-treat population (i.e. those who were randomized and received a transplant). 
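The precision estimate cited above (an observed composite rate of 40% in a group of about 50 patients giving a two-sided 95% CI of 26.4% to 53.6%) corresponds to a simple normal-approximation (Wald) interval, which can be checked directly; this is a sketch of that arithmetic, not the sponsor's original calculation.

```python
# Check of the cited precision estimate: 40% observed rate, n = 50, two-sided 95% Wald CI.
import math

p, n, z = 0.40, 50, 1.959964
half_width = z * math.sqrt(p * (1 - p) / n)
print(f"95% CI: {p - half_width:.1%} to {p + half_width:.1%}")  # ≈ 26.4% to 53.6%
```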
Most analyses were descriptive in nature, using point estimates and 95% CIs. mGFR was assessed via iothalamate at months 2 (baseline) and 12. cGFR was assessed pretransplant and at 1, 2, 3, 6 and 12 months.\n\nBODY.RESULTS:\nA total of 260 patients at 39 centers worldwide were randomized, of whom 250 received a transplant and were treated. A total of 164 patients completed 12 months of treatment (Figure 1). Demographic and baseline characteristics were generally similar across treatment groups (Table 2). The mean age was 54 years, and 46% of patients were HCV-positive. The primary reasons for discontinuing treatment, as defined by the study investigators, were AEs and lack of efficacy in the belatacept groups and AEs in the tacrolimus groups. Table 2 Demographic characteristics of recipients by treatment group Characteristic Basiliximab + belatacept HD + MMF Belatacept HD + MMF Belatacept LD + MMF Tac + MMF Tac (n = 50) (n = 48) (n = 49) (n = 53) (n = 50) Mean age, years 54.0 53.4 55.2 53.0 54.7 Sex, male, % 78 71 63 87 84 Race, % White 88 83 94 93 86 Black 8 6 2 6 4 Cause of ESLD, % Noncholestatic cirrhosis 76 69 67 74 82 HCV-positive, % 46 48 43 47 48 Mean MELD score 22.6 21.1 20.6 24.3 21.6 Hepatorenal syndrome, % 10 10 10 13 6 ESLD, end-stage liver disease; HCV, hepatitis C virus; HD, high dose, LD, low dose; MELD, Model for End-Stage Liver Disease; MMF, mycophenolate mofetil; Tac, tacrolimus. An imbalance in the frequency of deaths and graft losses in the belatacept LD group early in the study was noted by the DMC. Based on the DMC recommendation, randomization to the belatacept LD group was stopped. Study enrollment, however, was nearly complete (247/250 patients), and patients were permitted to remain on the belatacept LD regimen at the discretion of the investigator. Randomization into the other four treatment groups continued.\n\nBODY.RESULTS.OUTCOMES: COMPOSITE END POINT, DEATH, GRAFT LOSS AND AR.COMPOSITE END POINT:\nAt the time of the primary analysis at month 6, the incidence rates of the composite end point were higher in the belatacept groups (42–48%) than in the tacrolimus groups (15–38%) (Table 3). This difference was primarily driven by a higher number of AR episodes in the belatacept groups than in the tacrolimus groups. At month 12, patient and graft survival were highest in the tacrolimus + MMF group and lowest in the belatacept LD group. The proportion of patients surviving with a functioning graft was 90%, 83% and 67% in the basiliximab + belatacept HD, belatacept HD, and belatacept LD groups, respectively, and 93% and 88% in the tacrolimus + MMF and tacrolimus groups, respectively. 
Table 3 Outcomes: composite end point (death, graft loss and AR) by 6 months (primary end point) and 12 months and AR up to month 12 Composite end point Basiliximab + belatacept HD + MMF Belatacept HD + MMF Belatacept LD + MMF Tac + MMF Tac (n = 50) (n = 50) (n = 48) (n = 49) (n = 53) 6 months (primary end point), n (%) 24 (48.0) 20 (41.7) 23 (46.9) 8 (15.1) 19 (38.0) Difference from Tac + MMF, % (95% CI) 32.9 (16.1–49.8) 26.6 (9.6–43.5) 31.8 (14.8–48.5) — — Difference from Tac, % (95% CI) 10.0 (−8.7–29.6) 3.7 (−15.3–23.2) 8.9 (−9.8–28.4) — — AR, n 20 15 15 5 15 Death, n 4 4 6 1 3 Graft loss, n 2 2 6 4 3 Survival with a functioning graft, n (%) 45 (90.0) 43 (89.6) 38 (77.6) 49 (92.5) 45 (90.0) (95% CI) (81.7–98.3) (80.9–98.2) (65.9–89.2) (85.3–99.6) (81.7–98.3) 12 months, n (%) 26 (52.0) 23 (47.9) 26 (53.1) 10 (18.9) 20 (40.0) AR, n 22 16 16 7 15 Death, n 4 7 10 1 4 Graft loss, n 2 2 8 4 4 Survival with a functioning graft, n (%) 45 (90.0) 40 (83.3) 33 (67.3) 49 (92.5) 44 (88.0) (95% CI) (81.7–98.3) (72.8–93.9) (54.2–80.5) (85.3–99.6) (79.0–97.0) AR (centrally read) up to month 12, n (%) Patients with AR 22 (44.0) 16 (33.3) 16 (32.7) 7 (13.2) 15 (30.0) Treated, n/N (%) 12/22 (54.5) 12/16 (75.0) 8/16 (50.0) 5/7 (71.4) 12/15 (80.0) Corticosteroids only, n (%) 12 (24.0) 7 (14.6) 8 (16.3) 4 (7.5) 10 (20.0) Initial lymphocyte-depleting therapy, n (%) 0 (0) 3 (14.6) 0 (0) 0 (0) 0 (0) Grade of AR, n (%) I 15 (30.0) 7 (14.6) 7 (14.3) 6 (11.3) 7 (14.0) II 7 (14.0) 8 (16.7) 8 (16.3) 1 (1.9) 6 (12.0) III 0 (0) 1 (2.1) 1 (2.0) 0 (0) 2 (4.0) Death or graft loss in patients with AR, n 1 1 6 1 1 AR, acute rejection; CI, confidence interval; HD, high dose, LD, low dose; MMF, mycophenolate mofetil; Tac, tacrolimus. \n\nBODY.RESULTS.OUTCOMES: COMPOSITE END POINT, DEATH, GRAFT LOSS AND AR.DEATH:\nAt the 6-month primary analysis, a higher number of deaths was observed in patients treated with belatacept (basiliximab + belatacept HD [n = 4], belatacept HD [n = 4], belatacept LD [n = 6]) than in those treated with tacrolimus (tacrolimus + MMF [n = 1], tacrolimus [n = 3]). A similar pattern was also observed at 12 months posttransplant (Table 3). The causes of death reported in more than one patient included multisystem organ failure (n = 7), sepsis (n = 7), gastrointestinal bleeding (n = 2) and myocardial infarction (n = 2). In two cases, the cause of death was unknown (Table 4). Table 4 Causes of death Treatment group Age/sex Reported cause of death Study day Basiliximab + belatacept HD + MMF 57/M Sepsis 18 61/M Myocardial infarction 23 49/M Multiple system organ failure 25 56/M Sepsis 127 Belatacept HD + MMF 43/M Sepsis 24 37/M Sepsis 84 62/F Multiple system organ failure 111 47/M Gastrointestinal bleed 147 62/F Unknown 274 58/M Unknown 278 51/M PML 322 Belatacept LD + MMF 65/M Multiple system organ failure 2 59/F Colon perforation 8 48/M Acute hepatic failure 16 63/F Pulmonary failure 21 53/M Sepsis 65 50/M Gunshot injury 117 58/F Multiple system organ failure 202 55/M Multiple system organ failure 208 65/F Multiple system organ failure 339 51/M PTLD 364 Tacrolimus + MMF 54/M Sepsis 168 Tacrolimus 49/M Myocardial infraction 34 52/M Multiple system organ failure 63 46/M Gastrointestinal bleed 68 49/M Sepsis 286 F, female; HD, high dose, LD, low dose; M, male; MMF, mycophenolate mofetil; PML, progressive multifocal leukoencephalopathy; PTLD, posttransplant lymphoproliferative disease. 
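As a rough check of the "Difference from Tac + MMF" rows in Table 3, the unadjusted risk difference and a Wald 95% CI can be recomputed from the reported counts (for example, 24/50 for basiliximab + belatacept HD + MMF vs. 8/53 for tacrolimus + MMF). This is an assumption-laden sketch; the study does not state which interval method it used, so small discrepancies of a few tenths of a percentage point are expected.

```python
# Recompute a month-6 composite risk difference vs. tacrolimus + MMF from Table 3 counts,
# using an unadjusted Wald interval (the study's exact interval method is not reported).
import math

def risk_difference_ci(x1, n1, x2, n2, z=1.959964):
    p1, p2 = x1 / n1, x2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    diff = p1 - p2
    return diff, diff - z * se, diff + z * se

diff, lo, hi = risk_difference_ci(24, 50, 8, 53)
print(f"difference: {diff:.1%} (95% CI {lo:.1%} to {hi:.1%})")  # ≈ 32.9% (16.0% to 49.8%)
```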
\n\nBODY.RESULTS.OUTCOMES: COMPOSITE END POINT, DEATH, GRAFT LOSS AND AR.GRAFT LOSS:\nBy month 6, numerically fewer cases of death-censored graft loss were reported in the belatacept HD groups (n = 1 each) than in the belatacept LD group (n = 5) and tacrolimus groups (tacrolimus + MMF [n = 3]; tacrolimus [n = 2]). At month 12, a similar pattern was observed. Across treatment groups, ∼50% of graft losses occurred within the first month posttransplant, the main causes of which were primary nonfunction (n = 5) and arterial thrombosis (n = 4).\n\nBODY.RESULTS.OUTCOMES: COMPOSITE END POINT, DEATH, GRAFT LOSS AND AR.ACUTE REJECTION:\nOverall, AR (centrally read) was more common in the belatacept groups than in the tacrolimus groups (Table 3). At the month 6 primary analysis, the percent of patients with AR in the basiliximab + belatacept HD, belatacept HD and belatacept LD groups was 40%, 31% and 31%, respectively, versus 9% and 30% for the tacrolimus + MMF and tacrolimus groups, respectively. Most episodes of AR (57–94%) occurred early after transplant (i.e. by month 3) and were mild to moderate in severity (Banff grade I or II). Among patients with AR, ∼50% received treatment for AR in the basiliximab + belatacept HD and belatacept LD groups, while 70–80% received treatment in the other three groups. The majority of patients who were treated for AR received corticosteroids only.\n\nBODY.RESULTS.OUTCOMES: COMPOSITE END POINT, DEATH, GRAFT LOSS AND AR.ANALYSIS BY HCV STATUS:\nIn HCV-positive patients, the percent with AR by month 6 in the belatacept groups ranged from 33% to 39% versus 12% and 38% for the tacrolimus + MMF and tacrolimus groups, respectively. In patients who were HCV-negative, the percent with AR by month 6 in the belatacept groups was 29–41% versus 7% and 23% for the tacrolimus + MMF and tacrolimus groups, respectively.\n\nBODY.RESULTS.SAFETY:\nSerious AEs, including serious infections and malignancies, occurred with a similar frequency across all treatment groups (Table 5). The most commonly reported serious AEs across all groups included pneumonia, sepsis, biliary strictures, cholangitis, pyrexia and acute renal failure. The most common serious infections resulted from CMV, fungal and mycobacterial pathogens. One fatal case of PML occurred ∼6 months after transplantation in a patient receiving belatacept HD and higher than recommended doses of MMF (3–4 g/day for 7.5 weeks). 
Table 5 Adverse events and events of interest at 12 months Events Patients Basiliximab + belatacept HD + MMF Belatacept HD + MMF Belatacept LD + MMF Tac + MMF Tac (n = 50) (n = 48) (n = 49) (n = 53) (n = 50) HCV at baseline, n 23 23 21 25 24 HCV recurrence, 1 n (%) 14 (60.9) 7 (30.4) 6 (28.6) 13 (52.0) 9 (37.5) Serious adverse events, n (%) 28 (56.0) 29 (60.4) 37 (75.5) 40 (75.5) 35 (70.0) Infections and infestations 11 (22.0) 12 (25.0) 13 (26.5) 12 (22.6) 12 (24.0) Malignancies, n 1 0 2 2 2 PTLD 1 2 0 1 0 0 All infections and infestations, n (%) 32 (64.0) 39 (81.3) 30 (61.2) 31 (58.5) 29 (58.0) Bacterial 5 (10.0) 11 (22.9) 11 (22.4) 6 (11.3) 13 (26.0) Fungal infections 6 (12.0) 9 (18.8) 14 (28.6) 6 (11.3) 5 (10.0) Viral infections 3 10 (20.0) 11 (22.9) 14 (28.6) 9 (17.0) 7 (14.0) CMV 5 (10.0) 4 (8.3) 10 (20.4) 4 (7.5) 1 (2.0) Herpes 3 (6.0) 3 (6.3) 4 (8.2) 3 (5.7) 2 (4.0) Adverse events of nervous system disorders, n (%) 23 (46.0) 19 (39.6) 15 (30.6) 34 (64.2) 34 (68.0) Headache 10 (20.0) 8 (16.7) 5 (10.2) 14 (26.4) 14 (28.0) Tremor 2 (4.0) 2 (4.2) 4 (8.2) 17 (32.1) 13 (26.0) Adverse events of renal and urinary disorders, n (%) Renal failure 1 (2.0) 4 (8.3) 3 (6.1) 5 (9.4) 14 (28.0) Acute renal failure 2 (4.0) 5 (10.4) 2 (4.1) 13 (24.5) 8 (16.0) Renal impairment 0 (0) 1 (2.1) 0 (0) 1 (1.9) 8 (16.0) CMV, cytomegalovirus; HAI, histological activity index; HCV, hepatitis C virus; HD, high dose, LD, low dose; MMF, mycophenolate mofetil; PML, progressive multifocal leukoencephalopathy; PTLD, posttransplant lymphoproliferative disease; Tac, tacrolimus. 1 HCV recurrence confirmed histologically by central pathologist; modified HAI grading score ≥5/18 and fibrosis score ≥2. 2 PTLD case occurred after month 12. 3 One case of PML in belatacept HD + MMF. Malignancies were reported in 2–4% of belatacept patients and in 4% of patients receiving tacrolimus-based treatments. Two cases of PTLD were reported in belatacept-treated patients (one in the belatacept LD group at 11 months posttransplant and one in the basiliximab + belatacept HD group after month 12), both of which involved the liver. Neither case involved the CNS; both patients were EBV-seropositive at the time of transplantation. One patient died as a result of PTLD; the other was treated with chemotherapy and is alive with a functioning graft. Overall, most infections were mild or moderate in severity (Table 5); urinary tract infections were the most common. Patients in the belatacept LD group had numerically higher rates of viral and fungal infections (Table 5), most of which were nonserious. A similar proportion of patients across all treatment groups experienced at least one AE. AEs related to neurotoxicity (i.e. headache and tremor) occurred less frequently in patients receiving belatacept-based regimens versus tacrolimus-based regimens (Table 5).\n\nBODY.RESULTS.HCV RECURRENCE:\nApproximately 72% (153/212) of patients with a functioning graft at month 12 had biopsies available for evaluation. Recurrence of HCV among patients who were HCV-positive at baseline was higher in the basiliximab + belatacept HD (61%) and tacrolimus + MMF (52%) groups versus the other treatment groups (29–38%) (Table 5).\n\nBODY.RESULTS.RENAL FUNCTION:\nMean cGFR at baseline was 66–80 mL/min. The differences in cGFR between the belatacept-treated patients and tacrolimus-treated patients were observed as early as month 1 and persisted through month 12 (15–34 mL/min/1.73 m2 higher in each belatacept group vs. the tacrolimus groups at month 12) (Figure 2). 
By month 12 in the intent-to-treat analysis, mean mGFR was 89–93 mL/min in the belatacept HD groups and 71–75 mL/min in the belatacept LD and tacrolimus groups. An on-treatment analysis revealed similar findings at 12 months (mean cGFR was 88 mL/min for basiliximab + belatacept HD; 100 mL/min for belatacept HD; 94 mL/min for belatacept LD; 67 mL/min for tacrolimus + MMF and 62 mL/min for tacrolimus). The proportion of patients with >10 mL/min improvement in cGFR from baseline to month 12 was 47–65% in the belatacept groups versus 12–27% in the tacrolimus groups. Figure 2Mean calculated GFR (MDRD methodology) over time (intent-to-treat analysis, as observed data with no imputation for missing values). All calculated GFR >200 were truncated at 200 mL/min. HD, high dose; LD, low dose; MMF, mycophenolate mofetil.\n\nBODY.RESULTS.CARDIOVASCULAR/METABOLIC PROFILE AT 12 MONTHS:\nSystolic and diastolic blood pressures at 12 months were lower among patients receiving belatacept (121–127 mmHg and 75–77 mmHg, respectively) versus those receiving tacrolimus-based regimens (137–138 mmHg and 80 mmHg, respectively). Serum lipids increased from baseline in all treatment groups, with triglyceride levels increasing more in the tacrolimus groups and LDL levels increasing more in the belatacept groups (data not shown). The incidence of new-onset diabetes mellitus (i.e. ≥30 days of treated diabetes in patients without a diagnosis of diabetes before randomization) was somewhat lower in the belatacept HD and LD groups (16% and 14%, respectively) versus patients receiving basiliximab + belatacept HD (36%), tacrolimus + MMF (24%), and tacrolimus (38%).\n\nBODY.RESULTS.PHARMACOKINETICS:\nThe pharmacokinetics of belatacept in LT recipients was similar to those in kidney transplantation recipients receiving a similar dosing regimen (23). The pattern of minimum belatacept concentration values with the HD and LD regimens in the early posttransplant period were consistent with the differences between the two regimens. Steady-state trough serum concentrations of belatacept in the maintenance phase (during 5-mg/kg dosing) were similar between the HD and LD regimens. Mean serum trough concentrations of belatacept and tacrolimus, and mean daily MMF and steroid doses are shown in Figure 3. Some of the patients who received additional belatacept to compensate for fluid loss went on to experience death or graft loss, but results were mixed and potentially confounded by intra- and postoperative complications. In the overall study population, there was no association between trough belatacept levels and death or graft loss. Figure 3Serum trough levels and mean daily dosing at specified time points. Concentrations of belatacept in human serum were determined using a validated enzyme-linked immunosorbent assay (ELISA) by PPD (Richmond, VA). (A) Mean serum trough levels of belatacept. (B) Mean daily dose of MMF. (C) Mean serum trough levels of tacrolimus. (D) Mean daily dose of steroids. HD, high dose; LD, low dose; M, month; MMF, mycophenolate mofetil; W, week.\n\nBODY.RESULTS.DONOR-SPECIFIC ANTIBODIES:\nAll treatment groups had patients who tested positive for donor-specific antibodies (DSA) at baseline (Table 6). By 12 months after transplantation, the number of patients with DSA decreased in two belatacept groups (HD and LD), remained the same in two groups (basiliximab + belatacept HD and tacrolimus), and increased in one group (tacrolimus + MMF) (Table 6). There were few cases of de novo DSA in all groups. 
Table 6 DSA: number of patients with detectable antidonor HLA antibodies Patients with DSA/patients in analysis, n (%) Basiliximab + belatacept HD + MMF Belatacept HD + MMF Belatacept LD + MMF Tac + MMF Tac Baseline DSA (pretransplant) 5/48 (10) 5/46 (11) 8/48 (17) 3/50 (6) 4/49 (8) Total DSA by month 12 5/47 (11) 3/47 (6) 4/43 (9) 8/52 (15) 4/45 (9) De novo DSA 1 by month 12 3/47 (6) 1/47 (2) 1/43 (2) 6/52 (12) 3/45 (7) DSA, donor-specific antibodies; HD, high dose, LD, low dose; MMF, mycophenolate mofetil; Tac, tacrolimus. 1 De novo DSA was defined as appearance of antibody to a new HLA specificity following transplantation. \n\nBODY.RESULTS.LONG-TERM EXTENSION:\nOf the 164 patients who completed 1 year of treatment, 145 entered the LTE phase (Figure 4). During the LTE, additional deaths and graft losses were noted in the belatacept HD group (n = 4). Because of the cumulative number of deaths and graft losses in two of the three belatacept groups relative to the tacrolimus groups, the DMC recommended termination of the LTE study. Although a causal relationship to belatacept could not be clearly established, it could also not be rejected. All patients who were currently on belatacept were switched to standard-of-care immunosuppression. Figure 4Patient disposition in LTE. HD, high dose; LD, low dose; LTE, long-term extension; MMF, mycophenolate mofetil. Based on these findings, a comprehensive assessment of the deaths observed in belatacept-treated patients was conducted, and a series of post hoc analyses were performed. Factors evaluated included demographic characteristics, donor age, baseline disease status, mean Model for End-Stage Liver Disease score, and belatacept trough concentrations. Deaths and graft losses were not evenly distributed among the centers. No demographic or disease-related risk factors were predictive of death associated with belatacept, and similarly, no clear explanation for the higher rates of death observed in the belatacept HD and LD groups was identified. Death and graft loss in the subset of patients who were HCV-positive at the time of transplantation were numerically higher in the belatacept groups at month 6 but comparable with the tacrolimus groups at month 12.\n\nBODY.DISCUSSION:\nThis phase II, randomized, multicenter study was the first clinical trial to explore the use of belatacept in LT recipients. The primary objective was to evaluate the safety and efficacy of belatacept relative to tacrolimus, as reflected by the incidence of AR, graft loss and death in de novo LT recipients. At the conclusion of this study, two of three belatacept groups had higher rates of death and graft loss relative to the standard-of-care control group tacrolimus + MMF. All three belatacept groups also had higher rates of AR, and there was an increase in viral and fungal infections, the majority of which were not serious. Two cases of PTLD and one case of PML were reported in belatacept-treated patients. The two belatacept HD groups had substantially better mGFR than the remaining three groups, while all three belatacept groups had better cGFR. There was also evidence of fewer neurotoxicity events in the belatacept groups. A high rate of discontinuation was observed in the belatacept treatment groups, such that by the time of the 12-month assessment, ∼40% of patients had discontinued assigned belatacept therapy. 
In contrast to the findings in renal transplant recipients, which showed comparable patient and graft survival relative to the CNI cyclosporine (13–17), this exploratory study in LT recipients demonstrated increased rates of death and graft loss in two of three belatacept groups compared with tacrolimus (different CNI comparator). Multiple factors may account for differences in survival outcomes between kidney transplant and LT recipients, including organ-specific aspects of alloimmune responses (24,25). In the present study, it is notable that basiliximab did not lead to a lower rate of rejection versus placebo in the two belatacept HD groups. There are several hypotheses, including the role of CD28-independent T cells and/or inhibition of regulatory pathways that could explain this observation (26). Further studies are required to understand the mechanism and contribution of these factors to AR in the setting of belatacept therapy. Important nonimmunologic differences include the degree of illness at the time of transplantation, extent of surgical trauma and massive perioperative fluid shifts. Doses of belatacept in this study were higher than those used in the kidney transplant trials, with the assumption that perioperative fluid shifts and fluid losses would necessitate additional belatacept. There was no association between trough belatacept levels and death or graft loss. This was expected, given that there is no exposure–response relationship in the kidney transplant setting, such that therapeutic drug monitoring is not needed with belatacept treatment. While preservation of renal function was a key finding in the pivotal phase III clinical trials of belatacept in renal transplant recipients, the magnitude of the renal function difference in the current study highlights the capacity for native kidneys to recover substantial function following LT in the absence of CNI exposure. The tendency toward more viral and fungal infections observed in belatacept-treated patients raises the possibility of overimmunosuppression in some patients, as observed with regimens that include MMF (27). This finding is somewhat paradoxical, however, given the higher rates of AR in the belatacept groups, which would suggest underimmunosuppression or immunomodulatory effects not yet understood in the context of LT. It is also possible that the management of AR placed patients at risk for subsequent infectious complications. The standard treatment for rejection may have been more intense than was needed in belatacept-treated patients, suggesting that overimmunosuppression may be part of the explanation for the infectious complications. The limited number of patients and events preclude the ability to definitively interpret the relative contributions of overimmunosuppression versus underimmunosuppression. LT recipients are an inherently complex population, with diverse and serious underlying medical concerns that have the potential to adversely affect posttransplant outcomes. Distinguishing the contribution of belatacept alone or an interaction between belatacept and such factors in the context of this phase II study is not feasible. An additional limitation of the study includes the open-label design (belatacept vs. tacrolimus assignments). 
Findings from this phase II study did not allow for the identification of a safe and effective dose or a regimen for further development of belatacept fulfilling the substantial need for nonnephrotoxic immunosuppressive therapy in LT recipients.\n\n**Question:** Compared to tacrolimus groups what was the result of belatacept groups on By month 12, the proportion surviving with a functioning graft?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decreased
|
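The absolute treatment differences quoted in Table 3 of the belatacept trial above (e.g. 32.9% with a 95% CI of 16.1–49.8 for basiliximab + belatacept HD + MMF versus tacrolimus + MMF on the 6-month composite end point) can be re-derived from the raw counts. Below is a minimal Python sketch assuming a simple Wald-type interval; the helper name is illustrative and the paper does not state which interval method was actually used, so the bounds reproduce the published values only approximately.

```python
from math import sqrt

def risk_difference_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Risk difference (group A minus group B) with a Wald-type 95% confidence interval."""
    p_a, p_b = events_a / n_a, events_b / n_b
    diff = p_a - p_b
    se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return diff, diff - z * se, diff + z * se

# Composite end point by 6 months: basiliximab + belatacept HD + MMF (24/50 = 48.0%)
# versus tacrolimus + MMF (8/53 = 15.1%); Table 3 reports 32.9% (16.1-49.8).
diff, lower, upper = risk_difference_ci(24, 50, 8, 53)
print(f"{100 * diff:.1f}% (95% CI {100 * lower:.1f} to {100 * upper:.1f})")
```

This prints 32.9% (95% CI 16.0 to 49.8); the slight difference in the lower bound suggests the investigators used a somewhat different interval calculation.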
898
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Evaluation of the functional efficacy of an antioxidative probiotic in healthy volunteers\n\n ABSTRACT.BACKGROUND:\nIn persons without clinical symptom it is difficult to assess an impact of probiotics regarding its effect on health. We evaluated the functional efficacy of the probiotic Lactobacillus fermentum ME-3 in healthy volunteers by measuring the influence of two different formulations on intestinal lactoflora, fecal recovery of the probiotic strain and oxidative stress markers of blood and urine after 3 weeks consumption.\n\nABSTRACT.METHODS:\nTwo 3-week healthy volunteer trials were performed. Open placebo controlled (OPC) study participants (n = 21) consumed either goat milk or by L. fermentum ME-3 fermented goat milk (daily dose 11.8 log CFU (Colony Forming Units). Double blind randomised placebo controlled (DBRP) study participants (n = 24) received either capsules with L. fermentum ME-3 (daily of dose 9.2 CFU) or placebo capsules. The faecal lactoflora composition, faecal ME-3 recovery, effect of the consumption on intestinal lactoflora, and oxidative stress markers of blood (total antioxidative activity; total antioxidative status and glutathione red-ox ratio) was measured.\n\nABSTRACT.RESULTS:\nME-3 was well tolerated and a significant increase in total faecal lactobacilli yet no predominance of ME-3 was detected in all study groups. Faecal recovery of ME-3 was documented by molecular methods only in fermented milk group, however the significant improvement of blood TAA (Total Antioxidative Activity) and TAS (Total Antioxidative Status) indices was seen both in case of fermented goat milk and capsules\", yet glutathione re-ox ratio values decreased only in case of fermented by ME-3 goat milk.\n\nABSTRACT.CONCLUSION:\nThe functional efficacy of both consumed formulations of an antioxidative probiotic L. fermentum ME-3 is proved by the increase of the intestinal lactobacilli counts providing putative defence against enteric infections and by reduction of the oxidative stress indices of blood and urine of healthy volunteers. In non-diseased host the probiotic health claims can be assessed by improvement of some measurable laboratory indices of well-established physiological functions of host, e.g. markers of antioxidative defence system.\n\nBODY.BACKGROUND:\nProbiotics are defined as live microbial food supplements, which beneficially influence human health [1,2]. Widely accepted probiotics contain different lactic acid producing bacteria of human origin: bifidobacteria, lactobacilli or enterococci. Nowadays the concept of functional foods, incl. probiotic food and dietary supplements implies to their ability to beneficially influence body functions in order to improve the state of well-being and health and reduce the risk of disease [2,3]. 
The important areas of human physiology that are relevant to functional food science according ILSI (International Life Science Institute) and FUFOSE (The European Commission Concerted Action on Functional Food Science in Europe) are besides others, the modulation of basic metabolic processes and defence against high-grade oxidative stress [4,5]. Human nutrition is clearly associated with oxidative metabolism, which beside production of energy is involved in a number of vital functions of the host. For example, under physiological conditions the reactive species (including peroxyl radicals, nitric oxide radical, superoxide anion) figure a crucial role in primary immune defense of the human body by phagocytic cells against harmful microorganisms [6,7]. On the other hand, a prolonged excess of reactive species is highly damaging for the host biomolecules and cells, resulting in dysbalance of the functional antioxidative network of the organism and leading to substantial escalation of pathological inflammation [8]. By our knowledge, no systematic studies have been performed to approve the functional efficacy of different formulations of probiotic on the antioxidative defence system of a healthy human. In our previous study Lactobacillus fermentum ME-3 (DSM 14241) [9-11], expressed strong antimicrobial activity against Gram-positive and Gram-negative entero- and uropathogens [12,13]. The cells and cell lysate of L. fermentum ME-3 possessed substantial antioxidative potency [14]. In an animal experiment ME-3 suppressed the excessive oxidative stress reaction caused by Salmonella infection in intestinal mucosa and thus improved the gut mucosal antioxidative status [15]. The antioxidative effect of L. fermentum ME-3 on human body oxidative stress markers was confirmed by our pilot study with fermented goat milk [16]. The aim present study was to evaluate the functional efficacy of the probiotic strain L fermentum ME-3 in the human gastrointestinal tract (GIT) of healthy volunteers. The faecal recovery, effect of two different formulations on total faecal lactoflora and oxidative stress markers of blood and urine were compared after 3 weeks consumption.\n\nBODY.METHODS.FORMULATIONS:\nThe efficacy of two different formulations (experimental fermented goat milk and probiotic capsules) on the human body oxidative stress markers was evaluated. Lactobacillus fermentum ME-3, a probiotic strain of healthy human intestinal origin [17], has been identified by biochemical and molecular methods [9]. The patent application has been submitted to the Estonian Patent Agency (Application No. 0356/01PV) as well as to the International Bureau of World Intellectual Property Organization (WIPO) (Application No. WO03002131) [11]. L. fermentum ME-3 was used as freeze-dried powder in capsulated form and in fermented milk.\n\nBODY.METHODS.FORMULATIONS.CAPSULES:\nGelatine coated capsules were manufactured by the Tallinn Pharmaceutical Company. The freshly prepared probiotic capsules contained 9.0 logs CFU of L. fermentum ME-3 per capsule in addition to 250 mg of saccharose and microcellulose. Identical placebo capsules contained only saccharose and microcellulose. All capsules were stored at +4°C.\n\nBODY.METHODS.FORMULATIONS.SURVIVAL OF ME-3 IN CAPSULE:\nSurvival of ME-3 in capsule was monitored during 12 months at +4°C. The content of one capsule was dissolved aseptically in 2 ml of 0.9% NaCl solution. The suspension was vortexed, serially diluted and seeded 0.1 ml on de Man-Rogosa-Sharpe (MRS) agar medium (OXOID, U.K.) 
and incubated 48 hours at 37°C microaerobically (10% CO2). The number of colonies was counted and the viable cell count in capsule was calculated.\n\nBODY.METHODS.FORMULATIONS.EXPERIMENTAL FERMENTED MILK:\nThree different lots of experimental fermented goat milk were prepared for the 3-week trial with healthy volunteers in order to establish the health effects of ME-3 consumption. The study group was supplied with fresh product once a week. Experimental fermented milk was prepared as described previously [16] by combining the probiotic strain with two supportive lactobacilli cultures L. plantarum LB-4 and L. buchneri S-15. L. buchneri strain S1-5 decreased the specific taste of the goat milk. L. plantarum LB-4 was included as a strong producer of exopolysaccharides, which gives the fermented milk a cream-like consistence and delightful acidity. The goat milk was inoculated with 2% mixture of Lactobacillus strains and incubated at 37°C for 24 hours. The product, ready to use, was cooled and stored at 4°C.\n\nBODY.METHODS.FORMULATIONS.SURVIVAL OF L. FERMENTUM ME-3 IN FERMENTED GOAT MILK:\nTo measure the viable cell count of ME-3 in fermented goat milk, samples were taken at the end of fermentation (before cooling the product), and after 24 h, 32 h, 48 h and 7 days from the preparation, when the product was stored at 4°C. The amount of 0.5 ml of the fermented milk was serially diluted in saline and plated on MRS agar medium and incubated for 48 h at 37°C in microaerobic conditions.\n\nBODY.METHODS.DESIGN OF HUMAN VOLUNTEER TRIALS:\nTwo healthy volunteer trials (n = 45), namely an open placebo controlled (OPC) study and a double blind randomized placebo controlled (DBRP) study, were carried out to evaluate the functional efficacy of L. fermentum ME-3 in the human body. The inclusion criteria included the wish to participate, no known health problems, and no medical conditions requiring drug therapy, no other yogurts or no special diets. The subjects with a history of GIT disease, food allergy and acute infection, use of any antimicrobial agent within the last month or use of any regular concomitant medication including antioxidant vitamins and anti-inflammatory non-steroidal drugs were excluded. The members of the trial were daily questioned about their general welfare, intestinal function (general welfare, gut gas production, stool frequency) and putative adverse effects. The withdrawal criteria from the trials included acute infections during the study. Reasons for dropout were the unwillingness to proceed with the study or relocation to a new area. The blood samples (6 ml) from the antecubital vein, faecal and urine samples were collected before and at the end of all clinical trials. Participants of all trials gave informed consent to the study protocols approved by the Ethical Committee of Tartu University.\n\nBODY.METHODS.DESIGN OF HUMAN VOLUNTEER TRIALS.OPEN PLACEBO CONTROLLED FERMENTED GOAT MILK TRIAL:\nThe study participants were 5 men and 16 women, mean age 50 years (range 35–60). During three weeks of the trial the study group (3 males and 13 females) consumed daily 150 ml fermented goat milk. The daily dose of probiotic Lactobacillus strain was 11.2 to 11.8 log CFU per person. The control group (1 male and 4 females) consumed the same dose of fresh goat milk.\n\nBODY.METHODS.DESIGN OF HUMAN VOLUNTEER TRIALS.PROBIOTIC CAPSULE TRIAL:\nA DBRP study (ISRCTN 53164826) was carried out as follows. 
The study group consisted of 15 men and 9 women, mean age 52 years (range 40–60) allocated according to their wish to participate and randomly divided by an independent person and computer program for two groups. The study group members (8 males and 4 females) took three probiotic containing capsules (8.4 log CFU per capsule) two times daily (the daily dose 9.2 log CFU) during three weeks. The placebo group (7 males and 5 females) received identical capsules without the probiotic strain. Faecal samples of all participants to assess change in faecal lactoflora and the persistence of the ingested probiotic strain were collected before and at the end of trial. Several laboratory indices of blood and urine were measured before and after the consumption of ME-3. Here we report on changes in human body oxidative stress markers as total antioxidative activity (TAA), total antioxidative status (TAS) and glutathione red-ox ratio (GSH/GSSG) from blood serum.\n\nBODY.METHODS.MICROBIOLOGICAL ANALYSES OF FAECES:\nThe total count of lactobacilli and the count of L. fermentum were evaluated in faecal samples. The faecal samples were collected at day 0 and 21 in both trials. Samples were kept at -80°C before analyzed. Serial dilutions (10-2–10-9) of the weighed faecal samples were prepared with phosphate buffer (pH 7.2) and 0.05 ml of aliquots was seeded onto MRS agar medium [17]. The plates were incubated at 37°C for 4 days microaerobically in 10% CO2 environment (incubator IG 150, Jouan, France). The catalase negative colonies were selected on the basis of typical for LAB colony morphology, cells microscopy and Gram staining. The count of Lactobacillus species was expressed in log10 colony forming units per gram faeces (log10 CFU/g) and percentage (relative share) in the total count of lactobacilli. The detection level of lactobacilli was a 3.0 log CFU/g faeces. The relative amount of L. fermentum, colonizing the gastrointestinal tract of persons in the study groups was expressed as a proportion of the total count (%), using the Bioquant program [18]). The program gives output data for every microorganism as an absolute count (log10 CFU/g) and their percentage in the total count with its normal values.\n\nBODY.METHODS.AP-PCR TYPING:\nThe putative ME-3 isolates were typed by arbitrarily primed polymerase chain reaction (AP-PCR). Genomic DNA was extracted from 24 h old cultures, cultivated on MRS agar microaerobically with the QIAamp DNA Mini Kit 50 (QIAGEN GmbH., Hilden, Germany) according to the manufacturers instructions. AP-PCR typing was done with two primers: ERIC1R (5'-ATGTAAGCTCCT GGGGATTCAC-3') and ERIC2 (5'-AAGTAAGTGACTGGGGTGAGCG -3') (DNA Technology A/S, Aarhus, Denmark). A 30 μl volume of reaction mixture consisted of 10 × PCR buffer (Fermentas, Vilnius, Lithuania), 2.5 mM MgCl2 (Fermentas, Vilnius, Lithuania), 200 μM deoxynucleoside triphosphate mixture (dATP, dGTP, dTTP and dCTP, Amersham Pharmacia Biotech, Freiburg, Germany) 0, 60 μg of each primer and 2.5 U Taq DNA Polymerase (Fermentas, Vilnius, Lithuania,) and 5 μl of extracted DNA according to Matsumiya et al. [19]. The PCR mixture was subjected to thermal cycling 35 cycles of denaturation at 95°C for 1 min, annealing at 35°C for 1 min, and extension at 74°C for 2 min, with a final extension at 74°C for 5 min with the PTC-200 thermal cycler (Eppendorf AG, Hamburg, Germany). 
The PCR products were separated by electrophoresis in a horizontal 2% agarose gel containing 0.1 μl/ml ethidium bromide in Tris-acetic acid-EDTA (TAE) buffer (40 mM Tris, 20 mM boric acid, 1 mM EDTA, pH 8.3) (Bio-Rad Laboratories, Hercules, USA) at constant voltage of 120 V. A 1 kb ladder (GeneRuler, Fermentas, Vilnius, Lithuania) was used as a base pair size marker. The banding patterns of isolates were visualized with UV light and compared with that of L. fermentum ME-3 strain.\n\nBODY.METHODS.MEASUREMENT OF HUMAN BODY OXIDATIVE STRESS STATUS:\nBlood serum was analysed for total antioxidative activity TAA, total antioxidative status TAS and glutathione red/ox ratio (oxidized glutathione and reduced glutathione, GSSG/GSH). TAA of the serum was assessed by the linolenic acid test (LA-test) described previously [16]. This test evaluates the ability of the sample to inhibit lipid peroxidation. TAS of the serum was measured with a commercially available kit (TAS, Randox Laboratories Ltd. Ardmore, UK) as described elsewhere [16], water-soluble vitamin E (Trolox) serving as a standard. This method is based on the inhibition of the absorbance of the ferrylmyoglobin radicals of 2,2'-azinobis-ethylbenzothiazoline 6-sulfonate (ABTS+) generated by activation of metmyoglobin peroxidase with H2O2. The cellular oxidative stress markers as total glutathione and oxidized glutathione were measured using the method of Griffith [20] as described elsewhere [16]. The glutathione content was calculated on the basis of a standard curve generated with known concentration of glutathione. Amount of GSH (μg/ml) was calculated as a difference between the total glutathione and GSSG (total glutathione – GSSG). The glutathione red/ox ratio was expressed as GSSG/GSH.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nThe computer program Sigma Stat for Windows 2.0 (Jandel Corporation, USA) was applied. The counts of faecal lactoflora were compared by using Student's t-test and Mann-Whitney rank sum test. Changes in oxidative stress markers of blood sera (TAA, TAS and glutathione red-ox ratio) were evaluated by Student's t-test, paired t-test and Mann-Whitney rank sum test. The choice of tests was made automatically according to the distribution of the data. Both microbial and biochemical markers were given as mean and standard deviation. One-way ANOVA test was performed to compare the effect of different formulation on TAA, TAS and faecal lactoflora parameters. Differences were considered statistically significant if the value was p < 0.05.\n\nBODY.RESULTS.SURVIVAL OF ME-3 IN FORMULATIONS:\nIn capsule after approximately 1-log drop after one week from the production of the capsules, the viable count of the probiotic strain remained stable at the level of 8.4 log CFU per capsule. Additional results have shown at +4°C the stability of the freeze-dried capsulated culture at least 17 months from the production. In fermented goat-milk the cell count of the probiotic strain varied insignificantly from 9.0 to 9.7 log CFU/ml from one preparation to the other. The viable count of ME-3 in the fermented goat milk was found to remain stable at least during 7 days of storage at 4°C.\n\nBODY.RESULTS.HUMAN VOLUNTEER TRIALS:\nNo dropouts were registered during volunteer trials, yet one participant was withdrawn from the probiotic capsule trial due to acute respiratory viral infection. 
Besides, no adverse affects in general welfare or changes in GI functionality were assessed during the trial.\n\nBODY.RESULTS.HUMAN VOLUNTEER TRIALS.CHANGES IN TOTAL LAB COUNT:\nThe consumption of both ME-3 fermented milk and ME-3 capsule significantly increased the total count of lactobacilli in faeces as compared to the initial levels (Fig. 1). In opposite, in the group of volunteers consuming non-fermented goat-milk there was even a decrease in total LAB counts during the 3-week trial and no changes were found in capsule placebo group. Figure 1Increase of total fecal counts of lactobacilli in healthy volunteers consuming of ME-3 in fermented goat milk and probiotic capsule. 1 – Day 0, 2 – Day 21 Significantly different from pre-treatment values (Student's t-test): * p < 0.05; Significantly different from control (Student's t-test): ‡ p = 0.01\n\nBODY.RESULTS.HUMAN VOLUNTEER TRIALS.RECOVERY OF THE PROBIOTIC STRAIN:\nIn goat milk group L. fermentum as a species appeared in fecal samples of all individuals (n = 16) after consumption of fermented goat milk (Table 1). The AP-PCR confirmed the recovery of ME-3 in the faeces of all study group members (Fig. 2). However, in different trials the administration of ME-3 strain did not lead to the predominance of Lactobacillus fermentum species (Table 1).In the probiotic capsule trial the strain ME-3 was not detectable between L. fermentum isolates by AP-PCR. Table 1 Changes in fecal recovery of L. fermentum during healthy human volunteer trials L. fermentum Groups * Prevalence (%) † Count (log10) ‡ Proportion (%) Day 0 Day 21 Day 0 Day 21 Day 0 Day 21 Goat milk trial, ME-3 (n = 16) 25 (4/16) 100 (16/16) 7.0 ± 0.7 7.3 ± 1.4** 21 13 Control (n = 5) - 20 (1/5) - 3.6 - 28 Capsule trial, ME-3 (n = 11) 16.7(2/12) 33.3 (2/12) 4.3 ± 0.5 5.8 ± 1.6 4 9 Placebo (n = 12) 25 (3/12) 16.7 (2/12) 6.3 ± 2.5 8.0 ± 1.6 11 19 * Percentage of subjects with fecal L. fermentum inside the group ** Significantly different from the pre-treatment values (paired t-test): p < 0.001 † Median value ± SD ‡ Proportion of L. fermentum among fecal LAB Figure 2Confirmation for the survival of L. fermentum ME-3 in GIT in subjects receiving ME-3 fermented goat milk by AP-PCR in a horizontal 2% agarose gel. a) From the left: M – DNA 1 kb Ladder, Line 1 -L. bervis ATCC 14869, Line 2 – L. buchneri ATCC 4005, Line 3 – L. reuteri DSM 20016, Line 4 – L. fermentum ATCC 14931, Line 5 – L. fermentum ME-3 b) From the left: M – molecular weight marker, Line 1 ...16 – ME-3 like profiles from feces of goat milk trial study group participants\n\nBODY.RESULTS.HUMAN VOLUNTEER TRIALS.ANTIOXIDATIVE HEALTH EFFECT OF ME-3:\nThe positive effect on the blood ox stress markers as TAA and TAS was seen in the case of both formulations (Fig. 3). Particularly, the additive increase in goat milk group was 6% and 9% respectively as compared to control group, however only 4% for TAA and 2.5% for TAS in probiotic capsule study group as compared to placebo. Figure 3Effect of ME-3 consumption in fermented goat milk and capsules on human blood oxidative stress markers a) TAA (%) and b) TAS(mmol/l). 1 – Day 0, 2 – Day 21 Significantly different from pre-treatment values: *p < 0.05 (paired t-test); **p ≤ 0.01 (Student's t-test and paired t-test);ME-3 goat milk effect different from the effect of the ME-3 in capsule-form (ANOVA): ‡p ≤ 0.001 The effect of goat-milk consumption on the TAA and TAS values was significantlyhigher (p < 0.001) than by the consumption of the capsulated probiotic (Fig. 3). 
The decrease of the glutathione red-ox ratio was significant in both groups: the study Group (from 0.15 ± 0.01 to 0.11 ± 0.04 μg/ml, p < 0.01) and control (from 0.14 ± 0.03 to 0.11 ± 0.02 μg/ml, p < 0.01) in the goat milk trial. When the probiotic was consumed in capsulated form, no significant decrease was noticed in the glutathione red-ox ratio.\n\nBODY.DISCUSSION:\nOur aim was to evaluate the functional efficacy of the antimicrobial and antioxidative probiotic L. fermentum ME-3 in normal population with variable food intake. First of all the safety of the L. fermentum strain ME-3 was confirmed as no adverse side effects were registered in volunteers. Even relatively high (>1011 CFU) doses of consumed ME-3 had no negative impact on the hosts' general well-being. Lactobacillus fermentum as species, used in various food applications, has a well-established history of safe use and is evaluated as GRAS according to the Food and Drug Administration of the USA [21]. Second, a clear improvement of laboratory indices of antioxidative defence system of a healthy host was documented, using both formulations as fermented by L. fermentum ME-3 goat-milk and probiotic capsules. This effect was simultaneous with the increase of intestinal lactoflora of healthy volunteers even without necessity for faecal recovery of the strain. In the human population, persons without clinical symptoms have still a quite different health status, including stability, capacity and potency of antioxidative defence to counteract sufficiently to oxidative stress-caused adverse effects [7]. If a probiotic is able to exhibit a positive functionality on oxidative stress-related indices, it helps both to stabilize and promote the potency of the whole body antioxidative defence system in subclinical situations without disease symptoms. That in turn may have an impact for lowering the risk of atherosclerotic damage of blood vessels associated with several cardiovascular and neurodegenerative diseases [22-24]. In our study of healthy volunteers for validation of the antioxidative functionality of probiotic, four well-known oxidative stress markers of blood were chosen [25-27]. The state of the lipid fraction (including also LDL) in the antioxidative defence system of the blood is evaluated by TAA. TAS on the other hand reflects more the antioxidativity of the water-soluble fraction of the human blood. Among the measured blood sera markers both the TAA and TAS values were also improved in the two different study groups. However, there was found a significantly lower improvement of TAA and TAS values in cases of capsule than fermented goat-milk where the recovery of the strains was assessed by AP-PCR. Similarly, the reduction of the glutathione red-ox ratio was detected after the consumption of fermented by ME-3 goat-milk but not with the capsule. The crucial non-enzymatic cellular antioxidant is GSH [28] present in the millimolar range mainly in the red blood cells, liver, pancreas, kidneys, spleen, eyes, lungs and intestinal cells [29]. The oxidized form of glutathione becomes even at low concentrations toxic, and therefore in the cells the glutathione red-ox ratio is kept as low as possible. In the case of inflammation this balance is shifted towards the oxidized form, indicating non-physiological intracellular oxidative stress. Thus, our study shows that there is a good association between the mode of formulation of probiotic and expression of its functional properties inside the healthy host. 
The antioxidative potential of the food supplement containing ME-3 was excellent, as reisolates of the strain from capsule expressed significantly higher TAA in comparison with the base values of the strain in vitro (data not shown). Unexpectedly, the shifts in the antioxidativity markers in blood serum of participants of the probiotic capsule trial were less pronounced in comparison with ME-3 fermented goat milk. Particularly, the explanation for more expressed positive shifts in oxidative stress markers of volunteers of the fermented goat milk trial could be due to the synergistic effect of the probiotic and the substrate. Milk is not just a carrier for the probiotic Lactobacillus strain, but contains natural \"lactogenic\" factors like lactose, minerals, vitamins and other components that enhance the metabolic activity of ingested probiotic strain in GIT. Both fermented goat milk and goat milk elevated the values of TAA and TAS. The goat milk contains different biomolecules (e.g. casomorphins, lactorphins, casokinins, etc) having certain antioxidative properties, which can contribute to consumers' plasma antioxidative capacity [30-36]. This was proved by some antioxidative effect also in persons consuming non-fermented goat milk. However, the elevation of these indices was remarkably more pronounced in the fermented goat milk group, thus the goat milk fermentation with L. fermentum ME-3 results in an additive increase of total antioxidativity. Therefore, the provisional FAO regulations [37] suggesting the need for health claims by specified formulations of probiotic seem to be of the utmost importance. Additionally, in our study with experimental fermented milk the average daily dose of L. fermentum ME-3 being 11.5 logs CFU was clearly higher than that of capsule (max 9.5 log CFU). It is possible that the dose exceeds the amount of bacteria necessary for interacting with intestinal mucosa and the unattached lactobacilli are excreted with faeces. The finding of Saxelin and colleagues confirmed that the faecal recovery of the probiotic strain started from the consumption of more than 9.0 log CFU daily doses of capsulated LGG [38]. To our surprise, in the present study the similar dose did not result in faecal recovery of the strain. It is possible that the ME-3 strain germinated mainly in some upper parts of intestinal tract where the advantageous conditions for survival and metabolic activity of probiotic lactobacilli were present. Using molecular tools, Marteau et al. showed that lactobacilli figuring only 7% of faecal microflora performed up to 30% of microbial communities in human colon [39]. If administered in lower quantities as in case of capsule trial, ME-3 did not reach the detectable level in faecal samples. Yet, its presence in gut was proved by the positive antioxidative health effect in blood but not in urine. Therefore it is understandable that the higher load of metabolically active probiotic bacteria in goat-milk resulted also in their faecal recovery and the highest impact on the oxidative stress indices. Moreover, in our study the positive impact of ME-3 consumption on the host lactoflora was proved by the increase of faecal lactobacilli counts in all participants of human volunteer studies. In experimental settings the high counts of intestinal lactobacilli have been shown as an important defensive factor against enteric infections [40,41]. 
Though up to now the period of consumption of probiotics has not been defined, the 3-week ingestion of fermented goat-milk and capsule seemed enough for reaching the aims. It is important to mention that after consumption of ME-3, a strain with high antagonistic activity, neither the species nor the strain predominated among total lactoflora. This shows a well-granted microbial balance inside the gut, which cannot be disturbed by high load of probiotic bacteria. Apparently, the interconnected advanced metabolism of large gut microbiota keeps the proportions of different species quite stable. Some other investigators have obtained similar results showing the proportional increase of different microbial populations (bifidobacteria, coliforms) after administration of Lactobacillus sp. probiotic [42,43]. Thus, the functional efficacy of different formulations of anti-infectious and antioxidative probiotic L. fermentum ME-3 were proved both by the increase of the lactobacilli counts providing putative defence against infectious agents in gut and by reduction of the oxidative stress indices of blood and urine of healthy volunteers. Further, studies evaluating the efficacy of ME-3 as adjunct to conventional therapy in patients with atherosclerotic damages and a high-grade oxidative stress are ongoing.\n\nBODY.CONCLUSION:\nIn non-diseased host, the probiotic health claims can be assessed by improvement of some measurable laboratory indices of well-established physiological functions of organism. In our case, the possibility for augmentation of the antioxidative defence system by the probiotic L. fermentum ME-3 in normal population can be proposed.\n\nBODY.COMPETING INTERESTS:\nMarika Mikelsaar, Mihkel Zilmer, Tiiu Kullisaar, Heidi Annuk (Hynes) and Epp Songisepp are sharing the Estonian patent application: no. EE 2001 00356 29.06.01 and International Patent application: no. WO03002131.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nEpp Songisepp, and Pirje Hütt have been in charge of the microbiological analysis. The former has been also in charge of analyzing the results and writing of the manuscript. Jaak Kals was responsible for performance and management of the volunteer trials. Tiiu Kullisaar has been in charge of the biochemical analysis and writing the manuscript. Mihkel Zilmer has conducted the biochemical estimations and writing the manuscript. Reet Mändar has been in charge of the molecular analysis and revising the manuscript. Marika Mikelsaar is the main conductor of the L. fermentum ME-3 research; for this paper she has been in charge of the clinical trial design and writing the manuscript.\n\n**Question:** Compared to Placebo what was the result of Capsule trial, ME-3 on glutathione red-ox ratio?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
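In the probiotic trial above, GSH is obtained as total glutathione minus GSSG and the glutathione red-ox ratio is expressed as GSSG/GSH; changes in the blood oxidative stress markers were evaluated with Student's t-test, paired t-test and the Mann-Whitney rank sum test. The short Python sketch below illustrates the ratio calculation and a paired within-group comparison; the helper name and all numeric values are purely illustrative placeholders, not data from the study.

```python
import numpy as np
from scipy.stats import ttest_rel

def redox_ratio(total_glutathione, gssg):
    """Glutathione red-ox ratio: GSH = total glutathione - GSSG; ratio = GSSG / GSH."""
    return gssg / (total_glutathione - gssg)

# Illustrative paired measurements for one group (day 0 vs day 21); not study data.
total_d0  = np.array([8.0, 7.5, 9.1, 8.4, 7.9, 8.8, 8.1, 7.7])
gssg_d0   = np.array([1.05, 0.95, 1.20, 1.10, 1.00, 1.15, 1.05, 0.98])
total_d21 = np.array([8.2, 7.6, 9.0, 8.5, 8.0, 8.7, 8.2, 7.8])
gssg_d21  = np.array([0.82, 0.75, 0.90, 0.85, 0.80, 0.88, 0.83, 0.78])

before = redox_ratio(total_d0, gssg_d0)    # ratios at day 0
after = redox_ratio(total_d21, gssg_d21)   # ratios after 3 weeks of consumption
t_stat, p_within = ttest_rel(before, after)  # paired t-test on the within-group change
print(f"mean ratio {before.mean():.3f} -> {after.mean():.3f}, paired t-test p = {p_within:.4f}")
```

For a between-group comparison such as capsule versus placebo, scipy.stats.mannwhitneyu on the individual day-21 changes would mirror the Mann-Whitney analysis described in the methods.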
901
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Efficacy and Safety of a Lidocaine Gel in Patients from 6 Months up to 8 Years with Acute Painful Sites in the Oral Cavity: A Randomized, Placebo-Controlled, Double-Blind, Comparative Study\n\n ABSTRACT:\nLidocaine is a well-accepted topical anaesthetic, also used in minors to treat painful conditions on mucosal membranes. This randomized, double-blind, placebo-controlled study (registered prospectively as EudraCT number 2011-005336-25) was designed to generate efficacy and safety data for a lidocaine gel (2%) in younger children with painful conditions in the oral cavity. One hundred sixty-one children were included in two subgroups: 4–8 years, average age 6.4 years, treated with verum or placebo and 6 months–<4 years, average age 1.8 years, treated only with verum. Pain reduction was measured from the time prior to administration to 10 or 30 minutes after. In addition, adverse events and local tolerability were evaluated. In group I, pain was reduced significantly after treatment with verum compared to placebo at both time points. In group II, the individual pain rating shift showed statistically significant lower pain after treatment. Only seven out of 161 patients reported an adverse event but none were classified as being related to the study medication. The local tolerability was assessed as very good in over 97% of cases. For painful sites in the oral cavity, a 2% lidocaine gel is a meaningful tool for short-term treatment in the paediatric population.\n\nBODY.1. INTRODUCTION:\nThe use of medicinal products in younger children, even if approved and accepted for decades in adults and older children, often takes place with some uncertainties on the parents' or physician's site. Younger patients not only are smaller and have less weight but also have at least partly different pharmacokinetic characteristics [1]. Although for topically applied and locally acting medicinal products the potential negative consequences of use are considered less important, controlled in-use-data is highly appreciated. Clinical trials according to accepted rules represent the highest standard of clinical evaluation to prove efficacy and safety but are difficult to perform in younger patients for various reasons [2]. Ethical aspects are more critical and also the design of a study is more demanding, for example, choice of reliable efficacy criteria, especially if laboratory tests or noninvasive measurements are not suitable. Pain perception shows a high degree of individual variation in intensity, onset, and degree of toleration [3]. In patients not reliably able to specify pain perception, due to their age, it can be very difficult to measure whether a pain medication was successful, especially when the pain is mostly moderate and local. 
Lidocaine is a well-established and approved local anaesthetic alone and in combination used in a variety of topically applied preparations, mostly as medicinal product [4]. It is also used to treat painful oral conditions in children, like aphthous ulcers, blisters, teething, or gingivitis. The objectives of this randomized, double-blind, placebo-controlled, phase IV study were to determine the efficacy and safety of a gel with 2% lidocaine in children from 6 months up to 8 years with painful conditions in the oral cavity as a meaningful tool for short-term treatment in the paediatric population. The verum preparation used is available in Germany for short-term, symptomatic treatment of pain on the oral mucosa, on the gums, or on lips. The dosing recommendation for children is up to 4 times daily as a pea size piece of gel (approximately 0.2 g gel or 4 mg lidocaine hydrochloride).\n\nBODY.2. PATIENTS AND METHODS:\nThis randomized, double-blind, placebo-controlled, single-centre, phase IV clinical trial (EudraCT number 2011-005336-25) was initiated after approval by the responsible ethics committee of the Medical Chamber of the German Federal State of Thuringia and by the German National Health Authority (BfArM). The study was conducted in accordance with the World Medical Association Declaration of Helsinki 2013, the Guidelines of ICH Good Clinical Practice (GCP) as well as the requirements of German national drug and data protection laws. All patients and their parents or legal guardians were given sufficient time and necessary information to consider the benefits and risks of involving the children in the clinical trial before written informed consent was obtained and before assent was sought from the child. Two different age groups (I: 4–8 years and II: 6 months–<4 years) were included. Children with painful site/s in the oral cavity aged 4 years or older were randomly assigned to be treated either with verum or with placebo, whereas children younger than 4 years were only treated with verum.\n\nBODY.2. PATIENTS AND METHODS.2.1. STUDY MEDICATION, DOSAGE, AND ADMINISTRATION:\nThe 2% lidocaine gel (commercially available as Dynexan Mundgel, Kreussler Pharma, Germany) and the placebo preparation (identical composition without lidocaine, both products filled in identical 20 mg tubes to guarantee blindness) were locally applied once by or under supervision of a member of the study staff to the painful site/s as a pea size amount of gel (0.2 g gel) corresponding to 4 mg lidocaine hydrochloride in case of verum. The whole study took place at a clinical trial facility, specialized in conducting clinical studies with children. Different patient recruitment arrangements, like advertisements in the local newspaper, referrals by paediatricians, and distribution of flyers, were performed to enlarge the number of suitable patients, after obtaining approval by the ethics committee and health authority.\n\nBODY.2. PATIENTS AND METHODS.2.2. PAIN MEASUREMENT:\nThe painful sites were defined as open breaks of the oral mucosa, lips, or tongue as a symptom of a variety of mild diseases, or trauma, for example, bites, aphthous ulcer, mouth blisters, cold sores, partial gingivitis, hand-foot-mouth disease, herpes simplex, teething, and trauma from dental braces. The primary efficacy variable was the pain reduction from T1 (prior to administration) to T2 (10 ± 5 min after application) as assessed by the Wong-Baker FACES Pain Rating Scale (Figure 1). 
This scale is a self-report measure used to assess the intensity of children's pain. The rating scale was used, because of its suitability for children from 3 years of age on as well as its appropriateness in pre- or nonverbal children. Children were instructed to indicate their pain by pointing to one of the faces. If the child was not old enough and/or not able to use this scale, the assessment of the children's pain by the parents on the same scale was used instead. A secondary efficacy variable was pain reduction from T1 (prior to administration) to T3 (30 ± 10 min after application, Wong-Baker FACES Pain Rating Scale). This \"mixed\" analysis of children's and parents' assessments was performed to consider the child's assessment as often as possible according to the protocol and statistical analysis plan (SAP). To investigate whether this approach was reliable, an additional analysis was performed with assessments of the same person (either child, if available from T1 and T2 (or T3), or parent, if the child's assessment was missing at one or two points of time). Additionally, assessment of satisfaction (5-point verbal rating scale with \"very unsatisfied,\" \"somewhat unsatisfied,\" \"slightly satisfied,\" \"satisfied,\" and \"very satisfied\") and assessment of local tolerability by the investigator (\"poor,\" \"moderate,\" \"good,\" and \"very good\") were evaluated.\n\nBODY.2. PATIENTS AND METHODS.2.3. SAMPLE SIZE ESTIMATION:\nSample size calculation was performed within a parametric test scenario. In age group I, a sample number of 64 subjects per treatment arm was considered sufficient to obtain reliable data (assumptions: difference to be detected: one scale point, alpha-level: 0.05, power: 80%, and standard deviation: two scale points). For age group II, a sample size of 32 patients was chosen. In order to recruit 160 evaluable patients, about 222 subjects were planned to be screened, due to an assumed screening failure rate of 15% and a drop-out rate of 15%.\n\nBODY.2. PATIENTS AND METHODS.2.4. STATISTICAL ANALYSES:\nThe statistical analysis was conducted in compliance with the ICH E9 Note for Guidance on Statistical Principles for Clinical Trials. The efficacy variables were analysed by using the Mann-Whitney U test (differences between groups) and the Wilcoxon test (differences within groups, changes over time), whereby the null hypothesis of no difference between both groups was tested. To evaluate the robustness of results, additional analyses were applied (ANCOVA with baseline values as covariable and sensitivity analyses with several variants of assessments, e.g., using parents' instead of children's assessments). All analyses were performed using Statistical Analysis System software, version 9.2 or higher (SAS Institute Inc., Cary, NC, USA).\n\nBODY.3. RESULTS:\nStudy data were collected between May 21st, 2012, and June 14th, 2014, from 129 patients (age group I) and additional 32 patients (age group II). Within age group I, 63 patients were treated with 2% lidocaine gel (verum) and 66 patients with placebo. No patient dropped out of the study (Figure 2).\n\nBODY.3. RESULTS.3.1. BASELINE PATIENT CHARACTERISTICS:\nIn age group I, 54 male (verum N = 30, placebo N = 24) and 75 female subjects (verum N = 33, placebo N = 42) were included. In age group II, both genders were equally distributed (16 male and 16 female patients). The average age in group I was 6.4 ± 1.4 years, ranging from 4.0 to 8.9 years. 
In age group II, the mean average age was 1.8 ± 0.9 years, ranging from 6 months to 4.0 years. Except for one Black and one Latvian-Korean patient, all other patients were Caucasians. In age group I aphthous ulcer (36.0%) and in age group II teething (63.6%) were the main cause of painful sites (Table 1). The mean average size of the painful sites was 19.9 ± 15.7 mm2, ranging from 5 mm2 to 150 mm2. Gingiva and/or oral mucosa were affected in the majority of cases (age group I: 92.7%, age group II: 81.6%), whereas lips or tongue were involved in the remaining cases. The parents in age group I often reported grouching (20.2%), changes in behaviour (24.1%), and poor feeding (26.0%), whereas in age group II weeping (14.2%), crying (13.5%), grouching (18.4%), and sleepiness (18.4%) were often reported.\n\nBODY.3. RESULTS.3.2. PAIN ASSESSMENT:\nThe self-rating by the children of age group I could be used in 107 cases (verum N = 53, placebo N = 54) at T1 (prior to administration) and in 109 cases (verum N = 57, placebo N = 52) at T2 (10 ± 5 min after application). The assessments by parents were used in 22 cases at T1 and in 20 cases at T2, because of children's lack of ability to rate their pain properly or showing signs of unwillingness or lack of concentration. For the secondary efficacy parameter (change of pain rating from T1 to T3 (30 ± 10 min after application)), the self-rating by the children was analysed in 108 patients (verum N = 56, placebo N = 52) at T3. The assessments by parents were used for 21 patients at T3.\n\nBODY.3. RESULTS.3.3. EFFICACY RESULTS:\nOn average (median) pain has been reduced by 2.0/5 (verum) versus 1.2/5 (placebo) (Table 3). Nonparametric analysis (Mann-Whitney U test) of treatment related difference in pain assessment yielded a statistical significance (p < 0.001) of the observed effect in favour of the 2% lidocaine gel (verum). A similar result (p < 0.002) was achieved also for the measurement of pain reduction from T1 to T3. These results were reassured also if the assessment of the same person (either child or parent) was evaluated at both endpoints. The children's satisfaction as assessed by parents on a 5-point verbal rating scale one hour after administration tends to yield a higher proportion of verum patients showing satisfaction, although observed differences compared to placebo arm did not reach a statistical significance at α = 0.05 level (χ\n2-Test including age group II subjects: p value = 0.060; χ\n2-Test age group I only: p value = 0.259). Age group II (aged <4 years) patients were all treated with lidocaine 2% gel (verum). In order to assess the efficacy, the individual shift in pain rating had been evaluated. There were no patients with a worsening of pain, 4 patients (12.5%) without a change, 13 patients (40.6%) with an improvement of one, 7 patients (21.9%) of two, 5 patients (15.6%) of three, and 3 patients (9.4%) of four categories in pain rating on T2 compared to T1 (Table 3). This improvement in pain rating was even more pronounced at T3, with 16 patients (50%) indicating \"No Hurt,\" 13 patients (40.6%) indicating \"Hurts Little Bit,\" and 3 patients (9.4%) indicating \"Hurts Little More\" (Table 2). The observed ratings consequently showed statistically significant reduction of pain (Wilcoxon signed rank test results; T2: p value < 0.0001 and T3: p value < 0.0001).\n\nBODY.3. RESULTS.3.4. 
SAFETY EVALUATION:\nSeven out of 161 patients (4.4%) reported in total 7 adverse events (worsening of hand-mouth-foot disease, stomach ache, minimal lesion of the upper lip after a fall, bronchitis, single loose bowels, stabbing pain mouth due to aggravating hand-mouth-foot disease, and laceration of the gingiva due to a fall). Two adverse events were registered in each treatment arm in age group I and three adverse events in age group II after verum application (no placebo patients treated). None of these adverse events were assessed as related to study drug. For most patients (post study examination: 97.5%; one day after the study day: 98.1%) the local tolerability has been rated as \"very good.\"\n\nBODY.4. DISCUSSION:\nIn general, clinical studies with younger patients are more difficult to conduct not only because of the regulatory situation, but also because of the lower motivation of the children and their parents to participate and less ability of the young patients to understand the study principles [2, 5]. For minor to moderate and usually short-lasting pain situations, the psychological strain is weak, which affects the willingness to participate in a placebo-controlled clinical study, because the benefit seems not to be substantially relevant. Pain perception in children as a primary efficacy criterion is ambitious and the choice of a suitable method is important [6–8]. Most children aged 4-5 years or older can provide meaningful self-reports of pain intensity, depending on training and provision with age-appropriate tools. Children younger than 4-5 years have a tendency to use only the extremes of a scale. Other response sets or biases include giving the same answer to all questions and responding in an upward or downward sequence to successive questions. Pain rating may be affected by their perception of the consequences of the rating, for example, reward for being brave and fear of getting a syringe. Many other cognitive, social, and cultural factors influence pain ratings (e.g., previous experiences, temperament, presence of an audience, and family and community norms for pain expressions) making them idiosyncratic and difficult to interpret [6]. The assessment of efficacy criteria in studies investigating moderate pain relies on the self-assessment of pain. Patient independent criteria, like lab test or ultrasound, cannot be used. Topically applied locally acting medicinal products are more difficult to investigate than systemically acting products. Self-healing properties and placebo effects are presumably higher. For local skin or mucous membrane defects, the placebo formulations by themselves may have a healing or protecting effect. To solve these and other obstacles, we performed this study in a clinical trial facility and not in private practices, made sufficient patient enrolment possible with a variety of recruitment arrangements, excluded younger patients from the possibility to receive a placebo, and used an accepted pain rating scale for children, the Wong-Baker FACES Pain Rating Scale [9]. The participating children were trained and asked by the blinded study personnel in a language the child is familiar with. Additionally, to substitute and fortify the child assessment, the parents or legal guardians were asked to indicate on the same scale, how they consider the child's pain. More than 80% of children in group I (4–8 years) were able to rate pain by themselves whereas for the remaining children self-reporting could not be used. 
It is interesting that on average children in this age group rated their pain slightly lower than their parents. However, it should be realized that comparing the child's face with the expression on the faces picture is indirect rating and may be sensitive to bias. As expected and anticipated in age group II mostly only the parents' assessment of how they consider the level of the children's pain before and after administration could be evaluated. These children were too young to allow the use of the pain scale. The medicinal product used in this study is authorized and on the market in Germany and France for many years. In Germany, it can be applied to children without age restriction. In a recently published clinical study in patients between 6 years and 15 years with pain because of clamp placement, oral trauma, or aphthous ulcers, the identical medicinal product (also due to regulatory circumstances called a cream in France) showed significant superiority compared to the respective placebo formulation [10]. In the present study, significant superiority could be measured in children even down to 4 years of age. Additionally descriptive efficacy and safety data in children between 6 months and below 4 years were obtained, including patients with teething pain, one of the major sources of oral pain in children up to 2 years of age. Lidocaine alone or in combination with another local anaesthetic is a well-known product for local pain reduction [11, 12]. In different clinical studies conducted with children, the efficacy of topically applied lidocaine could be demonstrated [13–16]. But not in every study could the expected positive outcome be achieved, like in a recent study with children from 6 months up to 8 years suffering from acute infectious ulcerative mouth conditions analysing the effect of a 2% lidocaine viscous solution in comparison to a placebo on the amount of ingested fluid within 60 minutes of administration [17]. At least such an efficacy criterion seems questionable to judge the effectiveness of lidocaine. Seven out of 161 subjects (4.4%) reported in total 7 adverse events (in age group I 2 adverse events in the verum and the placebo arm each, in age group II 3 adverse events after verum treatment and no patients treated with placebo). None of these adverse events were assessed as study drug related, clearly demonstrating the safety of the product.\n\nBODY.5. CONCLUSIONS:\nThe 2% lidocaine gel, as successfully tested in our clinical trial in comparison to a placebo gel with the same composition without lidocaine, is an adequate tool to reduce or prevent pain on the oral mucosa or gingivae, like bites, aphthous ulcers, mouth blisters, cold sores, gingivitis, hand-foot-mouth disease, dental braces, or teething in younger children.\n\n**Question:** Compared to placebo what was the result of lidocaine gel (2%) on On average (median) pain in the paediatric population?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decreased
|
903
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: A dose-ranging study of the bronchodilator effects of abediterol (LAS100977), a long-acting β\n\n ABSTRACT.BACKGROUND:\nLong-acting β2-adrenergic agonists (LABAs) are recommended in combination with inhaled corticosteroids (ICSs) for asthma management. Abediterol is a novel, selective, potent, once-daily LABA in development for treatment of asthma and chronic obstructive pulmonary disease. This study aimed to determine abediterol doses with similar peak bronchodilatory effect to salbutamol 400 μg, and duration of action compatible with once-daily dosing in patients with persistent, stable asthma.\n\nABSTRACT.METHODS:\nThis was a Phase II, randomized, double-blind, double-dummy, crossover, placebo-controlled, dose-ranging study (ClinicalTrials.gov NCT01425801) in 62 patients with mild-to-moderate asthma who were also receiving an ICS. Patients received single doses of abediterol 0.313, 0.625, 1.25, or 2.5 μg, salbutamol 400 μg, or placebo in the morning. Spirometry was performed up to 36 h post-dose; safety and tolerability were assessed throughout the study. The primary endpoint was change from baseline in peak forced expiratory volume in 1 s (FEV1). Additional endpoints included trough FEV1, normalized area under the FEV1 curve (FEV1 AUC) up to 24 h post-dose, and peak and trough forced vital capacity (FVC).\n\nABSTRACT.RESULTS:\nAbediterol produced dose-dependent improvements in peak FEV1 from baseline compared with placebo, from 0.274 (95% CI 0.221, 0.327) to 0.405 L (95% CI 0.353, 0.458) for abediterol 0.313 to 2.5 μg, respectively (p < 0.0001 all doses). Abediterol 0.625, 1.25, and 2.5 μg had similar magnitude of peak FEV1 effect to salbutamol. Dose-dependent changes from baseline in trough FEV1 versus placebo were 0.219 (95% CI 0.136, 0.302) to 0.400 L (95% CI 0.317, 0.483) for abediterol 0.313 to 2.5 μg, respectively (p < 0.0001). All abediterol doses achieved significant improvements versus placebo in FEV1 AUC 0–6, 0–12, and 0–24 h, and peak and trough FVC (p < 0.05). Less than 10% of patients experienced treatment-related adverse events for each dose of abediterol; most were mild to moderate in intensity and the most common were headache and nasopharyngitis. There were no clinically relevant changes in heart rate.\n\nABSTRACT.CONCLUSIONS:\nAbediterol 0.625–2.5 μg provided dose-dependent, clinically and statistically significant bronchodilation versus placebo in patients with asthma, with a peak effect similar to salbutamol and duration of action compatible with once-daily dosing. 
All doses of abediterol were well tolerated.\n\nABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL:\nThe online version of this article (doi:10.1186/1471-2466-14-176) contains supplementary material, which is available to authorized users.\n\nBODY.BACKGROUND:\nThe use of β2-adrenergic agonist bronchodilators in combination with inhaled corticosteroids (ICSs) to manage the symptoms of asthma is recommended by Global Initiative for Asthma (GINA) guidelines [1]. Currently licensed bronchodilators include salbutamol, a short-acting β2-adrenergic agonist (SABA) administered as required [2], salmeterol and formoterol, two long-acting β2-adrenergic agonists (LABAs) administered twice daily in combination with an ICS [3, 4], and the recently licensed vilanterol, a LABA administered once daily in fixed-dose combination with the ICS fluticasone furoate [5]. There is ongoing interest to develop once-daily medications to further simplify treatment regimens and improve patient compliance [6]. Abediterol is a novel, once-daily LABA in clinical development as a fixed-dose combination with an ICS for the treatment of asthma and chronic obstructive pulmonary disease (COPD) [7]. Abediterol displays high affinity for β2-adrenoceptors, plus a higher functional selectivity for β2-adrenoceptors over β1-adrenoceptors than formoterol, indacaterol, vilanterol and olodaterol in cellular models [7–9]. The potency of abediterol is also higher than salmeterol and indacaterol in isolated human bronchi and, in addition to these two LABAs, vilanterol and olodaterol in animal models [7, 10]. Abediterol has a rapid onset and prolonged duration of action [7, 10]. A Phase II clinical trial in patients with asthma provided results that were consistent with the preclinical data; abediterol resulted in significant improvements in lung function at 5 min post-dose that were greater than improvements seen with salmeterol, and significant bronchodilation was sustained up to 24 h post-dose [11]. These characteristics make abediterol an ideal candidate for once-daily dosing. Moreover, abediterol has a favorable cardiovascular safety and tolerability profile: in anesthetized dogs, abediterol had a lesser effect on heart rate compared with salmeterol, formoterol, and indacaterol at concentrations that produced comparable bronchodilation [7, 12]. In early phase clinical trials, doses of abediterol ≤10 μg were safe and well tolerated and did not demonstrate any clinically relevant effects on heart rate in healthy subjects [13], patients with asthma [14], or patients with COPD [15]. Drug regulatory authorities have concerns that excessively high LABA doses may cause unacceptable side effects [16]. The trough forced expiratory volume in 1 s (FEV1), measured 24 h after dosing, is an important endpoint to assess once-daily LABA efficacy [16]. The maximum effect achieved soon after dosing, otherwise known as the 'peak' effect, is also of interest for efficacy and safety reasons. A very high peak effect may indicate high levels of acute drug exposure that could predispose patients to β-agonist-mediated side effects, such as cardiovascular adverse events, when receiving chronic treatment [17]. Moreover, the ideal pharmacologic characteristic of a once-daily LABA is a low 'peak to trough ratio', ensuring relatively stable lung function throughout the day [18]. The peak effect of salbutamol is well characterized, and causes acute bronchodilation without safety concerns. 
The bronchodilator response following single-dose administration of salbutamol (4 x 100 μg) is commonly used as one of the diagnostic tests for asthma, as recommended by a joint European Respiratory Society (ERS)/American Thoracic Society (ATS) taskforce [19]. This effect is, therefore, a reasonable benchmark for a novel LABA to achieve and this degree of acute bronchodilation is considered clinically meaningful. A major regulatory agency recommended that the peak effects of abediterol should be compared to those of salbutamol to guide dose selection for further clinical trials in order to avoid a dose that induces a much greater degree of bronchodilation than is necessary for clinical effectiveness, which could increase the potential for safety concerns during chronic treatment. We report a placebo-controlled, dose-ranging, crossover study comparing the acute bronchodilator effects of abediterol to salbutamol in patients with persistent stable asthma. The primary aim of the study was to investigate the peak bronchodilatory effect of abediterol versus placebo and to confirm it was in the same range as that of salbutamol. The full profile of lung function over 24 h, including the trough FEV1, was also measured. The goal of this study was to enable the selection of abediterol doses for future long-term clinical trials, based on a peak effect similar to that of salbutamol 400 μg metered dose inhaler (MDI), coupled with a trough effect compatible with a 24-h duration of action. This trial design was a specific recommendation by a major regulatory agency.\n\nBODY.METHODS.STUDY DESIGN:\nThis was a Phase IIa, randomized, double-blind, double-dummy, 6-way crossover, single-dose, multicenter, dose-ranging study conducted between August 2011 and January 2012. The study was conducted in two sites in the UK and seven in Germany. The study protocol was approved by an independent ethics committee in each country (see Additional file 1) and complied with the Declaration of Helsinki and the International Conference on Harmonisation and Good Clinical Practice guidelines. Patients provided written, informed consent. Following a screening visit and a 12–16 day run-in period to assess clinical stability, participants were randomized to 1 of 6 treatment sequences according to a balanced Williams design for crossover studies (1:1:1:1:1:1). Block randomization was performed by the Sponsor, using a computer-generated schedule to assign a treatment sequence to each patient randomization number. The block size and randomization schedule were not communicated to investigators. Patients received abediterol 0.313, 0.625, 1.25, or 2.5 μg, salbutamol 400 μg (administered as four inhalations with 100 μg per inhalation), or placebo (at 09:00 h ±1 h) at each visit, with a washout period of 7–14 days. A follow-up telephone call to assess safety was made approximately 2 weeks following the final treatment visit or premature discontinuation. Abediterol and placebo to abediterol were administered via a multidose dry powder inhaler (Genuair®). Salbutamol and placebo to salbutamol were administered via a pressurized MDI (VentolinTM EvohalerTM). In order to preserve study blinding, independent, trained personnel prepared the medication, delivered the device to patients, and collected the device following inhalation.\n\nBODY.METHODS.PATIENTS:\nEligible participants were men or women aged 18–70 years with a clinical diagnosis of persistent asthma (according to GINA guidelines) for ≥6 months prior to screening. 
Eligible patients had an FEV1 >60% and ≤85% of the predicted normal value, with an FEV1 reversibility ≥12%, and an absolute increase in FEV1 of ≥200 mL over their baseline value following inhalation of salbutamol 400 μg at screening. Comparable FEV1 measurements were required at screening and pre-dose at each visit in order to ensure there were no carry-over effects from previous treatment periods (limit of variability ±20%). At enrollment, all patients were using only an ICS to control their asthma, and had been on a stable dose and regimen for at least 4 weeks prior to screening, up to the equivalent of 1600 μg/day of beclometasone dipropionate, and had not previously been exposed to abediterol. Exclusion criteria included: a smoking history within the past 6 months or >10 pack-years; presence or history of relevant pulmonary disease or any other clinically relevant disease or abnormality as judged by the investigator; difficult-to-treat asthma; any respiratory tract infection ≤6 weeks prior to screening; hospitalization or emergency room treatment for asthma ≤3 months prior to screening; treatment with β2-antagonists; and positive laboratory tests for illicit drugs. The use of anticholinergics, SABAs (other than salbutamol), LABAs, continuous oral or parenteral corticosteroids, methyl-xanthines, cromolyn sodium, nedocromil, leukotriene modifiers, anti-immunoglobulin E, β1-blocking agents, and other investigational drugs was prohibited during the study. Patients using prohibited medications underwent up to 6 weeks' washout before the screening visit. Salbutamol pressurized MDI (100 μg/puff) was the only reliever medication permitted and was used on an as-needed basis and avoided 6 h prior to and during a treatment visit unless absolutely necessary.\n\nBODY.METHODS.EFFICACY ASSESSMENTS:\nPulmonary function tests were conducted at screening, pre-dose, and at 0.25, 0.5, 1, 2, 3, 4, 6, 8, 12, 14, 23, 24, and 36 h post-dose at each treatment visit using standardized spirometers at each investigational site (eResearchTechnology, Philadelphia, PA, USA). Prior to the first spirometry measure, the technician demonstrated the procedure and allowed the patient two practice attempts. At each study time point, the maneuvers were repeated until three technically adequate measurements were made according to ERS/ATS guidelines for acceptability and repeatability [20], with a maximum of eight attempts allowed.\n\nBODY.METHODS.SAFETY AND TOLERABILITY ASSESSMENTS:\nTreatment-emergent adverse events (TEAEs), including serious adverse events, were recorded throughout the study and follow-up period. A physical examination was performed at screening and at 36 h post-dose following the last treatment period (Visit 6), and blood pressure and 12-lead electrocardiogram measures were made at screening, pre-dose, and at 0.75, 2, 6, 24 h post-dose for each treatment period, and 36 h post-dose at Visit 6. Clinical laboratory tests (blood chemistry, hematology, urinalysis, and a serum pregnancy test) were performed at screening and at 36 h post-dose at Visit 6. Blood glucose and serum potassium levels were assessed at screening, pre-dose, and at 4, 24 h post-dose for each treatment period, and 36 h post-dose at Visit 6.\n\nBODY.METHODS.STATISTICS:\nThe primary efficacy endpoint was change from baseline in peak FEV1 (defined as the highest FEV1 value observed within 4 h of the administration of the investigational medicinal product [IMP]) on Day 1. 
Secondary endpoints included change from baseline in peak forced vital capacity (FVC) on Day 1, trough FEV1 (defined as the highest average of the maneuvers performed at 23–24 h post IMP administration) and FVC on Day 2, change from baseline in FEV1 and FVC at each time point up to 36 h post-dose, and change from baseline in normalized FEV1 area under the curve (AUC; defined as the area between baseline FEV1 and the FEV1 curve, from 0 to × hours divided by × hours) at 0–6, 0–12, and 0–24 h post-dose. In accordance with the GINA guidelines, a change in FEV1 of 200 mL was considered clinically relevant [1]. A population of 48 patients was necessary to achieve 80% power to detect a 150 mL change from baseline in peak FEV1 for any dose of abediterol compared with placebo. Assuming a 15% withdrawal rate, the protocol required at least 54 patients to be randomized. All reported efficacy assessments were analyzed in the intention-to-treat population, defined as all randomized patients who took at least one dose of the IMP (placebo, abediterol, or salbutamol) and had at least one baseline and one post-dose FEV1 value for at least one treatment visit. Safety analyses are reported in the safety population, which consisted of all randomized patients who took at least one dose of the IMP. All statistical analyses were performed using Statistical Analysis System (SAS Institute Inc., Cary, NC, USA) version 9.1.3 software. Efficacy variables were analyzed using an analysis of covariance model for crossover designs, with fixed-effect factors of sequence, treatment, and period, patient within sequence as a random effect, and baseline FEV1 or FVC as a covariate, as appropriate. Between-treatments comparisons were performed by means of contrasts on the treatment factor. The difference between treatments was estimated by the difference between the least squares means. Statistical comparisons between active treatments and placebo were two-sided hypothesis tests with a significance level of 0.05.\n\nBODY.RESULTS.STUDY POPULATION:\nA total of 115 patients were screened, with 62 patients randomized (40/62 [64.5%] male, mean age 40.9 ± 11.2 years). Patient demographics are presented in Table 1. A diagram of patient flow is presented in Figure 1. All 62 patients were included in the intention-to-treat and safety populations. One protocol violation occurred when a patient was inadvertently given the treatment scheduled for Visit 6 at Visit 5, and vice versa. Data for this patient were analyzed according to the treatments received. Four patients withdrew or were discontinued from the study: two due to inability to comply with the study schedule, one who did not fulfill stability criteria, and one for unknown reasons (patient did not return for the premature discontinuation visit).Table 1\nPatient demographics at baseline: safety population\nPatients (N = 62)Age, mean no. years (SD)40.9 (11.2)Male gender, n (%)40 (64.5)Race, n (%) White58 (93.5) Other4 (6.5)Cigarette smoking status, n (%) Former smoker21 (33.9) Never smoked41 (66.1)Smoking history, mean no. pack-years (SD)4.6 (3.3)Asthma duration, mean no. 
years (SD)25.5 (13.2)Asthma severity, n (%) 60% ≤ FEV1 < 80% of predicted46 (74.2) FEV1 ≥80% of predicted16 (25.8)Bronchial reversibilitya, mean % (SD)21.3 (8.2)FEV1 absolute reversibilityb, mean L (SD)0.568 (0.261)\nacalculated as 100 x (FEV1 [post-bronchodilator] – FEV1 [pre-bronchodilator]/FEV1 [pre-bronchodilator]); bcalculated as FEV1 (post-bronchodilator) – FEV1 (pre-bronchodilator).FEV1, forced expiratory volume in 1 s; SD, standard deviation.Figure 1\nDiagram of patient flow.\n\n\nBODY.RESULTS.EFFICACY:\nMean baseline FEV1 values were similar across all treatment periods, ranging from 2.63 L to 2.69 L. The full profile of lung function after each treatment is shown in Figure 2. The onset of action of abediterol was rapid, with significant bronchodilation compared with placebo observed at the first study time point (15 min) post-dose (p < 0.0001 for all doses of abediterol versus placebo, Figure 2a). All doses of abediterol produced significantly greater improvements in peak FEV1 from baseline compared with placebo (p < 0.0001 for all doses, Table 2), and the magnitude of effect of abediterol 0.625, 1.25, and 2.5 μg was not significantly different from salbutamol 400 μg (Figure 3, Table 3). Least squares means differences in change from baseline in peak FEV1 versus placebo were 0.274, 0.322, 0.371, and 0.405 L for abediterol 0.313, 0.625, 1.25, and 2.5 μg, respectively, and 0.353 L for salbutamol 400 μg (Table 3). Median time to peak FEV1 was 3 h post-dose for abediterol (all doses), 2 h post-dose for placebo, and 1 h post-dose for salbutamol 400 μg.Figure 2\nChange from baseline in FEV\n1\nover time. Data are presented as least squares means with 95% confidence intervals: a) p < 0.0001 versus placebo for salbutamol and all doses of abediterol at all time points; b) p < 0.0001 for all doses of abediterol 0–24 h post-dose and abediterol 1.25 and 2.5 μg 36 h post-dose versus placebo, p = 0.002 and p = 0.01 for abediterol 0.313 and 0.625 μg, respectively, at 36 h post-dose versus placebo. FEV1, forced expiratory volume in 1 s. Table 2 \n Change from baseline in pulmonary function parameters (L): intention-to-treat population \n Placebo N = 59 Salbutamol 400 μg N = 58 Abediterol 0.313 μg N = 60 Abediterol 0.625 μg N = 60 Abediterol 1.25 μg N = 60 Abediterol 2.5 μg N = 61 Peak FEV 1 \n 0.202 (0.045) 0.555**** (0.045) 0.477**** †† (0.045) 0.524**** (0.045) 0.573**** (0.045) 0.608**** (0.045) Trough FEV 1 \n a \n -0.054 (0.039) -0.076 (0.039) 0.166**** †††† (0.039) 0.205**** †††† (0.039) 0.278**** †††† (0.039) 0.346**** †††† (0.039) Normalized FEV 1 AUC 0–6 \n 0.088 (0.039) 0.347**** (0.039) 0.354**** (0.038) 0.403**** † (0.038) 0.461**** †††† (0.038) 0.496**** †††† (0.038) Normalized FEV 1 AUC 0–12 \n 0.063 (0.040) 0.219**** (0.040) 0.348**** †††† (0.039) 0.393**** †††† (0.039) 0.455**** †††† (0.039) 0.496**** †††† (0.040) Normalized FEV 1 AUC 0–24 \n 0.007 (0.039) 0.100** (0.039) 0.282**** †††† (0.039) 0.321**** †††† (0.038) 0.390**** †††† (0.038) 0.446**** †††† (0.039) Peak FVC 0.222 (0.034) 0.361**** (0.034) 0.317** (0.034) 0.329*** (0.034) 0.361**** (0.034) 0.380**** (0.034) Trough FVC a \n -0.028 (0.035) -0.065 (0.035) 0.094** †††† (0.035) 0.050* †† (0.035) 0.111*** †††† (0.035) 0.120*** †††† (0.035) Data are least squares means (standard error). \n a trough values are the mean of the 23 and 24 h post-dose measurements. *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001 versus placebo; † p < 0.05, †† p < 0.01, †††† p < 0.0001 versus salbutamol 400 μg (analysis of covariance). 
AUC, area under the FEV 1 curve; FEV 1 , forced expiratory volume in 1 s; FVC, forced vital capacity. Figure 3\nChange from baseline in peak FEV\n1\nat Day 1. Data are presented as least squares means ± standard error: p < 0.0001 versus placebo for all doses of abediterol and salbutamol 400 μg; **p < 0.01, ***p < 0.001, ****p < 0.0001 (analysis of covariance). FEV1, forced expiratory volume in 1 s. Table 3 \n Comparison of peak FEV \n 1 \n for abediterol versus placebo and abediterol versus salbutamol \n Least–squares mean difference (standard error) 95% confidence interval Abediterol 2.5 μg vs. placebo 0.405 (0.027) 0.353, 0.458 Abediterol 1.25 μg vs. placebo 0.371 (0.027) 0.318, 0.424 Abediterol 0.625 μg vs. placebo 0.322 (0.027) 0.269, 0.375 Abediterol 0.313 μg vs. placebo 0.274 (0.027) 0.221, 0.327 Salbutamol 400 μg vs. placebo 0.353 (0.027) 0.299, 0.406 Abediterol 2.5 μg vs. salbutamol 400 μg 0.0529 (0.0270) -0.0002, 0.1061 Abediterol 1.25 μg vs. salbutamol 400 μg 0.0181 (0.0269) -0.0350, 0.0711 Abediterol 0.625 μg vs. salbutamol 400 μg -0.0308 (0.0269) -0.839, 0.0223 Abediterol 0.313 μg vs. salbutamol 400 μg -0.0783 (0.0269) -0.1313, -0.0253 Data included for comparisons of all doses of abediterol with placebo and with salbutamol. Statistically significant increases in FEV1 were sustained at 24 h post-dose compared with placebo (p < 0.0001 for all doses of abediterol 0–24 h post-dose, Figure 2b). Abediterol significantly improved trough FEV1 from baseline compared with placebo (p < 0.0001 for all doses; Figure 4). Least squares means differences in change from baseline in trough FEV1 versus placebo were 0.219 L (95% CI 0.136, 0.302), 0.259 L (95% CI 0.176, 0.342), 0.332 L (95% CI 0.249, 0.415), and 0.400 L (95% CI 0.317, 0.483) for abediterol 0.313, 0.625, 1.25, and 2.5 μg, respectively (Table 2). The effects of salbutamol on FEV1 were not sustained after 6 h post-dose, as expected for a SABA (Figure 2b).Figure 4\nChange from baseline in trough FEV\n1\nat Day 2. Data are presented as least squares means ± standard error: p < 0.0001 versus placebo for all doses of abediterol; **p < 0.01, ***p < 0.001, ****p < 0.0001 (analysis of covariance). FEV1, forced expiratory volume in 1 s. Change from baseline in normalized FEV1 AUC0–6, FEV1 AUC0–12, and FEV1 AUC0–24 was significantly higher for all doses of abediterol versus placebo (p < 0.0001 for all, Table 2), with dose-related effects observed. The change from baseline in normalized FEV1 AUC0–6 was significantly greater for abediterol 0.625–2.5 μg versus salbutamol 400 μg (p < 0.05, Table 2).\n\nBODY.RESULTS.EFFICACY.FVC:\nMean baseline FVC values were comparable across treatment periods, ranging from 4.23 L to 4.30 L. Abediterol produced significantly greater change from baseline in peak FVC compared with placebo (p < 0.01 for all doses, Table 2). The change from baseline in peak FVC was similar to that observed with salbutamol. Additionally, abediterol significantly improved trough FVC from baseline compared with placebo (p < 0.05 for all doses, Table 2). All doses of abediterol significantly (p < 0.05) increased FVC versus placebo up to 24 h post-dose except for the 0.625 μg dose at 12 and 23 h.\n\nBODY.RESULTS.SAFETY AND TOLERABILITY:\nThe percentage of patients experiencing TEAEs was similar across all treatment groups (Table 4). Most TEAEs were mild to moderate in intensity, no patient discontinued the study due to a TEAE, and there were no serious adverse events or deaths. 
The most common TEAEs were headache and nasopharyngitis, reported by 41.9 and 27.4% of patients, respectively. The percentage of patients that experienced at least one treatment-related TEAE was <10% for all doses of abediterol, and was similar to that of salbutamol and placebo. No clinically significant changes in serum glucose or potassium concentrations were observed over time in any of the treatments groups (Table 5). Minor variations in heart rate over time were observed for all treatment groups, including placebo; none were considered to be dose related or clinically relevant (Table 5). There were no clinically relevant changes in clinical laboratory tests, blood pressure, or electrocardiogram parameters.Table 4\nTEAEs occurring in ≥2% of patients in any treatment group: safety population\nNumber (%) of patients reporting TEAEsPlacebo N = 59Salbutamol 400 μg N = 58Abediterol 0.313 μg N = 60Abediterol 0.625 μg N = 60Abediterol 1.25 μg N = 60Abediterol 2.5 μg N = 61Total N = 62Any TEAE16 (27.1)12 (20.7)18 (30.0)16 (26.7)18 (30.0)23 (37.7)51 (82.3) Headache7 (11.9)6 (10.3)4 (6.7)7 (11.7)8 (13.3)7 (11.5)26 (41.9) Nasopharyngitis1 (1.7)08 (13.3)3 (5.0)4 (6.7)2 (3.3)17 (27.4) Chest discomfort2 (3.4)01 (1.7)002 (3.3)5 (8.1) Wheezing1 (1.7)2 (3.4)1 (1.7)01 (1.7)04 (6.5) Oropharyngeal pain01 (1.7)01 (1.7)03 (4.9)4 (6.5) Cough2 (3.4)001 (1.7)003 (4.8)TEAE, treatment-emergent adverse event.Table 5\nChange in clinical safety parameters from pre-dose values following treatment: safety population\nPlacebo N = 59Salbutamol 400 μg N = 58Abediterol 0.313 μg N = 60Abediterol 0.625 μg N = 60Abediterol 1.25 μg N = 60Abediterol 2.5 μg N = 61Serum glucose (mmol/L) 4 hours post-dose-0.37 (0.56)-0.24 (0.55)-0.19 (0.71)-0.27 (0.70)-0.16 (0.94)-0.23 (0.74) 24 hours post-dose0.01 (0.38)0.00 (0.33)0.13 (0.51)0.06 (0.62)0.11 (0.45)0.15 (0.59)Serum potassium (mmol/L) 4 hours post-dose-0.01 (0.36)-0.09 (0.38)-0.01 (0.35)0.10 (0.37)0.02 (0.36)-0.04 (0.35) 24 hours post-dose-0.02 (0.37)0.04 (0.34)0.05 (0.33)0.15 (0.32)0.04 (0.28)0.05 (0.38)Heart rate (bpm) 0.75 h post-dose-0.6 (9.1)2.0 (10.1)-0.7 (6.1)-0.8 (7.5)-1.5 (9.0)-0.8 (6.2) 2 h post-dose-3.4 (8.7)-1.0 (8.0)-2.0 (6.6)-3.4 (8.1)-4.3 (8.4)-3.2 (6.7) 6 h post-dose5.1 (10.2)7.2 (7.7)6.2 (7.1)6.2 (8.0)6.4 (9.8)5.3 (7.2) 24 h post-dose-2.1 (9.8)-1.9 (8.0)-1.6 (6.7)-0.8 (8.4)-0.9 (8.7)-0.6 (7.6)Data are mean (standard deviation).bpm, beats per minute.\n\nBODY.DISCUSSION:\nIn this study, abediterol 0.313, 0.625, 1.25, and 2.5 μg provided clinically and statistically significant improvements in peak bronchodilation versus placebo in patients with persistent, stable asthma. Abediterol doses ≥0.625 μg also caused bronchodilation compatible with once a day administration, as improvements in trough FEV1 >250 mL compared with placebo were observed. The peak bronchodilatory effect of abediterol was similar to that of salbutamol 400 μg for all doses ≥0.625 μg. A mean increase in peak FEV1 >200 mL is considered clinically relevant in asthma [1]. This magnitude of change was exceeded for all doses of abediterol, including the lowest dose, 0.313 μg. The primary objective of this study was to define abediterol doses that caused peak bronchodilation similar to salbutamol, the most widely prescribed SABA [21]. High doses of β2-agonists may increase the degree of bronchodilation but also can cause excessive systemic side effects through systemic absorption [22]. 
However, the current study demonstrates that the doses of abediterol from 0.625 to 2.5 μg induce acute bronchodilation similar to salbutamol, and therefore treatment with higher doses may not be warranted. Significant bronchodilation compared with placebo was achieved with a range of abediterol doses from 0.313 to 2.5 μg, consistent with the high potency of abediterol demonstrated in preclinical studies that suggested a potential for significant bronchodilation in vivo[7]. Abediterol has higher potency than formoterol, salmeterol, indacaterol, olodaterol, and vilanterol in anesthetized guinea pigs [7, 10] as well as higher potency than salmeterol and indacaterol with comparable potency to formoterol in isolated human bronchi [7]. Based on the results of this preliminary dose-ranging study and the guidelines published by Chowdhury et al. [16], the clinically effective dose in asthma is considered most likely to be in the range of 0.625–2.5 μg. Significant bronchodilation compared with placebo was observed for salbutamol and all doses of abediterol at the first-assessed time point (15 min) post-dose; this indicates a rapid onset of action for both salbutamol and abediterol. Salmeterol and formoterol, both twice-daily LABAs, are widely used as maintenance therapy in COPD and asthma due to their longer durations of action compared with salbutamol [6, 18]. In clinical studies in patients with asthma, the onset of action of formoterol 24 μg is consistently faster than that of salmeterol 50 μg, with a median onset of action of 3.6 min compared with 31.0 min in one study [23, 24]. In vitro experiments in isolated human bronchi suggest that the onset of action of abediterol is faster than salmeterol and is not significantly different from formoterol and indacaterol [7]. The prolonged duration of action of abediterol, with significant improvements in trough FEV1, normalized FEV1 AUC0–24, and trough FVC relative to placebo, and significant bronchodilation maintained up to 36 h post-dose is consistent with the results of previous clinical studies in healthy subjects and patients with COPD or asthma [11, 13, 25]. For example, a recent Phase II clinical study in patients with COPD demonstrated that abediterol 2.5, 5, and 10 μg achieved >200 mL increases in FEV1 at 24 h, and that this bronchodilatory effect was greater than that of indacaterol 150 μg, and the duration of action was similar [25]. Taken together, these results are compatible with a once-daily dosing regimen in asthma and COPD. The convenience of once-daily dosing is likely to improve compliance in patients with asthma, leading to better patient outcomes [26, 27], as shown in various studies, including that of Wells et al. whereby once-daily ICS led to improved compliance in patients with asthma compared with ICS administered multiple times daily [28]. LABA monotherapy treatment is contra-indicated in asthma; co-administration with an ICS is mandated, preferably as a fixed-dose combination to reduce the risk of monotherapy administration. Development of once-daily LABAs for use in fixed-dose combinations with once-daily ICS is important for simplifying treatment regimens and, therefore, helping improve patient compliance and general asthma management. The safety and tolerability profile of abediterol in this study is comparable to that reported in other studies in the abediterol development program [11–13, 25]. 
Single doses of abediterol 0.313–2.5 μg were safe and well tolerated, and the TEAEs were consistent with the known safety profile of β2-agonists.\n\nBODY.CONCLUSION:\nAt very low doses, abediterol 0.625–2.5 μg demonstrated a peak bronchodilatory response similar to that of a high dose of salbutamol (400 μg) and showed a clear dose–response relationship. Abediterol is a promising new, highly potent LABA with a sustained duration of action and a favorable safety and tolerability profile. These results suggest that further studies are warranted to evaluate the efficacy and safety of abediterol in combination with an ICS in both asthma and COPD.\n\nBODY.ELECTRONIC SUPPLEMENTARY MATERIAL:\nAdditional file 1:\nIndependent Ethics Committees (IEC).\n(DOCX 17 KB)\n\n**Question:** Compared to placebo what was the result of Abediterol on peak FEV1?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
905
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: A dose-ranging study of the bronchodilator effects of abediterol (LAS100977), a long-acting β\n\n ABSTRACT.BACKGROUND:\nLong-acting β2-adrenergic agonists (LABAs) are recommended in combination with inhaled corticosteroids (ICSs) for asthma management. Abediterol is a novel, selective, potent, once-daily LABA in development for treatment of asthma and chronic obstructive pulmonary disease. This study aimed to determine abediterol doses with similar peak bronchodilatory effect to salbutamol 400 μg, and duration of action compatible with once-daily dosing in patients with persistent, stable asthma.\n\nABSTRACT.METHODS:\nThis was a Phase II, randomized, double-blind, double-dummy, crossover, placebo-controlled, dose-ranging study (ClinicalTrials.gov NCT01425801) in 62 patients with mild-to-moderate asthma who were also receiving an ICS. Patients received single doses of abediterol 0.313, 0.625, 1.25, or 2.5 μg, salbutamol 400 μg, or placebo in the morning. Spirometry was performed up to 36 h post-dose; safety and tolerability were assessed throughout the study. The primary endpoint was change from baseline in peak forced expiratory volume in 1 s (FEV1). Additional endpoints included trough FEV1, normalized area under the FEV1 curve (FEV1 AUC) up to 24 h post-dose, and peak and trough forced vital capacity (FVC).\n\nABSTRACT.RESULTS:\nAbediterol produced dose-dependent improvements in peak FEV1 from baseline compared with placebo, from 0.274 (95% CI 0.221, 0.327) to 0.405 L (95% CI 0.353, 0.458) for abediterol 0.313 to 2.5 μg, respectively (p < 0.0001 all doses). Abediterol 0.625, 1.25, and 2.5 μg had similar magnitude of peak FEV1 effect to salbutamol. Dose-dependent changes from baseline in trough FEV1 versus placebo were 0.219 (95% CI 0.136, 0.302) to 0.400 L (95% CI 0.317, 0.483) for abediterol 0.313 to 2.5 μg, respectively (p < 0.0001). All abediterol doses achieved significant improvements versus placebo in FEV1 AUC 0–6, 0–12, and 0–24 h, and peak and trough FVC (p < 0.05). Less than 10% of patients experienced treatment-related adverse events for each dose of abediterol; most were mild to moderate in intensity and the most common were headache and nasopharyngitis. There were no clinically relevant changes in heart rate.\n\nABSTRACT.CONCLUSIONS:\nAbediterol 0.625–2.5 μg provided dose-dependent, clinically and statistically significant bronchodilation versus placebo in patients with asthma, with a peak effect similar to salbutamol and duration of action compatible with once-daily dosing. 
All doses of abediterol were well tolerated.\n\nABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL:\nThe online version of this article (doi:10.1186/1471-2466-14-176) contains supplementary material, which is available to authorized users.\n\nBODY.BACKGROUND:\nThe use of β2-adrenergic agonist bronchodilators in combination with inhaled corticosteroids (ICSs) to manage the symptoms of asthma is recommended by Global Initiative for Asthma (GINA) guidelines [1]. Currently licensed bronchodilators include salbutamol, a short-acting β2-adrenergic agonist (SABA) administered as required [2], salmeterol and formoterol, two long-acting β2-adrenergic agonists (LABAs) administered twice daily in combination with an ICS [3, 4], and the recently licensed vilanterol, a LABA administered once daily in fixed-dose combination with the ICS fluticasone furoate [5]. There is ongoing interest to develop once-daily medications to further simplify treatment regimens and improve patient compliance [6]. Abediterol is a novel, once-daily LABA in clinical development as a fixed-dose combination with an ICS for the treatment of asthma and chronic obstructive pulmonary disease (COPD) [7]. Abediterol displays high affinity for β2-adrenoceptors, plus a higher functional selectivity for β2-adrenoceptors over β1-adrenoceptors than formoterol, indacaterol, vilanterol and olodaterol in cellular models [7–9]. The potency of abediterol is also higher than salmeterol and indacaterol in isolated human bronchi and, in addition to these two LABAs, vilanterol and olodaterol in animal models [7, 10]. Abediterol has a rapid onset and prolonged duration of action [7, 10]. A Phase II clinical trial in patients with asthma provided results that were consistent with the preclinical data; abediterol resulted in significant improvements in lung function at 5 min post-dose that were greater than improvements seen with salmeterol, and significant bronchodilation was sustained up to 24 h post-dose [11]. These characteristics make abediterol an ideal candidate for once-daily dosing. Moreover, abediterol has a favorable cardiovascular safety and tolerability profile: in anesthetized dogs, abediterol had a lesser effect on heart rate compared with salmeterol, formoterol, and indacaterol at concentrations that produced comparable bronchodilation [7, 12]. In early phase clinical trials, doses of abediterol ≤10 μg were safe and well tolerated and did not demonstrate any clinically relevant effects on heart rate in healthy subjects [13], patients with asthma [14], or patients with COPD [15]. Drug regulatory authorities have concerns that excessively high LABA doses may cause unacceptable side effects [16]. The trough forced expiratory volume in 1 s (FEV1), measured 24 h after dosing, is an important endpoint to assess once-daily LABA efficacy [16]. The maximum effect achieved soon after dosing, otherwise known as the 'peak' effect, is also of interest for efficacy and safety reasons. A very high peak effect may indicate high levels of acute drug exposure that could predispose patients to β-agonist-mediated side effects, such as cardiovascular adverse events, when receiving chronic treatment [17]. Moreover, the ideal pharmacologic characteristic of a once-daily LABA is a low 'peak to trough ratio', ensuring relatively stable lung function throughout the day [18]. The peak effect of salbutamol is well characterized, and causes acute bronchodilation without safety concerns. 
The bronchodilator response following single-dose administration of salbutamol (4 x 100 μg) is commonly used as one of the diagnostic tests for asthma, as recommended by a joint European Respiratory Society (ERS)/American Thoracic Society (ATS) taskforce [19]. This effect is, therefore, a reasonable benchmark for a novel LABA to achieve and this degree of acute bronchodilation is considered clinically meaningful. A major regulatory agency recommended that the peak effects of abediterol should be compared to those of salbutamol to guide dose selection for further clinical trials in order to avoid a dose that induces a much greater degree of bronchodilation than is necessary for clinical effectiveness, which could increase the potential for safety concerns during chronic treatment. We report a placebo-controlled, dose-ranging, crossover study comparing the acute bronchodilator effects of abediterol to salbutamol in patients with persistent stable asthma. The primary aim of the study was to investigate the peak bronchodilatory effect of abediterol versus placebo and to confirm it was in the same range as that of salbutamol. The full profile of lung function over 24 h, including the trough FEV1, was also measured. The goal of this study was to enable the selection of abediterol doses for future long-term clinical trials, based on a peak effect similar to that of salbutamol 400 μg metered dose inhaler (MDI), coupled with a trough effect compatible with a 24-h duration of action. This trial design was a specific recommendation by a major regulatory agency.\n\nBODY.METHODS.STUDY DESIGN:\nThis was a Phase IIa, randomized, double-blind, double-dummy, 6-way crossover, single-dose, multicenter, dose-ranging study conducted between August 2011 and January 2012. The study was conducted in two sites in the UK and seven in Germany. The study protocol was approved by an independent ethics committee in each country (see Additional file 1) and complied with the Declaration of Helsinki and the International Conference on Harmonisation and Good Clinical Practice guidelines. Patients provided written, informed consent. Following a screening visit and a 12–16 day run-in period to assess clinical stability, participants were randomized to 1 of 6 treatment sequences according to a balanced Williams design for crossover studies (1:1:1:1:1:1). Block randomization was performed by the Sponsor, using a computer-generated schedule to assign a treatment sequence to each patient randomization number. The block size and randomization schedule were not communicated to investigators. Patients received abediterol 0.313, 0.625, 1.25, or 2.5 μg, salbutamol 400 μg (administered as four inhalations with 100 μg per inhalation), or placebo (at 09:00 h ±1 h) at each visit, with a washout period of 7–14 days. A follow-up telephone call to assess safety was made approximately 2 weeks following the final treatment visit or premature discontinuation. Abediterol and placebo to abediterol were administered via a multidose dry powder inhaler (Genuair®). Salbutamol and placebo to salbutamol were administered via a pressurized MDI (VentolinTM EvohalerTM). In order to preserve study blinding, independent, trained personnel prepared the medication, delivered the device to patients, and collected the device following inhalation.\n\nBODY.METHODS.PATIENTS:\nEligible participants were men or women aged 18–70 years with a clinical diagnosis of persistent asthma (according to GINA guidelines) for ≥6 months prior to screening. 
Eligible patients had an FEV1 >60% and ≤85% of the predicted normal value, with an FEV1 reversibility ≥12%, and an absolute increase in FEV1 of ≥200 mL over their baseline value following inhalation of salbutamol 400 μg at screening. Comparable FEV1 measurements were required at screening and pre-dose at each visit in order to ensure there were no carry-over effects from previous treatment periods (limit of variability ±20%). At enrollment, all patients were using only an ICS to control their asthma, and had been on a stable dose and regimen for at least 4 weeks prior to screening, up to the equivalent of 1600 μg/day of beclometasone dipropionate, and had not previously been exposed to abediterol. Exclusion criteria included: a smoking history within the past 6 months or >10 pack-years; presence or history of relevant pulmonary disease or any other clinically relevant disease or abnormality as judged by the investigator; difficult-to-treat asthma; any respiratory tract infection ≤6 weeks prior to screening; hospitalization or emergency room treatment for asthma ≤3 months prior to screening; treatment with β2-antagonists; and positive laboratory tests for illicit drugs. The use of anticholinergics, SABAs (other than salbutamol), LABAs, continuous oral or parenteral corticosteroids, methyl-xanthines, cromolyn sodium, nedocromil, leukotriene modifiers, anti-immunoglobulin E, β1-blocking agents, and other investigational drugs was prohibited during the study. Patients using prohibited medications underwent up to 6 weeks' washout before the screening visit. Salbutamol pressurized MDI (100 μg/puff) was the only reliever medication permitted and was used on an as-needed basis and avoided 6 h prior to and during a treatment visit unless absolutely necessary.\n\nBODY.METHODS.EFFICACY ASSESSMENTS:\nPulmonary function tests were conducted at screening, pre-dose, and at 0.25, 0.5, 1, 2, 3, 4, 6, 8, 12, 14, 23, 24, and 36 h post-dose at each treatment visit using standardized spirometers at each investigational site (eResearchTechnology, Philadelphia, PA, USA). Prior to the first spirometry measure, the technician demonstrated the procedure and allowed the patient two practice attempts. At each study time point, the maneuvers were repeated until three technically adequate measurements were made according to ERS/ATS guidelines for acceptability and repeatability [20], with a maximum of eight attempts allowed.\n\nBODY.METHODS.SAFETY AND TOLERABILITY ASSESSMENTS:\nTreatment-emergent adverse events (TEAEs), including serious adverse events, were recorded throughout the study and follow-up period. A physical examination was performed at screening and at 36 h post-dose following the last treatment period (Visit 6), and blood pressure and 12-lead electrocardiogram measures were made at screening, pre-dose, and at 0.75, 2, 6, 24 h post-dose for each treatment period, and 36 h post-dose at Visit 6. Clinical laboratory tests (blood chemistry, hematology, urinalysis, and a serum pregnancy test) were performed at screening and at 36 h post-dose at Visit 6. Blood glucose and serum potassium levels were assessed at screening, pre-dose, and at 4, 24 h post-dose for each treatment period, and 36 h post-dose at Visit 6.\n\nBODY.METHODS.STATISTICS:\nThe primary efficacy endpoint was change from baseline in peak FEV1 (defined as the highest FEV1 value observed within 4 h of the administration of the investigational medicinal product [IMP]) on Day 1. 
Secondary endpoints included change from baseline in peak forced vital capacity (FVC) on Day 1, trough FEV1 (defined as the highest average of the maneuvers performed at 23–24 h post IMP administration) and FVC on Day 2, change from baseline in FEV1 and FVC at each time point up to 36 h post-dose, and change from baseline in normalized FEV1 area under the curve (AUC; defined as the area between baseline FEV1 and the FEV1 curve, from 0 to × hours divided by × hours) at 0–6, 0–12, and 0–24 h post-dose. In accordance with the GINA guidelines, a change in FEV1 of 200 mL was considered clinically relevant [1]. A population of 48 patients was necessary to achieve 80% power to detect a 150 mL change from baseline in peak FEV1 for any dose of abediterol compared with placebo. Assuming a 15% withdrawal rate, the protocol required at least 54 patients to be randomized. All reported efficacy assessments were analyzed in the intention-to-treat population, defined as all randomized patients who took at least one dose of the IMP (placebo, abediterol, or salbutamol) and had at least one baseline and one post-dose FEV1 value for at least one treatment visit. Safety analyses are reported in the safety population, which consisted of all randomized patients who took at least one dose of the IMP. All statistical analyses were performed using Statistical Analysis System (SAS Institute Inc., Cary, NC, USA) version 9.1.3 software. Efficacy variables were analyzed using an analysis of covariance model for crossover designs, with fixed-effect factors of sequence, treatment, and period, patient within sequence as a random effect, and baseline FEV1 or FVC as a covariate, as appropriate. Between-treatments comparisons were performed by means of contrasts on the treatment factor. The difference between treatments was estimated by the difference between the least squares means. Statistical comparisons between active treatments and placebo were two-sided hypothesis tests with a significance level of 0.05.\n\nBODY.RESULTS.STUDY POPULATION:\nA total of 115 patients were screened, with 62 patients randomized (40/62 [64.5%] male, mean age 40.9 ± 11.2 years). Patient demographics are presented in Table 1. A diagram of patient flow is presented in Figure 1. All 62 patients were included in the intention-to-treat and safety populations. One protocol violation occurred when a patient was inadvertently given the treatment scheduled for Visit 6 at Visit 5, and vice versa. Data for this patient were analyzed according to the treatments received. Four patients withdrew or were discontinued from the study: two due to inability to comply with the study schedule, one who did not fulfill stability criteria, and one for unknown reasons (patient did not return for the premature discontinuation visit).Table 1\nPatient demographics at baseline: safety population\nPatients (N = 62)Age, mean no. years (SD)40.9 (11.2)Male gender, n (%)40 (64.5)Race, n (%) White58 (93.5) Other4 (6.5)Cigarette smoking status, n (%) Former smoker21 (33.9) Never smoked41 (66.1)Smoking history, mean no. pack-years (SD)4.6 (3.3)Asthma duration, mean no. 
years (SD)25.5 (13.2)Asthma severity, n (%) 60% ≤ FEV1 < 80% of predicted46 (74.2) FEV1 ≥80% of predicted16 (25.8)Bronchial reversibilitya, mean % (SD)21.3 (8.2)FEV1 absolute reversibilityb, mean L (SD)0.568 (0.261)\nacalculated as 100 x (FEV1 [post-bronchodilator] – FEV1 [pre-bronchodilator]/FEV1 [pre-bronchodilator]); bcalculated as FEV1 (post-bronchodilator) – FEV1 (pre-bronchodilator).FEV1, forced expiratory volume in 1 s; SD, standard deviation.Figure 1\nDiagram of patient flow.\n\n\nBODY.RESULTS.EFFICACY:\nMean baseline FEV1 values were similar across all treatment periods, ranging from 2.63 L to 2.69 L. The full profile of lung function after each treatment is shown in Figure 2. The onset of action of abediterol was rapid, with significant bronchodilation compared with placebo observed at the first study time point (15 min) post-dose (p < 0.0001 for all doses of abediterol versus placebo, Figure 2a). All doses of abediterol produced significantly greater improvements in peak FEV1 from baseline compared with placebo (p < 0.0001 for all doses, Table 2), and the magnitude of effect of abediterol 0.625, 1.25, and 2.5 μg was not significantly different from salbutamol 400 μg (Figure 3, Table 3). Least squares means differences in change from baseline in peak FEV1 versus placebo were 0.274, 0.322, 0.371, and 0.405 L for abediterol 0.313, 0.625, 1.25, and 2.5 μg, respectively, and 0.353 L for salbutamol 400 μg (Table 3). Median time to peak FEV1 was 3 h post-dose for abediterol (all doses), 2 h post-dose for placebo, and 1 h post-dose for salbutamol 400 μg.Figure 2\nChange from baseline in FEV\n1\nover time. Data are presented as least squares means with 95% confidence intervals: a) p < 0.0001 versus placebo for salbutamol and all doses of abediterol at all time points; b) p < 0.0001 for all doses of abediterol 0–24 h post-dose and abediterol 1.25 and 2.5 μg 36 h post-dose versus placebo, p = 0.002 and p = 0.01 for abediterol 0.313 and 0.625 μg, respectively, at 36 h post-dose versus placebo. FEV1, forced expiratory volume in 1 s. Table 2 \n Change from baseline in pulmonary function parameters (L): intention-to-treat population \n Placebo N = 59 Salbutamol 400 μg N = 58 Abediterol 0.313 μg N = 60 Abediterol 0.625 μg N = 60 Abediterol 1.25 μg N = 60 Abediterol 2.5 μg N = 61 Peak FEV 1 \n 0.202 (0.045) 0.555**** (0.045) 0.477**** †† (0.045) 0.524**** (0.045) 0.573**** (0.045) 0.608**** (0.045) Trough FEV 1 \n a \n -0.054 (0.039) -0.076 (0.039) 0.166**** †††† (0.039) 0.205**** †††† (0.039) 0.278**** †††† (0.039) 0.346**** †††† (0.039) Normalized FEV 1 AUC 0–6 \n 0.088 (0.039) 0.347**** (0.039) 0.354**** (0.038) 0.403**** † (0.038) 0.461**** †††† (0.038) 0.496**** †††† (0.038) Normalized FEV 1 AUC 0–12 \n 0.063 (0.040) 0.219**** (0.040) 0.348**** †††† (0.039) 0.393**** †††† (0.039) 0.455**** †††† (0.039) 0.496**** †††† (0.040) Normalized FEV 1 AUC 0–24 \n 0.007 (0.039) 0.100** (0.039) 0.282**** †††† (0.039) 0.321**** †††† (0.038) 0.390**** †††† (0.038) 0.446**** †††† (0.039) Peak FVC 0.222 (0.034) 0.361**** (0.034) 0.317** (0.034) 0.329*** (0.034) 0.361**** (0.034) 0.380**** (0.034) Trough FVC a \n -0.028 (0.035) -0.065 (0.035) 0.094** †††† (0.035) 0.050* †† (0.035) 0.111*** †††† (0.035) 0.120*** †††† (0.035) Data are least squares means (standard error). \n a trough values are the mean of the 23 and 24 h post-dose measurements. *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001 versus placebo; † p < 0.05, †† p < 0.01, †††† p < 0.0001 versus salbutamol 400 μg (analysis of covariance). 
AUC, area under the FEV 1 curve; FEV 1 , forced expiratory volume in 1 s; FVC, forced vital capacity. Figure 3\nChange from baseline in peak FEV\n1\nat Day 1. Data are presented as least squares means ± standard error: p < 0.0001 versus placebo for all doses of abediterol and salbutamol 400 μg; **p < 0.01, ***p < 0.001, ****p < 0.0001 (analysis of covariance). FEV1, forced expiratory volume in 1 s. Table 3 \n Comparison of peak FEV \n 1 \n for abediterol versus placebo and abediterol versus salbutamol \n Least–squares mean difference (standard error) 95% confidence interval Abediterol 2.5 μg vs. placebo 0.405 (0.027) 0.353, 0.458 Abediterol 1.25 μg vs. placebo 0.371 (0.027) 0.318, 0.424 Abediterol 0.625 μg vs. placebo 0.322 (0.027) 0.269, 0.375 Abediterol 0.313 μg vs. placebo 0.274 (0.027) 0.221, 0.327 Salbutamol 400 μg vs. placebo 0.353 (0.027) 0.299, 0.406 Abediterol 2.5 μg vs. salbutamol 400 μg 0.0529 (0.0270) -0.0002, 0.1061 Abediterol 1.25 μg vs. salbutamol 400 μg 0.0181 (0.0269) -0.0350, 0.0711 Abediterol 0.625 μg vs. salbutamol 400 μg -0.0308 (0.0269) -0.839, 0.0223 Abediterol 0.313 μg vs. salbutamol 400 μg -0.0783 (0.0269) -0.1313, -0.0253 Data included for comparisons of all doses of abediterol with placebo and with salbutamol. Statistically significant increases in FEV1 were sustained at 24 h post-dose compared with placebo (p < 0.0001 for all doses of abediterol 0–24 h post-dose, Figure 2b). Abediterol significantly improved trough FEV1 from baseline compared with placebo (p < 0.0001 for all doses; Figure 4). Least squares means differences in change from baseline in trough FEV1 versus placebo were 0.219 L (95% CI 0.136, 0.302), 0.259 L (95% CI 0.176, 0.342), 0.332 L (95% CI 0.249, 0.415), and 0.400 L (95% CI 0.317, 0.483) for abediterol 0.313, 0.625, 1.25, and 2.5 μg, respectively (Table 2). The effects of salbutamol on FEV1 were not sustained after 6 h post-dose, as expected for a SABA (Figure 2b).Figure 4\nChange from baseline in trough FEV\n1\nat Day 2. Data are presented as least squares means ± standard error: p < 0.0001 versus placebo for all doses of abediterol; **p < 0.01, ***p < 0.001, ****p < 0.0001 (analysis of covariance). FEV1, forced expiratory volume in 1 s. Change from baseline in normalized FEV1 AUC0–6, FEV1 AUC0–12, and FEV1 AUC0–24 was significantly higher for all doses of abediterol versus placebo (p < 0.0001 for all, Table 2), with dose-related effects observed. The change from baseline in normalized FEV1 AUC0–6 was significantly greater for abediterol 0.625–2.5 μg versus salbutamol 400 μg (p < 0.05, Table 2).\n\nBODY.RESULTS.EFFICACY.FVC:\nMean baseline FVC values were comparable across treatment periods, ranging from 4.23 L to 4.30 L. Abediterol produced significantly greater change from baseline in peak FVC compared with placebo (p < 0.01 for all doses, Table 2). The change from baseline in peak FVC was similar to that observed with salbutamol. Additionally, abediterol significantly improved trough FVC from baseline compared with placebo (p < 0.05 for all doses, Table 2). All doses of abediterol significantly (p < 0.05) increased FVC versus placebo up to 24 h post-dose except for the 0.625 μg dose at 12 and 23 h.\n\nBODY.RESULTS.SAFETY AND TOLERABILITY:\nThe percentage of patients experiencing TEAEs was similar across all treatment groups (Table 4). Most TEAEs were mild to moderate in intensity, no patient discontinued the study due to a TEAE, and there were no serious adverse events or deaths. 
The most common TEAEs were headache and nasopharyngitis, reported by 41.9 and 27.4% of patients, respectively. The percentage of patients that experienced at least one treatment-related TEAE was <10% for all doses of abediterol, and was similar to that of salbutamol and placebo. No clinically significant changes in serum glucose or potassium concentrations were observed over time in any of the treatments groups (Table 5). Minor variations in heart rate over time were observed for all treatment groups, including placebo; none were considered to be dose related or clinically relevant (Table 5). There were no clinically relevant changes in clinical laboratory tests, blood pressure, or electrocardiogram parameters.Table 4\nTEAEs occurring in ≥2% of patients in any treatment group: safety population\nNumber (%) of patients reporting TEAEsPlacebo N = 59Salbutamol 400 μg N = 58Abediterol 0.313 μg N = 60Abediterol 0.625 μg N = 60Abediterol 1.25 μg N = 60Abediterol 2.5 μg N = 61Total N = 62Any TEAE16 (27.1)12 (20.7)18 (30.0)16 (26.7)18 (30.0)23 (37.7)51 (82.3) Headache7 (11.9)6 (10.3)4 (6.7)7 (11.7)8 (13.3)7 (11.5)26 (41.9) Nasopharyngitis1 (1.7)08 (13.3)3 (5.0)4 (6.7)2 (3.3)17 (27.4) Chest discomfort2 (3.4)01 (1.7)002 (3.3)5 (8.1) Wheezing1 (1.7)2 (3.4)1 (1.7)01 (1.7)04 (6.5) Oropharyngeal pain01 (1.7)01 (1.7)03 (4.9)4 (6.5) Cough2 (3.4)001 (1.7)003 (4.8)TEAE, treatment-emergent adverse event.Table 5\nChange in clinical safety parameters from pre-dose values following treatment: safety population\nPlacebo N = 59Salbutamol 400 μg N = 58Abediterol 0.313 μg N = 60Abediterol 0.625 μg N = 60Abediterol 1.25 μg N = 60Abediterol 2.5 μg N = 61Serum glucose (mmol/L) 4 hours post-dose-0.37 (0.56)-0.24 (0.55)-0.19 (0.71)-0.27 (0.70)-0.16 (0.94)-0.23 (0.74) 24 hours post-dose0.01 (0.38)0.00 (0.33)0.13 (0.51)0.06 (0.62)0.11 (0.45)0.15 (0.59)Serum potassium (mmol/L) 4 hours post-dose-0.01 (0.36)-0.09 (0.38)-0.01 (0.35)0.10 (0.37)0.02 (0.36)-0.04 (0.35) 24 hours post-dose-0.02 (0.37)0.04 (0.34)0.05 (0.33)0.15 (0.32)0.04 (0.28)0.05 (0.38)Heart rate (bpm) 0.75 h post-dose-0.6 (9.1)2.0 (10.1)-0.7 (6.1)-0.8 (7.5)-1.5 (9.0)-0.8 (6.2) 2 h post-dose-3.4 (8.7)-1.0 (8.0)-2.0 (6.6)-3.4 (8.1)-4.3 (8.4)-3.2 (6.7) 6 h post-dose5.1 (10.2)7.2 (7.7)6.2 (7.1)6.2 (8.0)6.4 (9.8)5.3 (7.2) 24 h post-dose-2.1 (9.8)-1.9 (8.0)-1.6 (6.7)-0.8 (8.4)-0.9 (8.7)-0.6 (7.6)Data are mean (standard deviation).bpm, beats per minute.\n\nBODY.DISCUSSION:\nIn this study, abediterol 0.313, 0.625, 1.25, and 2.5 μg provided clinically and statistically significant improvements in peak bronchodilation versus placebo in patients with persistent, stable asthma. Abediterol doses ≥0.625 μg also caused bronchodilation compatible with once a day administration, as improvements in trough FEV1 >250 mL compared with placebo were observed. The peak bronchodilatory effect of abediterol was similar to that of salbutamol 400 μg for all doses ≥0.625 μg. A mean increase in peak FEV1 >200 mL is considered clinically relevant in asthma [1]. This magnitude of change was exceeded for all doses of abediterol, including the lowest dose, 0.313 μg. The primary objective of this study was to define abediterol doses that caused peak bronchodilation similar to salbutamol, the most widely prescribed SABA [21]. High doses of β2-agonists may increase the degree of bronchodilation but also can cause excessive systemic side effects through systemic absorption [22]. 
However, the current study demonstrates that the doses of abediterol from 0.625 to 2.5 μg induce acute bronchodilation similar to salbutamol, and therefore treatment with higher doses may not be warranted. Significant bronchodilation compared with placebo was achieved with a range of abediterol doses from 0.313 to 2.5 μg, consistent with the high potency of abediterol demonstrated in preclinical studies that suggested a potential for significant bronchodilation in vivo[7]. Abediterol has higher potency than formoterol, salmeterol, indacaterol, olodaterol, and vilanterol in anesthetized guinea pigs [7, 10] as well as higher potency than salmeterol and indacaterol with comparable potency to formoterol in isolated human bronchi [7]. Based on the results of this preliminary dose-ranging study and the guidelines published by Chowdhury et al. [16], the clinically effective dose in asthma is considered most likely to be in the range of 0.625–2.5 μg. Significant bronchodilation compared with placebo was observed for salbutamol and all doses of abediterol at the first-assessed time point (15 min) post-dose; this indicates a rapid onset of action for both salbutamol and abediterol. Salmeterol and formoterol, both twice-daily LABAs, are widely used as maintenance therapy in COPD and asthma due to their longer durations of action compared with salbutamol [6, 18]. In clinical studies in patients with asthma, the onset of action of formoterol 24 μg is consistently faster than that of salmeterol 50 μg, with a median onset of action of 3.6 min compared with 31.0 min in one study [23, 24]. In vitro experiments in isolated human bronchi suggest that the onset of action of abediterol is faster than salmeterol and is not significantly different from formoterol and indacaterol [7]. The prolonged duration of action of abediterol, with significant improvements in trough FEV1, normalized FEV1 AUC0–24, and trough FVC relative to placebo, and significant bronchodilation maintained up to 36 h post-dose is consistent with the results of previous clinical studies in healthy subjects and patients with COPD or asthma [11, 13, 25]. For example, a recent Phase II clinical study in patients with COPD demonstrated that abediterol 2.5, 5, and 10 μg achieved >200 mL increases in FEV1 at 24 h, and that this bronchodilatory effect was greater than that of indacaterol 150 μg, and the duration of action was similar [25]. Taken together, these results are compatible with a once-daily dosing regimen in asthma and COPD. The convenience of once-daily dosing is likely to improve compliance in patients with asthma, leading to better patient outcomes [26, 27], as shown in various studies, including that of Wells et al. whereby once-daily ICS led to improved compliance in patients with asthma compared with ICS administered multiple times daily [28]. LABA monotherapy treatment is contra-indicated in asthma; co-administration with an ICS is mandated, preferably as a fixed-dose combination to reduce the risk of monotherapy administration. Development of once-daily LABAs for use in fixed-dose combinations with once-daily ICS is important for simplifying treatment regimens and, therefore, helping improve patient compliance and general asthma management. The safety and tolerability profile of abediterol in this study is comparable to that reported in other studies in the abediterol development program [11–13, 25]. 
Single doses of abediterol 0.313–2.5 μg were safe and well tolerated, and the TEAEs were consistent with the known safety profile of β2-agonists.\n\nBODY.CONCLUSION:\nAbediterol 0.625–2.5 μg demonstrated a peak bronchodilatory response similar to that of a high dose of salbutamol (400 μg) at very low doses, and showed a clear dose–response relationship. Abediterol is a promising new, highly potent, LABA with a sustained duration of action and a favorable safety and tolerability profile. These results suggest that further studies are warranted to further evaluate the efficacy and safety of abediterol in combination with ICS in both asthma and COPD.\n\nBODY.ELECTRONIC SUPPLEMENTARY MATERIAL:\nAdditional file 1:\nIndependent Ethics Committees (IEC).\n(DOCX 17 KB)\n\n**Question:** Compared to placebo what was the result of Abediterol on Dose-dependent changes from baseline in trough FEV1?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increase
|
910
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Assessing Calcium Intake in Postmenopausal Women\n\n ABSTRACT.INTRODUCTION:\nBecause foods fortified with calcium are increasingly available, the calcium content of calcium-fortified foods may not be adequately captured in traditional assessments of dietary intake, such as dietary records analyzed with commercially available software. The primary objective of our study was to design and test a calcium-focused food frequency questionnaire (CFFFQ) including foods naturally rich in calcium and calcium-fortified foods. Secondary objectives were to review calcium sources and adequacy of intake in black and in white postmenopausal women.\n\nABSTRACT.METHODS:\nWe studied a convenience sample of 46 black and 139 white postmenopausal women (mean [SD] age 69.4 [5.8] years). Subjects completed a multiple-pass interview for 24-hour recall of foods eaten and the 46-item CFFFQ.\n\nABSTRACT.RESULTS:\nThe correlation between measures for total daily calcium intake was moderately strong (r = 0.53, P < .001). The CFFFQ estimated greater total daily calcium intake than did the 24-hour recall (mean [SD], 1,021 [624] mg/d vs 800 [433] mg/d, P < .001). As daily calcium intake increased, the 24-hour recall increasingly underreported calcium (r = 0.41, P < .001) compared with the CFFFQ. Cross-tabulation and Χ2 analyses found that the CFFFQ had greater specificity for lower calcium intakes. For calcium classified by food groups, there was moderate correlation for dairy (r = 0.56, P < .001) and fruits (r = 0.43, P < .001). The CFFFQ overestimated mean total calcium compared with the 24-hour recall by 221 mg/d (P < .001), including within racial groups (195 mg/d for black women, P = .04, and 229 mg/d for white women, P < .001). Dairy was the primary calcium source for both groups (55% of intake for black women and 57% of intake for white women).\n\nABSTRACT.CONCLUSION:\nThe CFFFQ can be used to identify postmenopausal women with inadequate calcium intakes (<800 mg/d) and to identify key sources of dietary calcium. Older black women consume less daily calcium than do older white women.\n\nBODY.INTRODUCTION:\nAs the prevalence of osteoporosis continues to increase, interventions targeting modifiable risk factors receive more emphasis from researchers, clinicians, and public health professionals. Although many nutrients are important to bone health, interventions largely focus on calcium intake (1). Clinicians often prescribe a calcium supplement but pay limited attention to dietary sources of calcium intake (2,3). Other interventions may target only dairy calcium (4,5). However, calcium fortification of foods broadens the possible sources of calcium so that adequate calcium intake may be achieved through a diverse diet. Sources of dietary calcium may differ across cultures and ethnicities according to food preferences and tolerances. 
For example, blacks and Asians have a higher prevalence of lactose intolerance, which may lead to reduced dairy intake, but also have cultural food preferences, including not drinking milk at meals, that affect calcium intake (6). White women eat more cheese and drink more milk than do black or American Indian women (7). However, research specifically targeting calcium intake and food source is scarce. Previous research from our group investigating the role of socioeconomic status on calcium intake determined that black women consumed fortified grain products more frequently than did white women (8). Because fortified foods are increasingly available (9), the calcium content of calcium-fortified foods may not be adequately captured in traditional assessments of dietary intake, such as dietary records analyzed with commercially available software. The primary aim of this study was to develop and evaluate the feasibility of a calcium-focused food frequency questionnaire (CFFFQ) that incorporates both natural and fortified sources of calcium. Because determining calcium intake has become more difficult as food fortification has become more common, this study compared 2 methods: an interview for 24-hour recall of calcium intake and the CFFFQ. Although supplements can be an important source of calcium, this project focused on food-derived calcium. Secondary aims were to compare food source of calcium between black and white women aged 60 years or older and evaluate the adequacy of calcium intake as assessed using the 24-hour recall and the CFFFQ. Postmenopausal women were studied because they are at high risk for osteoporotic fractures (1).\n\nBODY.METHODS.PARTICIPANTS:\nIn 2005, we recruited a convenience sample of postmenopausal women aged 60 to 80 years who were enrolled in a parent study in the Urbana-Champaign area of Illinois (10) (assessing body composition, bone health, physical activity, physical function, and self-efficacy) to complete additional nutritional assessments. Exclusion criteria for the parent study were neurologic illness, orthopedic limitations, or cognitive limitations that precluded completion of all study testing procedures. All those in the parent study were invited to participate in the current study of evaluation of calcium intake. The human subjects institutional review board of the University of Illinois approved the research protocol, and all participants completed an informed consent form before data collection began.\n\nBODY.METHODS.CALCIUM INTAKE:\nThe CFFFQ and a self-reported 24-hour recall were completed on the same day to assess calcium intake and food source. The CFFFQ is a 46-item food frequency survey focusing on calcium-rich foods. Foods on the questionnaire were chosen based on calcium content and those found in previous studies to be commonly consumed. Foods on the CFFFQ were categorized and presented according to representative food groups: dairy (6 foods), foods with dairy (5 foods), fruits (6 foods), vegetables (3 foods), grains (10 foods), meats (8 foods), and other foods (8 foods) (Appendix). Inclusion of fortified foods was based on availability for purchase in area grocery stores (both local and chain stores). Fortified foods were noted as such on the CFFFQ. A standard portion size was included for each specific food item. Participants entered quantity of food relative to daily, weekly, monthly, or yearly consumption. \"Not eaten\" was given as an option. 
We entered foods and servings into a database (Microsoft Access 2003, Redmond, Washington) that calculated calcium content on a daily basis. Double entry was used for quality assurance. Participants met with researchers to complete the 24-hour recall. The researchers used a multiple-pass interview style to elicit complete information (11). Foods from the 24-hour recall were categorized into the same food groups as on the CFFFQ and were then entered in Nutritionist Pro version 2.3 (First DataBank, San Bruno, California) to obtain calcium values per day.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nWe analyzed the data by using SPSS version 14.0 (SPSS, Inc, Chicago, Illinois). On the basis of the findings of the Kolmogorov-Smirnov test, the primary variables of interest were not normally distributed. Nonparametric tests were subsequently used to evaluate group or intake assessment differences (Spearman rank correlation and Mann-Whitney). We assessed the adequacy of calcium intake by comparing these 2 assessment methods to two-thirds of the Institute of Medicine's (12) adequate intake amount (used to determine inadequate intake) for women aged 51 years or older. Comparison of the adequacy classification as a measure of specificity of each calcium intake method was completed by using cross-tabulation to demonstrate the relationship between the 2 variables. Chi-square analysis was performed on the cross-tabulated variables. Significance was defined as α = .05.\n\nBODY.RESULTS:\nOf 245 eligible women from the parent study, 185 chose to participate in our study. Of the 185, 46 were black (mean [SD] age, 68.0 [4.85] years) and 139 were white (70.0 [6.0] years).\n\nBODY.RESULTS.COMPARISON OF CFFFQ TO 24-HOUR RECALL:\nCalcium from the CFFFQ was significantly correlated with 24-hour recall for all food groups (P < .001) except for vegetables (P = .08) (Table 1). Estimates obtained with the 24-hour recall method were significantly lower than CFFFQ estimates of total daily calcium, dairy, foods with dairy products, fruits, and vegetables (Table 2). The CFFFQ estimate of mean total calcium compared with the 24-hour recall was higher by 221 mg/d (P < .001), including within racial groups (195 mg/d for black women, P = .04, and 229 mg/d for white women, P < .001). There was no significant difference between the CFFFQ and 24-hour recall methods for calculated calcium intake for grains, meats, and \"other\" foods. This pattern was consistent for the total group and for black and white women separately (Table 2). The Bland-Altman plot illustrates the lack of agreement between methods of assessment for total daily intake (Figure). The positive correlation between the lack of agreement (error score on the Y-axis) and the daily intake indicates that for increasing levels of daily calcium intake, the 24-hour recall of calcium intake increasingly underestimated calcium intake compared with the CFFFQ (r = 0.41, P < .001). Figure.Scatterplot of agreement between the calcium-focused food frequency questionnaire (CFFFQ) and an interview for 24-hour recall of calcium intake. Estimated calcium difference is the difference in calcium intake found between the CFFFQ and the 24-hour recall. Average calcium intake is the average of calcium intake between the 2 methods. Open circles represent white women (n = 139) and closed circles represent black women (n = 46). 
The parallel solid and the broken lines represent the mean difference (2 SD) for all participants (221 [528] mg).The figure shows the lack of agreement between the 2 methods of assessing total daily intake. P < .001, r = 0.41.\n\nBODY.RESULTS.CALCIUM ADEQUACY ASSESSMENT:\nThe prevalence of inadequate intake (<800 mg) was 56% (n = 103) using the 24-hour recall and 45% (n = 83) using the CFFFQ method. Examining the cases where intake would be inadequate as measured by both tools, 64 (35%) women would be classified as having inadequate calcium intake. However, 39 (21%) women whose intake would be classified as adequate by the CFFFQ would be classified as inadequate by the 24-hour recall method. Only 19 (10%) who would be classified as having inadequate intake by the CFFFQ would have adequate intake as classified by the 24-hour recall. Compared with the self-reported 24-hour recall, the CFFFQ indicates more specificity for lower intakes.\n\nBODY.RESULTS.COMPARISON OF INTAKE AND SOURCE BETWEEN BLACK AND WHITE WOMEN:\nRegardless of dietary assessment method used, white women had higher calcium intakes than black women. When using the CFFFQ, white women reported consuming approximately 43% more calcium than did black women (mean [SD] 1,104 [632] mg for white women vs 768 [531] mg for black women, P < .001). When using the 24-hour recall method, mean calcium intake for white women was approximately 52% higher than intake for black women (875 [429] mg vs 573 [365] mg, P < .001). From the CFFFQ, the primary calcium source was dairy products (55% for black women and 57% for white women). White women obtained more calcium (630 [423] mg) from dairy than did black women (424 [373] mg, P < .004). Grains were the second highest calcium source, although grains provided a much lower percentage than dairy (13% of total calcium for each racial group). Calcium from grains primarily came from fortified foods. Dairy was also the primary calcium source when the 24-hour recall data were analyzed, and a significant difference in mean (SD) dairy calcium intake between racial groups was found (243 [273] mg for black women, 444 [360] mg for white women, P < .001).\n\nBODY.DISCUSSION:\nCalcium intake has received increased attention in the last decade because of its role in bone health and as a modifiable risk factor for osteoporosis. The number of calcium-fortified food products being developed and marketed has increased substantially (9). Although labeling for \"excellent\" (>200 mg/serving) and \"good\" (100-200 mg/serving) sources of calcium are regulated by the Food and Drug Administration, there is no federal regulation regarding which foods can be fortified with calcium or the degree of fortification (13). Together, these factors make discerning calcium intake difficult for researchers, clinicians, dietitians, and consumers. The primary findings of this study are that the CFFFQ identifies low calcium intakes and identifies key sources of calcium, including calcium-fortified foods. Other studies have included either no calcium-fortified foods (14-18), calcium-fortified mineral water (19), calcium-fortified juice alone (20), calcium-fortified juice and a grain product (21), or did not provide details regarding the inclusion of calcium-fortified foods in the assessment tool (5,7,22-26). In contrast, the CFFFQ is unique in providing 14 fortified food choices, including 1 in fruits, 8 in grains, and 5 in \"other\" food categories. Fortification of foods can contribute greatly to total calcium intake. 
From our data, inclusion of calcium-fortified foods in the diet has the potential to increase total daily calcium intake by 1,000 mg or more. Because calcium fortification of food products is not federally regulated, food items being fortified and levels of fortification vary at the discretion of the manufacturer. The lack of regulation may hinder the role that these foods can play in dietary calcium intake by affecting the consistency of the calcium content of the product. The second unique feature of this study is the categorization of calcium intake estimates into food groups. Although total calcium intake is the primary focus in nutritional assessment studies, the effectiveness of nutrition education and osteoporosis interventions can be enhanced by focusing on calcium intake from the food groups the participants usually get their calcium. This information can be used to design more realistic and targeted nutrition education messages. In our study, the largest percentage of total calcium for both white and black women came from dairy, followed by grains and fruits. Although the grain category contained several fortified food items, the fruit category contained only 1 fortified food item. Even though consumption of fortified foods was assessed, dairy was still the primary calcium source found in this study. An analysis (15) of food intake from the US Department of Agriculture's 1994-1996, 1998 Continuing Survey of Food Intakes by Individuals (CSFII) found that dairy contributed 42% of total calcium intake. A higher percentage of dairy contribution would have been expected from that study because it included no competing calcium-fortified foods. In addition, the CSFII reported 21% of total calcium from calcium-rich mixed foods (ie, foods with 2 or more items with calcium such as sandwiches or pizza). In our study, this category contributed only 4% to 5%. Ward et al (21) also reported that most calcium intake was derived from dairy, but a comparable calcium-rich mixed foods group is not identified. The food frequency assessment tool used by Ward et al (21) also ranked the fruits and vegetables group and grains as contributing 11% to 15% of total calcium, whereas the diet records used in the study estimated this contribution at 6% to 13%. Cook et al (15) included no calcium-fortified foods, and the Ward et al (21) study was limited to calcium-fortified juice and 2 grain products. There is a scarcity of information in the literature regarding calcium intake for black and white women. In our study, mean total calcium intake among black women did not meet the adequate intake value whereas it did among white women. Other studies conclude that calcium intake is greater in white women than black women (27,28). However, an article published from the parent study of our study using the same CFFFQ found no difference in total, daily, dietary, or supplemental calcium (8). In that analysis, black and white women were matched on age, socioeconomic status, and education level (n = 33 per group). The findings suggest that racial differences in calcium intake are somewhat influenced by socioeconomic status and education level. Similarly, a comparison between black, white, and American Indian women also reported no significant difference in dietary calcium between black and white women (7). In our study, grains were the second-highest calcium source for both black and white women. Grains are predominately calcium-fortified sources. 
This is consistent with findings from our previous work, which determined that dairy was the greatest source of calcium but more for white women, whereas black women consumed greater amounts of calcium-fortified grains (8). The prominent role of calcium-fortified foods in these studies reinforces the need for these sources to be carefully evaluated when measuring calcium intake.\n\nBODY.DISCUSSION.LIMITATIONS:\nThe values from the CFFFQ were typically higher than those from the 24-hour recall, with the exception of vegetables and \"other\" foods. This finding may be due in part to the limitations of using a 24-hour recall. Women may have identified foods in the CFFFQ but did not happen to consume those foods on the specific day of the recall, except for vegetables. However, Chee et al (24) found a similar trend in comparing a calcium-rich food frequency questionnaire with a 3-day diary for postmenopausal Malaysian women. Assessing dietary intake by any method has inherent limitations. Validating food frequency questionnaires can be complicated without biomarkers for comparison. Because most nutrients do not yet have reliable biomarkers, 24-hour recalls or food records are the usual standard instruments. Correlations between these methods are considered adequate within the range of 0.4-0.7 (29). Although most of the correlations between total calcium and calcium from food groups when comparing CFFFQ to 24-hour recalls are significant, only total calcium intake, dairy calcium intake, and fruits fall within this acceptable statistical range. Differences in reported calcium intake when comparing the CFFFQ and 24-hour recall could be attributed to a lack of consumer awareness of calcium fortification when responding to the 24-hour recall and a positive respondent bias on the CFFFQ. For example, participants could have responded more positively concerning calcium-fortified foods if consumption of that food was seen as a positive health behavior. Because calcium-fortified foods can essentially double calcium intake, consumer awareness of their own calcium-fortified food product consumption and interviewer probe for each of these food items are essential for an accurate estimate of calcium intake. To assist both the researcher and clinician, software for dietary assessment needs to be updated to include the calcium-fortified foods. Several food frequency questionnaires that assess calcium intake have shown strong correlation with food actually consumed (19,22-25,30,31). However, many studies report correlations but no difference between means or report findings regarding only total calcium intake and not calcium intake by individual food groups. In addition, many rapid assessment tools include either no or limited sources of calcium-fortified foods, and these foods may or may not be probed for on 24-hour recalls.\n\nBODY.DISCUSSION.CONCLUSIONS:\nOur results suggest that the CFFFQ could be used to determine inadequate intakes of calcium. Primary calcium sources for all women were dairy followed by grains, and calcium intake was higher in white women than in black women. The CFFFQ can be used to more accurately identify calcium intakes and usual calcium source, which is of interest because of the increasing availability of calcium-fortified foods. Calcium-fortified sources can go unreported in 24-hour recalls and diaries, leading to an underestimation of calcium intake. 
When 24-hour recalls are used to assess and quantify calcium intake, researchers and clinicians need to clarify with clients whether the foods they ate were fortified. The CFFFQ may be used to better quantify calcium intake, including both natural and fortified sources. Accurate assessment of calcium is critical in evaluating bone health risks. This information can aid in the development of effective interventions to increase calcium intake.\n\n**Question:** Compared to 24-hour recall what was the result of calcium-focused food frequency questionnaire (CFFFQ) on Estimation of underreported calcium as daily calcium intake increased?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decrease
|
931
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Cluster Randomized Trial of Text Message Reminders to Retail Staff in Tanzanian Drug Shops Dispensing Artemether-Lumefantrine: Effect on Dispenser Knowledge and Patient Adherence\n\n ABSTRACT:\nArtemisinin combination therapies are available in private outlets, but patient adherence might be compromised by poor advice from dispensers. In this cluster randomized trial in drug shops in Tanzania, 42 of 82 selected shops were randomized to receive text message reminders about what advice to provide when dispensing artemether-lumefantrine (AL). Eligible patients purchasing AL at shops in both arms were followed up at home and questioned about each dose taken. Dispensers were interviewed regarding knowledge of AL dispensing practices and receipt of the malaria-related text messages. We interviewed 904 patients and 110 dispensers from 77 shops. Although there was some improvement in dispenser knowledge, there was no difference between arms in adherence measured as completion of all doses (intervention 68.3%, control 69.8%, p [adjusted] = 0.6), or as completion of each dose at the correct time (intervention 33.1%, control 32.6%, p [adjusted] = 0.9). Further studies on the potential of text messages to improve adherence are needed.\n\nBODY.INTRODUCTION:\nPatient adherence to treatment is an important step in ensuring the effectiveness of artemisinin-based combination therapies (ACTs) for malaria.1 Incomplete adherence to recommended treatment can result in poor clinical outcomes, undermine the effectiveness of case management as a tool for malaria control, and may contribute to the selection of drug-resistant malaria parasites.2,3 ACTs are first-line treatment of Plasmodium falciparum malaria in the public sector of most malaria-endemic countries, with patient adherence reported to range widely from 39% to 100%.4,5 Many patients seek care for malaria in the private retail sector.6–9 The proportion of private sector clients obtaining ACTs has increased over time as ACTs have become more widely known, and their price has fallen, particularly in settings where they have been subsidized by programs such as the Affordable Medicines Facility-malaria (AMFm).10 Although access to effective antimalarials in the private sector may have increased, drug sellers may not always provide patients with appropriate doses or advice, raising concerns about patient adherence, though evidence is very limited. Only five studies have specifically assessed patient adherence to antimalarials obtained in the private retail sector.11–15 Of these, ACTs were used only in one study by Cohen and others (2012)11 in Uganda, which reported 66% of patients seeking care from drug shops were adherent. As ACTs become more available in the private sector, it becomes increasingly important to understand patient adherence and the effects of interventions intended to improve adherence. 
Supporting interventions, such as shopkeeper training, have previously succeeded in increasing the proportion of patients who receive and complete the recommended dose of non-ACT antimalarials,13,16 but such interventions have yet to be tested on a national scale or applied to ACTs. Mobile phones are a promising tool for the delivery of healthcare interventions as coverage of mobile networks and handset ownership increases.17–19 Text messaging, the least expensive mobile phone function, has been used in malaria control settings for commodity monitoring, disease surveillance, and pharmacovigilance.20 In addition, a trial in public health facilities in Kenya21 showed that 6 months of text message reminders improved public health workers' management of pediatric malaria by 24% points immediately after the intervention, and the improvements were sustained for at least 6 months after the intervention was withdrawn. The text message reminders were well accepted by health workers,22 inexpensive, and cost-effective.23 Given the concerns over inadequate patient adherence to ACTs delivered through the private retail sector, and the potential benefit of text-message interventions to enhance adherence, we designed and completed a cluster randomized trial in southern Tanzania to assess the effect of text message reminders to drug shop workers on patient adherence to artemether-lumefantrine (AL). We also evaluated the effect of text messages on dispenser knowledge and advice. The private retail sector in Tanzania is an important source of treatment of malaria,24,25 and ACT availability in such outlets increased after the implementation of AMFm in 2010. Another key intervention in Tanzania's private sector has been the creation of accredited drug dispensing outlets (ADDOs) by the Tanzania Food and Drug Administration (TFDA) to improve regulation of drug shops and quality of medicines. ADDOs are drug shops that have been upgraded through a process of training and certification and are allowed to sell a limited number of prescription-only drugs, including some antibiotics and ACTs.26,27\n\nBODY.METHODS.STUDY SETTING.:\nThe study was conducted in Mtwara, a rural region in southeastern Tanzania with 35.5% of the population in the lowest national wealth quintile.28 Prevalence of malaria among children 6–59 months of age in Mtwara was 17.4% in the 2011–2012 HIV/AIDS and Malaria Indicator Survey29 and 23% in a survey of patients seeking treatment at private drug shops.30 AL has been recommended as the first-line treatment of malaria since 2004, although it was not available in public health facilities until 2006. The recommended treatment regimen is six doses of AL over 3 days, with 1–4 tablets (20 mg artemether/120 mg lumefantrine) per dose depending on the weight/age band. National guidelines state that the second dose should be taken 8 hours after the first dose, followed by the remaining doses morning and evening of the second and third days.31 In Mtwara, ADDO accreditation commenced in 2006, and officially only accredited drug stores are allowed to function. In reality, a large number of non-accredited shops exist because of a lack of training, unpaid fees, or administrative delays. 
These shops are tolerated by regulating authorities and considered \"prospective ADDOs.\" Before AMFm, ACTs were not commonly available in ADDOs in Mtwara, and dispenser training on ACTs was limited, but ACT availability significantly increased after AMFm implementation, with 88% of ADDOs stocking ACTs in Mtwara in August 2011.32 To support AMFm roll out, TFDA offered a 1 day refresher training that included treatment of malaria with ACTs to dispensers with previous nursing training in Mtwara in August 2011.\n\nBODY.METHODS.SAMPLE SIZE CALCULATIONS.:\nWe based the sample size for this two arm cluster randomized trial on data from the public sector in Tanzania, where patient adherence to AL was 65–98%(Khatib R, unpublished data).33–35 We assumed lower adherence to AL obtained at ADDOs in Mtwara (60%) and powered the study to detect a 15% point increase in the intervention arm. We wanted to recruit a small number of patients per cluster to reduce the potential bias caused by increasing community awareness of the study's objectives. Assuming a coefficient of variation of 0.25, 80% power, 5% significance, and 20% loss to follow-up, 13 patients from 36 outlets in each arm were required, a total of 468 patients per arm.\n\nBODY.METHODS.SELECTION OF STUDY ADDOS.:\nIn May 2012, we conducted a census of all drug shops in Mtwara (ADDOs or prospective ADDOs). Data were collected on the characteristics of owners and dispensers, ACT stocks and sales, and global positioning system (GPS) coordinates. ADDOs were excluded from the sampling frame if they had sold fewer than five antimalarial treatments in the previous week, no dispensers used a mobile phone, the shop was located on the border with Mozambique or was not accessible, or the owner refused to participate (Figure 1). The final sampling frame consisted of 131 ADDOs. The 82 ADDOs were selected sequentially at random, with any ADDOs within 400 m of a selected ADDO, or any ADDOs where staff from a selected ADDO also worked, removed from the sampling frame. The selected ADDOs were then stratified by location in urban or rural wards, and the intervention was randomly allocated to 29 of 57 urban ADDOs and 13 of 25 rural ADDOs. Figure 1.Consort-like style flow diagram of trial.\n\nBODY.METHODS.INTERVENTION DESIGN.:\nWe designed seven content messages about advice that dispensers should provide when dispensing AL (Figure 2). The messages were derived from the government refresher training booklet and reflected the recommended practices for dispensing AL. Messages were pilot tested in ∼20 ADDOs in a semi-urban district outside of Dar es Salaam. Dispensers at these ADDOs were sent each potential message in turn and asked to explain their understanding of the meaning and relevance of each message. Dispensers were also asked if they would find receipt of the messages helpful, how often they would like to read the messages, and whether complementary components such as quotes or questions would encourage reading. Phrasing and frequency of the messages were adjusted based on feedback received during the pilot. Figure 2.Content of text messages sent to dispensers in the intervention arm. Before sending the first message, the 42 ADDOs in Mtwara randomized to the intervention arm were visited to invite participation and collect an updated list of mobile numbers for all dispensers. Messages began in July 2012 and were sent in random order once per day Monday–Friday for the first 4 weeks, followed by once per day Monday, Wednesday, and Friday for the next 10 weeks. 
Messages were written in Swahili and scheduled in advance on an automated platform, with each message paired with a different complementary component each time to promote interest. Complementary components consisted of inspirational quotes or proverbs or, once per week, a quiz question on message content that earned correct respondents free air time (500 TSH or $.30). Over the 14-week period, 49 messages were sent to each of 60 dispensers, and detailed delivery reports were kept for each message.\n\nBODY.METHODS.DATA COLLECTION.:\nFrom September through November 2012, dispensers at ADDOs in the intervention and control arms were visited by study supervisors and given a standard introduction about study objectives. To limit patients' awareness of our primary interest in adherence, which could have led to a biased assessment, dispensers were told we were studying how patients chose to treat fever and would visit some, but not all, patients at their homes. They were asked to fill out a registration form for all patients purchasing any treatment of fever, including the day and time of their ADDO visits, the patients' names, the drugs purchased, and a description of where the patients lived. Dispensers were provided with blister packs of AL that they could then sell to patients needing treatment of malaria. Study staff visited ADDOs every day to check and collect registration forms for 1–3 weeks, or until 12–15 patients purchasing AL were registered if quicker. Eligible patients who obtained AL were identified from the registration forms and assigned patient identification numbers (recorded on follow-up forms). Patients were followed up ∼68–72 hours after their ADDO visit, according to a predefined schedule, and all attempts to locate and interview patients were recorded. Where written informed consent was given, patients or their caregivers were asked about demographic and socioeconomic characteristics, treatment-seeking history, illness symptoms, detailed information about each dose of AL taken and the advice provided by the ADDO dispenser. Blister packs were requested for a pill count, and blood samples were collected for a blood smear and a malaria rapid diagnostic test (mRDT) (Pf-specific from ICT Diagnostics, Cape Town, South Africa). Blood smears were stained in the field and transported to the Ifakara Health Institute, where they were double-read by two microscopists blinded to results from each other and the mRDT, with discrepant results read by a third microscopist. Adherence was defined in two ways.5 Patients were considered to have \"verified completed treatment\" if they reported taking all doses by the time of the follow-up visit and a pill count verified that no pills remained in the blister pack, if available. The second, more stringent definition included a time component, based on patient reports of the time each dose was taken using the Swahili times of day: \"alfajiri\" (early morning), \"asubuhi\" (morning), \"mchana\" (afternoon), \"jioni\" (evening), \"usiku\" (night), and \"usiku sana\" (late night). Patients were considered to have \"verified timely completion\" if they took the second dose at the Swahili time of day corresponding with 8 hours after the first dose, and then took each of the remaining doses at the Swahili time of day corresponding with 12 hours after the previous dose, verified by the absence of pills in the blister pack. 
For ease of reading, we hereafter refer to these definitions as \"completed treatment\" and \"timely completion.\" After completion of patient interviews, dispensers working at the study ADDOs were interviewed on ADDO characteristics, demographics, receipt of study text messages, and advice they would give to patients when dispensing AL. To assess knowledge corresponding with message content, we used an open-ended question, e.g., \"I would like to ask about which advice you think you should provide to a person of any age taking treatment for malaria. For the following topics (e.g., \"when to take the second dose\" or \"what to do with the pills if the patient feels better,\" etc.) tell me if advice on this topic is important or not, and if so, what advice you would provide.\" Responses were recorded verbatim and evaluated by the study leader using predetermined criteria based on message content.\n\nBODY.METHODS.DATA ENTRY AND ANALYSIS.:\nAll patient and dispenser interview data were collected using personal digital assistants, and data extracted from study forms (census, registration, and follow-up forms) were double entered into Microsoft Access databases (Microsoft Corp., Redmond, WA). Data were analyzed in Stata 11.0 (Stata Corporation, College Station, TX). Primary outcomes were analyzed by intention-to-treat. Comparison of adherence was based on a t test of the proportion adherent in each cluster. A list of potentially important confounders was identified a priori consisting of ADDO accreditation certificate, number of customers purchasing ACTs in the previous week, dispenser medical qualification, dispenser training on ACTs, patient age, and patient education. Adjustment for variables on this list found to be unbalanced between arms was performed by fitting a logistic regression model to the individual data and performing analysis on the aggregated residuals, as described by Bennett and colleagues.37\n\nBODY.METHODS.ETHICS.:\nAll questionnaires, consent forms, and other study documents were translated into Swahili and piloted before use. Written informed consent was collected from dispensers before census, patient registration and interview, and from patients or their caregivers prior to interview. The study protocol was approved by the ethical review boards of Ifakara Health Institute and London School of Hygiene and Tropical Medicine. The Centers for Disease Control and Prevention (CDC) investigators provided technical assistance in design and analysis but were not engaged in data collection. The trial is registered with Current Controlled Trials, ISRCTN83765567.\n\nBODY.RESULTS:\nFigure 1 shows the trial profile. Of the 82 randomized ADDOs, 37 from the control arm and 40 from the intervention arm participated in the study. The number of registered patients eligible for follow-up was 537 in the control arm and 518 in the intervention arm, with ∼15% of patients in each arm lost to follow-up. Most outlets were in urban wards (70% in both arms), had a single dispenser, and had at least some ACTs in stock on the day of interview (Table 1). Of 51 dispensers in the control arm and 59 in the intervention arm, ∼80% in both arms were female and had a low-level medical qualification, mostly nurse assistants (Table 2). Though low in both arms, more ADDOs in the control arm compared with the intervention arm were able to show an accreditation certificate (43% versus 20%, respectively). 
However, the difference in the percentages of dispensers that had received training on ACTs was not as pronounced (69% in the control arm versus 60% in the intervention arm), though the median year of training was more recent in the control arm (2011 versus 2009). Characteristics of patients were well balanced between arms (Table 3). A high percentage of patients (36% in the control arm and 38% in the intervention arm) had sought care before attending the study ADDO, with most patients going to a kiosk or general shop and only 7% of patients in the control arm and 4% in the intervention arm going to a public health facility. Approximately 90% reported symptoms of fever or headache, and approximately half had upset stomachs or nausea. Based on an mRDT taken at the time of follow-up, 28% in the control arm and 25% in the intervention arm tested positive, with only 1.4% and 1.6%, respectively, testing positive by study blood smear. (Some degree of discrepancy is expected because of the persistence of HRP2 detected by the mRDT.) Seventy percent of dispensers received at least 75% of the text messages. The median percentage of messages received was 86%, with 20% of dispensers receiving no messages (Figure 3). Table 4 presents results of the dispenser interviews on knowledge of advice to provide patients when dispensing AL. Dispensers in the intervention arm reported slightly better knowledge of the correct AL regimen for adults in the intervention arm compared with the control arm (90% versus 78%; adjusted prevalence ratio [aPR] = 1.2 (95% confidence interval [CI]: 0.95, 1.5); p [adjusted] = 0.0748), though knowledge of the correct regimen for a child aged four weighing 20 kg was lower than for adults in both arms (75% versus 64%; aPR = 1.2 [0.85, 1.7]; p [adjusted] = 0.2). Dispenser knowledge was considerably higher in the intervention arm than the control arm on advice to take AL with fatty food (60% versus 20%; aPR = 3.4 [95% CI: 1.6, 7.1]; p [adjusted] < 0.0001) and to continue to take AL if minor side effects occur (68% versus 43%; aPR = 1.6 [95% CI: 1.0, 2.4]; p [adjusted] = 0.0188). However, surprisingly high dispenser knowledge (87–99%) was recorded in both arms on advice to complete treatment even if feeling better, advice to return to the ADDO or go to a health facility if the condition worsens, and advice to take the second dose after 8 hours (Table 4). Knowledge on advising patients to take a replacement dose in case of vomiting within half an hour of taking a dose was lower (55% versus 50%), with no difference observed between arms. Figure 3.Percentage of text messages received by dispensers in the intervention arm. Table 5 shows that ∼60% of patients in both arms reported being told how to take AL correctly, with similar percentages reporting being told to take the second dose after 8 hours and to complete treatment even if feeling better, indicating that dispensers were providing some advice even in the absence of the intervention. However, no differences were found between control and intervention arms for any piece of advice. Less than 5% of patients in both arms reported being told about vomiting, side effects, or taking AL with fatty food, and < 10% in both arms took the first dose of AL at the ADDO. There was no difference in patient adherence between arms (Table 6). 
Completed treatment was 70% in the control arm and 68% in the intervention arm (adjusted risk ratio [aRR] = 0.96 [95% CI: 0.82, 1.1]; p [adjusted] = 0.6), with a similar percentage of patients adherent to each of the four age-appropriate blister packs and no important differences between arms. The mean number of doses taken by non-adherent patients was four in both arms, and the most common reported reasons for non-adherence included planning to take the medication later, forgetting to take the tablets, feeling better, and other reasons (Figure 4). Timely completion was much lower, with 33% of patients in both arms taking all doses at appropriate times. A per protocol analysis, excluding patients attending ADDOs where at least one dispenser did not receive any messages, had no impact on results. Figure 4.Reasons given by patients/caretakers for not completing treatment.\n\nBODY.DISCUSSION:\nWe have reported results from a cluster randomized trial of a text message intervention directed at drug shop dispensers to improve patient adherence to ACTs in Tanzania. The intervention increased dispenser knowledge of some components of advice to provide patients when dispensing AL, but knowledge of other components was already very high in the absence of the intervention. The improvements in knowledge did not translate into an increase in information patients reported receiving, even though patients commonly reported receiving some advice. There was no difference in adherence of patients to the ACT regimen between the arms. Adherence by completed treatment was < 70% to ACTs obtained from ADDOs, comparable to the 66% adherent by the same definition in the study by Cohen and others (2012)11 in the private retail sector in Uganda. However, timely completion was only 33% in our study, indicating that even patients who complete treatment may do so with poor adherence to the recommended schedule. Both of these results are comparable to reported adherence to ACTs obtained from public health facilities, where studies under real life conditions have found adherence of 64–77% for completed treatment verified by pill count and 39–75% for timely completion verified by pill count,5 including one study from Tanzania.35 Two other studies from public health facilities in mainland Tanzania using different definitions and methods have reported higher adherence (88.3% and 90%).33,34 Although the text message intervention targeting dispensers was not effective at improving patient adherence, there was a marked increase in dispenser knowledge of advising patients to take AL with fatty foods or milk and to continue AL even if minor side effects occurred. However, knowledge in both arms was surprisingly high, particularly on advising patients to take the second dose after 8 hours, to complete treatment even if feeling better, and to seek further care if the condition worsens. This could reflect the recent ADDO trainings in Mtwara, raising the possibility that the intervention's impact could have been different in the absence of recent training. Knowledge did not necessarily result in the provision of advice, even though some advice was provided. For example, 98% of dispensers in both arms knew it was important to advise patients to complete treatment even if feeling better, but only 60% of patients reported receiving this advice. 
Other advice was much less commonly provided, with < 5% of patients in both arms reporting being advised on what to do in case of minor side-effects or vomiting, even though dispenser knowledge of this advice was much higher. This may be because the dispensers did not deem the advice helpful to their business or to the patients, as it could heighten a negative perception about the effects of their products. Dispensers may have also perceived that clients were in a hurry or not receptive to advice. Alternatively, patients or caretakers may not have recalled the advice given to them several days before. Exit interviews or mystery shopper surveys may have been useful in assessing whether advice was communicated, but these methods also have limitations, such as greater potential for a Hawthorne effect and ethical challenges. We intentionally avoided telling dispensers that the purpose of our study was to improve patients' adherence. Although dispensers receiving text messages were aware that the content focused on advising patients about the correct use of AL, we did not mention the objective to dispensers in either arm to avoid this information being relayed to patients, who might then change their behavior because they expected their adherence to be monitored.5 However, if this intervention were to be scaled up outside the study context, one might include greater emphasis on adherence and its value in communications with dispensers, which might in turn increase the likelihood that they would provide appropriate advice. The evaluation of patient adherence relied on self-reported data from patients or their caregivers, which may be subject to recall and social desirability bias. We inspected blister packs for the 80% of patients who could provide them and identified only 10 patients (1%) that had reported completing all doses but had pills remaining. On the other hand, patients may have removed pills from the packaging to consume at a later time. Even though Swahili times of day, based on sunrise and sunset, were used to assess timely completion, patients or their caregivers may not have remembered when each dose was taken or may have provided the expected responses to avoid being seen as negligent. A similar text message intervention targeting health workers in public health facilities in Kenya found significant improvements in health worker case management of pediatric malaria.21 The primary outcome measure by Zurovac and colleagues included completion of four treatment tasks (e.g., prescribing AL) and at least four of six dispensing and counseling tasks, of which the biggest improvements were seen in giving the first dose at the health facility and advising patients to take the second dose after 8 hours, take each dose after a meal, and what to do in case of vomiting. Although we found strong evidence in the intervention arm of improved dispenser knowledge of advice to take each dose with a fatty meal, we recorded high dispenser knowledge in both arms of advice to take the second dose after 8 hours and no difference between arms in knowledge of advice on what to do in case of vomiting. We also recorded < 10% of patients in either arm taking the first dose of AL at the ADDO, even though drinking water was available at many ADDOs. 
The contrasts between our findings and those of Zurovac and colleagues could reflect the private drug shop setting, as we found patients' relatives often seek care at ADDOs on behalf of patients, in contrast to public health facilities where patients themselves must be present for a clinical exam. Health workers in public health facilities may also be more accustomed to taking on advisory roles than dispensers in private drug shops and less concerned with making a profit.37 Interventions involving training of dispensers in the private retail sector, although limited in number, have improved dispenser knowledge across a range of diseases and settings, but the impact of improved knowledge on dispenser and patient behavior has been mixed.37,38 Even fewer studies have reported effects of an intervention targeted at retail dispensers on patient adherence to antimalarial drugs. One study from 1998–2001 in Kilifi, Kenya found that trained shopkeepers were willing to take on an advisory role, resulting in both increases in advice and the proportion of patients taking adequate doses of chloroquine and sulfadoxine pyrimethamine or sulfadoxine-pyrimethamine.14 Although receiving instructions has been associated with patient adherence to antimalarials in several studies in the public and private sectors,11,39,40 other factors might also influence patient adherence, including patient education, higher socioeconomic status, treatment-seeking behavior, understanding the instructions, knowledge and perceptions of the illness or of the drug, and satisfaction with information received or with the drug.5 The private retail sector is likely to continue to be an important source of treatment of malaria and there is a need to maximize patient adherence to ACTs. Given the effectiveness of text message reminders on health worker case management in Kenya and the low cost of this intervention, there is potential for further evaluations of text message interventions targeted at dispensers in the private retail sector to improve dispenser knowledge, advice provided, and patient adherence, particularly in settings where dispensers have not received recent training on malaria. Such interventions should ensure message content addresses gaps in dispenser knowledge and would benefit from additional research on dispenser readiness to provide advice, and client receptivity to their advice. There should also be further consideration of how best to design the evaluation so that dispensers are motivated to communicate the importance of adherence without biasing study results. The double gap between dispenser knowledge and providing advice and then patients receiving advice and being adherent may also call for other interventions to enhance adherence. Text message reminders to patients have been shown to be a low-cost approach to improve patient adherence to antiretroviral therapy for HIV41–43 and have been used in two recent studies to increase patient adherence to malaria test results and treatment (Goldberg J, unpublished data).44 However, scaling up a text message intervention targeted at malaria patients in Tanzania would require an increase in personal mobile phone use among patients most at risk of malaria,45 as only about half of the households in rural areas own a mobile phone,29 and the phone may be shared among household members. In contrast, nearly all dispensers censused in Mtwara regularly used at least one mobile phone. 
Other interventions that have been shown to improve patient adherence to antimalarial drugs include packaging and community education.46 ACTs are now mostly available in factory-packaged unit-dose blister packs with illustrated instructions; therefore, additional room for improvement may be limited. One possible modification could be improved instructions in local languages. Community education through communication campaigns could be helpful in emphasizing the importance of taking all doses, but it may be challenging to communicate the details of when and how to take each dose to the general population. Finally, the introduction of mRDTs in the private sector might have positive implications for patient adherence, especially if also combined with dispenser advice.5\n\nBODY.CONCLUSION:\nText message reminders improved some aspects of dispenser knowledge of advice to provide to patients when dispensing AL in the private sector. However, patients in the intervention arm were not more likely to report receiving improved advice and did not have higher adherence than patients in the control arm. Adherence to AL among patients in both arms was suboptimal, highlighting the need for studies evaluating other interventions to improve adherence to ACTs obtained in the private retail sector.\n\n**Question:** Compared to no text message what was the result of receive text message reminders about what advice to provide when dispensing artemether-lumefantrine (AL). on completion of each dose at the correct time?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
932
|
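The adherence result in the record above (completed treatment 70% in the control arm versus 68% in the intervention arm, aRR = 0.96, 95% CI 0.82–1.1) rests on a risk-ratio comparison. The sketch below is a rough, unadjusted illustration only: the trial's own estimate adjusts for clustering by ADDO and other covariates, which this crude calculation ignores, and the denominators of 100 patients per arm are hypothetical because the actual arm sizes are not given in this excerpt.

```python
import math

def crude_risk_ratio(events_a, n_a, events_b, n_b, z=1.96):
    """Unadjusted risk ratio (arm A vs arm B) with a Katz log-based 95% CI."""
    risk_a, risk_b = events_a / n_a, events_b / n_b
    rr = risk_a / risk_b
    # Standard error of log(RR) for two independent binomial proportions.
    se_log_rr = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, (lower, upper)

# Hypothetical counts: 68/100 completed treatment in the intervention arm,
# 70/100 in the control arm (the real denominators are not shown in this record).
rr, ci = crude_risk_ratio(68, 100, 70, 100)
print(f"crude RR = {rr:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

A ratio this close to 1 with a confidence interval spanning 1 is the pattern behind a "no significant difference" reading.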
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Effects of transcranial direct current stimulation on temperature and pain perception\n\n ABSTRACT:\nTranscranial direct current stimulation modifies cortical excitability and in consequence some cerebral functions. In the present study we aimed to elucidate whether tDCS could affect temperature and pain perceptions in healthy subjects testing different stimulation parameters. A total of 20 healthy subjects were studied by means of quantitative sensory testing. Two different experiments were performed. First, we studied the effects of 15 minutes 2 mA anodal transcranial direct current stimulation applied over left M1 and parietal cortex in two separated sessions. Then, we tested the effects of 5 minutes tDCS over M1 by means of a sham controlled design to optimize the possibility to study minimal effects of tDCS using different polarities (cathodal and anodal) and intensities (1 and 2 mA). 2 mA anodal tDCS, when applied for both 15 and 5 minutes over the motor cortex, increased cold perception threshold. Conversely, motor cortex cathodal tDCS modulated cold perception threshold only when 1 mA intensity was used. M1-tDCS can modify the temperature perception; these effects are polarity and intensity dependent. As stimulation intensity seems critical to determine the effects, we suggest that for clinical application strong anodal tDCS (>1 mA) or weak cathodal tDCS (<2 mA) should be used for pain control.\n\nBODY.INTRODUCTION:\nTreatment of chronic pain (CP) is often difficult and satisfactory pain control is not always achieved. However, invasive motor cortex stimulation (MCS) and non-invasive brain stimulation (NIBS) techniques have emerged as a potential therapy and have been reported to be highly successful in a proportion of treated patients1. However, although MCS is generally effective in pain relief, a conspicuous number of patients are not-responders. A high number of not-responders is unwanted and favours the expansion of the less expensive NIBS techniques over the neurosurgical techniques. Non-invasive methods of brain stimulation, including repetitive transcranial magnetic stimulation (rTMS) and transcranial direct current stimulation (tDCS) are able to induce long lasting effects within the human motor cortex2–4 and have been used to treat pain. Anodal tDCS over primary motor cortex (M1) promotes an increase in cortical excitability, while cathodal tDCS induces a reduction of cortical excitability5, furthermore it was shown that the longer the duration of tDCS, the longer the induced after-effects6, 7. As far as it concerns cathodal tDCS, increasing intensity from 1 mA to 2 mA can switch motor cortex excitability inhibition into excitation8. 
Also, as intensity increases, the induced electric field goes deeper into the brain so that it is conceivable that the orientation and distance of the axonal or dendritic-somatic axis with respect to the electrical field could affect the resulting biological and clinical effects unexpectedly9. Finally, it is important to consider that little variations of electrode size area, shape, or placement (montage) can strongly influence the \"diffusion\" of the current and the geometry of the induced DC fields into the brain10–13. Many prior studies have evaluated the effects of tDCS on different temperature and pain perception in healthy volunteers. A list of the most relevant works on this issue is given in Table 1\n14–21. It should be noted a great divergence across studies about the effects of tDCS on the studied parameters. For this reason further research is warranted.Table 1Most relevant papers about the effects of tDCS on temperature and pain perception in healthy volunteers.ArticlePolarityStimulation Intensity and durationEvaluation protocolResultsTargetBorckardt et al.14\nAnodal Sham HD-tDCS2 mA, 20 minutesQSTAnodal HD-tDCS decreased heat and cold perception thresholds with no effects on heat pain threshold and just a small effect on cold pain thresholdM1Bachmann et al.15\nAnodal Cathodal Sham1 mA, 15 minutesQSTCathodal tDCS increased cold perception thresholdM1Grundmann et al.16\nAnodal Cathodal Sham1 mA, 15 minutesQSTCathodal tDCS increased cold detection thresholds in both hands and warm detection thresholds only in the contralateral handS1Zandieh et al.17\nAnodal Cathodal Sham2 mA, 15 minutesCold Pressor TestAnodal tDCS led to increment in pain thresholdM1Jurgens et al.18\nAnodal Cathodal Sham1 mA, 15 minutesQSTNeither anodal nor cathodal tDCS significantly were able to change somatosensory and pain perceptionM1Mylius et al.20\nAnodal Cathodal Sham2 mA, 20 minutesQSTAnodal tDCS led to an increase of tolerance to heat painDLPCCsifcsak et al.21\nAnodal Cathodal Sham1 mA, 10 minutesLaser evoked potentials (LEP)Cathodal tDCS significantly reduced the amplitude of N2 and P2M1Boggio et al.22\nAnodal Sham2 mA, 5 minutesPeripheral electrical stimulation (PES)Anodal tDCS of M1 increased both perception and pain thresholds, whilst stimulation of the DLPFC increased pain threshold only.M1 DLPC V1\n Quantitative sensory testing (QST) is a standardized method that allows the evaluation of temperature and pain perceptions by means of different kinds of thermal stimuli (warm, hot inducing pain, cold and cold inducing pain) thus providing information about each of the physiological pathways involved22–24. More in detail, QST provides the opportunity for assessment for Aδ-fibres and C-fibres that are involved respectively in temperature cold and warm perception and nociception (cold pain and hot pain)25–27. To elucidate whether tDCS could affect temperature and pain perceptions in healthy subjects, we performed a double-blind study in which we evaluated the effects of 15 min 2 mA-tDCS on temperature and pain perception as measured by QST: we chose this tDCS protocol since it is commonly used to obtain pain relief in clinical applications28. In order to disclose a possible site-specificity of tDCS effects we applied the neuromodulatory protocol over M1 and parietal cortex (Pcor). 
Moreover, in a separate experimental session, we tested the effects of a very short tDCS protocol (5 min, cathodal and anodal, M1) by using a placebo-controlled double blind study to disclose the polarity and intensity dependency of the effects of tDCS on thermal and pain perceptions. Contemporary, by means of these experiments, we aimed to test which QST parameters can be used as a biomarker of the effects of tDCS, possibly to be used in future study about its application in pain treatment.\n\nBODY.METHODS.SUBJECTS:\nTen healthy volunteers (7 women, 3 men, mean age 31.9 ± 4.9 (SD) years, range 27–41 years) were studied by applying anodal tDCS 2 mA for 15 minutes over two different cerebral targets, M1 and Pcor. Moreover, eleven healthy volunteers (9 women, 2 men, mean age 31.5 ± 7.5 (SD) years, range 24–39 years) were studied with a very short duration tDCS protocol using a sham controlled design. Just one subject participated in both experimental protocols. All subjects gave informed consent prior to participation; the study was approved by the Ethical Committee for Clinical Research (Toledo, Spain) and was conducted in accordance with the Declaration of Helsinki.\n\nBODY.METHODS.EXPERIMENTAL DESIGN:\nSubjects were seated in a comfortable chair, with their arms and the whole body at rest. Sensory testing was evaluated prior to, immediately after, and 25 minutes after tDCS (times pre, post 0, and post 25, respectively). See Figure 1 for a schematic representation of the experimental protocols. The experimental protocols were randomly administered across the subjects and were separated by at least one week. The entire testing algorithm (temperature perception testing and pain thresholds) lasted for an average time of 12 minutes.Figure 1Schematic representation of the experimental setup.\n\n\nBODY.METHODS.MOTOR CORTEX LOCALIZATION: TRANSCRANIAL MAGNETIC STIMULATION:\nTranscranial magnetic stimulation (TMS) was performed with a Magstim 200 magnetic stimulator (Magstim Company, Whiteland, Dyfed, UK) and a figure-of-eight magnetic coil (diameter of one winding, 70 mm; peak magnetic field, 2.2 Tesla). The magnetic stimulus had a biphasic waveform with a pulse width of about 300 μs. During the first phase of the stimulus, the current in the center of the coil flowed towards the handle. The coil was placed tangentially on the scalp inducing a posterior-anterior current in the brain. We determined the optimal position for activation of the right first dorsal interosseous (FDI) by moving the coil in 0.5 cm steps around the presumed motor hand area of the left motor cortex. The site where stimuli of slightly suprathreshold intensity consistently produced the largest motor evoked potentials (MEPs) with the steepest negative slope in the target muscle was marked as the \"hot spot\".\n\nBODY.METHODS.PARIETAL CORTEX LOCALIZATION:\nTo localize the optimal position of the electrodes to stimulate parietal cortex, we used as reference the motor hand area of the right first dorsal interosseus (FDI) localized by TMS (hot spot) and moving 2 cm posteriorly in the parasagittal direction29.\n\nBODY.METHODS.EXPERIMENT 1: COMPARING THE EFFECTS OF THE ANODAL TDCS OVER THE MOTOR AND PARIETAL CORTICES:\nWe tested the effects of a protocol similar to those commonly used to obtain pain relief in clinical applications28 (anodal tDCS, 15 min, 2 mA, M1 or Pcor) on two different cortical targets. 
With this experiment, we tested if there are effects on QST parameters that can be used as a biomarker of the effects of the tDCS. Furthermore, we tested the effects over the parietal cortex as some studies suggest that this target can be used for pain control23. It should be noted that this experiment was developed to test if Pcor stimulation is effective in obtaining pain control and changes in temperature perception in healthy humans. We compared this site of stimulation to a commonly used protocol in pain treatment (Anodal tDCS 15 min over M1) and for this reason we did not use, in this first experiment, sham stimulation as a control condition. All the subjects underwent two different experimental sessions: (1) anodal motor cortex tDCS, 2 mA, 15 minutes, (2) anodal parietal cortex tDCS 2 mA, 15 minutes. Following the skin preparation to reduce impedance, saline-soaked sponge electrodes (5 × 7 cm) were positioned over the motor/parietal cortex and contralateral orbit, using the hot spot (for motor cortex stimulation) or 2 cm posteriorly in the parasagittal direction (for sensory cortex stimulation) as the center of the cortical electrode. Stimulation was applied for 15 min at a current of 2 mA (8 s fade in/fade out for a total stimulation time of 916 s). The current intensity and duration were within the established safety limits30. For anodal tDCS, the anode was positioned above the motor cortical representation of the right FDI while the cathode was placed above the contralateral orbit.\n\nBODY.METHODS.EXPERIMENT 2: PLACEBO-CONTROLLED STUDY ON POLARITY AND INTENSITY EFFECTS OF TDCS OVER THE MOTOR CORTEX:\nIn Exp 1, we used the M1 stimulation (commonly used protocol in pain treatment) as a control for the Pcor stimulation. As we demonstrated that Pcor stimulation was ineffective in obtaining a significant change in any studied parameters (see results), in Exp 2 we tested the polarity and stimulation intensity dependence of the effects of M1 stimulation. Thus, in Exp 2 we added a sham tDCS as a control condition, cathodal tDCS to also test a different polarity, and two different stimulation intensities (1 mA and 2 mA). We tested the effects of a very short tDCS protocol (5 min) using a sham controlled design to optimize the possibility to study minimal effects of tDCS over the motor cortex of different polarities (cathodal and anodal) and intensities (1 and 2 mA). For this, we evaluated how tDCS applied to the motor cortex for 5 min was able to induce a change in sensory and pain perception, as measured by QST. This study was sham-controlled. All the subjects underwent five different experimental sessions: (1) anodal tDCS of 1 mA; (2) anodal tDCS of 2 mA; (3) cathodal tDCS of 1 mA; (4) cathodal tDCS of 2 mA; (5) sham tDCS. Following the skin preparation to reduce impedance, saline-soaked sponge electrodes (5 × 7 cm) were positioned over the motor cortex and contralateral orbit12, using the hot spot identified with TMS as the center of the cortical electrode. Stimulation was applied for 5 min at a current of 1 mA or 2 mA (8 s fade in/fade out for a total stimulation time of 316 s). The current intensity and duration were within the established safety limits30. Sham stimulation involved the same electrode placement and duration as the real conditions; however the constant current was delivered for only 30 s. Most subjects experienced a mild tingling sensation at the site of electrode contact that was independent of polarity and usually subsided after a period of a few seconds. 
For anodal tDCS and sham stimulation, the anode was positioned above the motor cortical representation of the right FDI while the cathode was placed above the contralateral orbit. For cathodal tDCS the opposite montage was used.\n\nBODY.METHODS.QUANTITATIVE SENSORY TESTING:\nQST was conducted by using a Thermal Sensory Analyzer TSA 2001-II (MEDOC, Ramat Yishai, Israel)31, 32. This device uses computer controlled Peltier elements to heat or cool a contact plate in the thermode to the desired temperature. The thermode area was 3 cm × 3 cm (9 cm2). The entire thermode stimulating surface was placed in contact with the skin-testing site and secured by a Velcro band without stretch. Cold and warm thresholds were measured by stimulating right thenar eminence. The initial resting temperature of the thermode was 32 °C and the rate of the temperature change was 0.5 °C/s down or up for cold and warm trials, respectively. Stimulus magnitude was defined in each trial as the difference between the final temperature and the resting temperature of the thermode. Room temperature was controlled and kept between 23–24 °C.\n\nBODY.METHODS.WARM AND COLD THRESHOLD DETERMINATION:\nThe method of levels (MLE) was used, because its measurements have been shown not to be influenced by reaction time33. A single stimulus of predetermined magnitude was presented and the subject indicated, after the cessation of the stimulus, whether it was felt or not. The initial stimulus step size was predetermined at a step of 3 °C (above or below the starting temperature of 32 °C). Subsequently, stimuli were decreased by intermediate steps of 1 °C, until the subject gave a negative response. The subsequent stimuli were increased or decreased by fine search steps of 0.3 °C. The direction changed according to the response, increasing for negative response (not felt), decreasing for positive response (felt). Null stimuli were randomly included. Four negative responses were required, after fine search had begun, to terminate the test. Threshold was determined by taking the mean of all results during the fine search step. Thresholds were expressed as the difference of the obtained value with respect to the starting temperature of 32 °C.\n\nBODY.METHODS.HEAT AND COLD PAIN THRESHOLD DETERMINATION:\nCold pain thresholds were measured over five trials by decreasing the temperature at a rate of 1 °C/s until the subject indicated that the temperature became painful. The threshold for heat pain was measured over five trials in which the temperature was increased until the stimulus was perceived as painful. Pain thresholds were determined by taking the average of the five successive trials. Thresholds were expressed as the difference of the obtained value respect to the starting temperature of 32 °C. The order of heat and cold detection and pain threshold measurements was randomised across participants.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nCold and warm thresholds and cold and heat pain thresholds in baseline condition were compared using unpaired t test. For this comparison only the baseline cold and warm thresholds obtained for anodal (2 measures) and cathodal (2 measures) and sham stimulation protocols have been averaged. The cold and warm perception threshold values were computed across subjects for both real and sham conditions and for the three time points (baseline, post 0 and post 25). 
For the experiment 1, the effects of motor and sensory cortex 15 minutes tDCS on thermal thresholds (warm and cold) and pain thresholds (hot and cold) were evaluated by means of three-way mixed model ANOVA with TIME (baseline, post 0 and post 25) as within-subject factor and LOCATION (M1 and Pcor) and MODALITY (cold perception, warm perception, cold pain and hot pain) as between-subjects factors, followed – in case of a significant interaction – by four separate two-way repeated-measures follow-up ANOVAs (TIME and LOCATION) for each studied modality. Again, when a significant interaction was found, two separate one-way repeated measures follow-up ANOVAs (TIME) were performed for each LOCATION. When significant main effects or interactions were found, Tukey's honest significant difference test was used for post-hoc comparisons. In the experiment 2, we evaluated the effects of 5 minutes anodal tDCS, cathodal tDCS and sham stimulation when two different intensities were used (1 and 2 mA). To achieve this, we performed two separate three-way mixed model ANOVAs for 1 mA and 2 mA stimulations with MODALITY (warm perception, cold perception, heat pain and cold pain) and STIMULATION (anodal, cathodal and sham) as between-subjects factors and TIME (baseline, post 0 and post 25) as a within-subjects factor, followed – in case of a significant interaction – by four separate two-way repeated-measures follow-up ANOVAs (TIME and STIMULATION) for each studied modality. Again, when a significant interaction was found, other three separate one-way repeated measures follow-up ANOVAs (TIME) were performed for each STIMULATION. When main effects or interactions were found, Tukey's honest significant difference test was used for post-hoc comparisons.\n\nBODY.RESULTS:\nAs previously reported by other authors using QST17, 24 baseline cold threshold was lower than warm threshold (0.68 ± 0.35 °C vs 0.92 ± 0.49 °C, p = 0.008 unpaired t test) (Fig. 2). Baseline cold pain threshold was higher than heat pain threshold (21.1 ± 5.2 °C vs 14.4 ± 2.1 °C, p < 0.001 unpaired t test) (Fig. 2).Figure 2Temperature perception thresholds and pain threshold at baseline condition. Thresholds were expressed as the difference of the obtained value with respect to the starting temperature of 32 °C.\n\n\nBODY.RESULTS.EXPERIMENT 1:\nAt baseline, cold perception threshold, warm perception threshold, cold pain threshold and heat pain threshold did not differ between the 2 experimental sessions (paired t-tests, p > 0.5). 2 mA anodal tDCS, when applied for 15 min over the motor cortex, induced a change only in cold perception threshold but not in the other studied parameters (one-way follow-up ANOVA for cold perception: F(2,18) = 3.68; p = 0.046). In detail, 15 minutes 2 mA anodal tDCS induced a significant increase in cold perception thresholds immediately after the end of the stimulation (p = 0.039) (see Fig. 3).Figure 3Temperature perception thresholds and pain thresholds for warm (hot) and cold perception (Anodal tDCS 15 min 2 mA over motor and sensory cortex). Error bars are standard deviations. 
*p < 0.05.\n Moreover, 2 mA anodal tDCS when applied over the Pcor for 15 minutes induced no significant changes in all the studied parameters (one-way follow-up ANOVA for cold perception: F(2,18) = 3,33; p = 0.058), we only observed a tendency to increase the cold perception (Baseline vs Post 25; p = 0.058).\n\nBODY.RESULTS.EXPERIMENT 2:\nAt baseline, cold perception threshold, warm perception threshold, cold pain threshold and heat pain threshold did not differ between the 5 experimental sessions (paired t-tests, p > 0.5). Cold perception threshold was significantly modulated by tDCS (one-way follow-up ANOVA for cold perception: F (2,20) = 4,60; p = 0.022). Particularly, when 1 mA was used, cathodal tDCS was able to increase cold perception threshold at 25 minutes after the end of the stimulation (p = 0.018) (see Fig. 4).Figure 4Temperature perception thresholds for warm and cold perception (anodal, cathodal and sham stimulation, tDCS 5 min 1 and 2 mA over motor cortex). Error bars are standard deviations. *p < 0.05.\n Anodal and sham 1 mA tDCS were able to change none of the studied parameters, and cathodal 1 mA tDCS was not able to change warm perception threshold, cold pain threshold and heat pain threshold. When the intensity of tDCS was set at 2 mA, the only modality of stimulation able to increase cold perception threshold was anodal tDCS (one-way follow-up ANOVA for cold perception: F (2,20) = 7,46; p = 0.004)). Particularly, cold perception threshold was significantly increased at 25 minutes after the end of the stimulation (p = 0.003) (Fig. 5 left upper panel).Figure 5Temperature pain thresholds for hot and cold perception (anodal, cathodal and sham stimulation, tDCS 5 min 1 and 2 mA over motor cortex). Error bars are standard deviations. *p < 0.05.\n\n\nBODY.DISCUSSION:\nThe present study shows that motor cortex 2 mA-anodal tDCS, applied for both 5 min and 15 min, increased cold perception thresholds. Moreover the absence of effects when stimulating a non-motor cortex such as Pcor were demonstrated by applying 2 mA-anodal tDCS for 15 min. Hitherto, a single session of 15 min of anodal tDCS at 2 mA delivered over the motor or parietal cortex was unable to modulate pain perceptions. Moreover, when anodal tDCS was delivered with a smaller intensity (1 mA) and for 5 minutes, no effects on temperature and pain perceptions were found. As far cold perception concerns, our findings are in line with previous studies. Indeed, Borckardt and co-workers, using a high definition tDCS montage, found that 20 min tDCS delivered at an intensity of 2 mA modified heat and cold sensory perception when anodal stimulation was used while no effects were found in heat pain threshold and just a small effect on cold pain threshold14. More in details, they reported that after stimulation lower temperatures were required to detect a change induced by cold stimuli. Furthermore, higher temperatures were required to detect a change induced by warm stimuli. On the other hand, we did not find any effects on the warm perception and cold and hot pain perception. Moreover another study described that 2 mA anodal stimulation of the primary motor area can be utilized to alleviate cold pain perception17. However it should be considered that these groups found some effects on pain and this discrepancy respect our data could be due to the use of HD-tDCS14 that is characterized by a different spatial profile of induced brain current flow. On the other hand, Zandieh et al. 
(2012) evaluated the effects of 2 mA tDCS over cold pain threshold by using cold pressor test, so that the different evaluation methodology could have accounted for the slight different results17. In addition, it should be considered that the effects induced by tDCS are considerably affected by the presence of a large inter-subject variability as it was shown by similar studies using other forms of non-invasive brain stimulation34. In this light, the differences between our results and those from other groups could be also influenced by this large intersubjects variability and by the relatively little size of the studied cohorts. Our data also suggest that anodal tDCS increases cold perception threshold by a dose-dependent mechanism in which intensity bigger than 1 mA is required. Furthermore we also observed that 5 minutes 1mA-cathodal tDCS applied over M1 significantly increased cold perception threshold, while no effects were found when using sham stimulation. This result was similar to the effects of 2mA-anodal tDCS. It should be noted that we found positive results only when tDCS was applied over M1 and not over Pcor. Since our main results consist in the modulation of not painful perception (i.e. cold perception), it could be expected that Pcor location was the most effective site of application. However Craig et al. found that the application of innocuous cold stimuli activates insula and not primary somatosensory cortex. Nevertheless, painful heat and cold stimuli activated the contralateral anterior cingulate cortex, contralateral primary motor, primary sensory cortex (S1), bilateral secondary sensory cortex, midinsular cortex, thalamus, and cerebellum35. Furthermore, functional neuroimaging also disclosed remote and widespread effects of tDCS and rTMS when focally applied to the primary motor or premotor cortex through fibres that project to remote cortical or subcortical structures involved in cognitive-emotional or discriminative aspects of sensorial experience and pain3, 36: thus it is not surprising that the stimulation of M1 is crucial in the modulation of thermal perception thresholds. It is important to keep in mind that the tDCS we used cannot be considered \"strictly\" focal. Indeed, due to the large size of the tDCS electrodes (35 cm2) and the montage we used, it cannot be ruled out that when tDCS electrodes are placed over M1, the tDCS stimulation can affect also the parietal cortex (and vice versa). Indeed, a non focal effect of tDCS over M1 may spread to Pcor. This seems not to be the case, at least using the polarity (anodal) and the parameters we used here, as tDCS delivered over Pcor is less effective in modulating temperature sensory perception than M1 stimulation. Hitherto, we found a tendency to increase in the cold perception threshold after 2 mA anodal tDCS over Pcor (similar to the effects of M1 stimulation) that can be explained (may be not exclusively) by a spreading of the currents towards the M1. Another interesting point from our results is that also 5 min cathodal stimulation induced an increase in cold perception threshold. Other groups found similar results, for example Bachmann et al.15 reported that cathodal stimulation of the primary motor area reduced sensitivity of A-fibers to somatosensory input (cold detection thresholds)15. Also, effects of the stimulation of the left S1 on thermal perception were found, with cathodal tDCS increasing cold detection thresholds in both hands and warm detection thresholds only in the contralateral hand16. 
In both cases tDCS was delivered at 1 mA intensity and cathodal stimulation was the most efficient when compared with anodal or sham stimulation. Furthermore it should be considered that the net effect of motor cortex stimulation may be a mixed effect, as cathodal tDCS may also exert a \"facilitatory effect\" by deactivating inhibitory interneurons37, thus hypothetically explaining the similar effects induced by cathodal and anodal tDCS in the present study. We found no effect over pain thresholds. Ihle et al.38 developed a study using functional imaging to explore the underpinnings of the previously suggested antinociceptive effects of tDCS over the motor cortex38. They found that neither cathodal nor anodal tDCS over the left M1 (1 mA, 15 minutes) significantly changed cortical nociceptive processing as a response to a heat pain paradigm when compared with sham stimulation. Only when contrasting the interaction between responses to anodal and cathodal stimulation were substantial polarity-specific differences in regional brain activation found after painful stimulation: anodal stimulation induced a decrease of regional Cerebral Blood Flow (rCBF), whereas cathodal stimulation resulted in an increase of rCBF in the hypothalamus, inferior parietal cortex, inferior parietal lobule, anterior insula, and precentral gyrus contralateral to the stimulation site38. In sum, here we suggest the importance of evaluating the most efficient combination of intensity/polarity and site of stimulation for better results, since to the best of our knowledge this is the only work evaluating different intensities, polarities and cortical targets in the same study. As far as parietal cortex stimulation is concerned, we found only mild effects. We cannot exclude that longer stimulation sessions or repeated sessions may have an effect on temperature perception. More studies should be done to better clarify these considerations. We are aware that the findings obtained from healthy subjects have to be transferred to patients with caution. On the other hand, the effects of neuromodulation techniques in physiological conditions need to be understood before considering the multiple mechanisms that can condition the clinical response of patients suffering from pain and sensory disturbances. We can conclude that tDCS delivered over the motor cortex can modify the temperature perception and that these effects are polarity and intensity dependent. As stimulation intensity seems critical to determine the effects, we suggest that for clinical application strong anodal tDCS (>1 mA) or weak cathodal tDCS (<2 mA) should be used for pain control (in an attempt to reduce inconsistencies).\n\n**Question:** Compared to baseline what was the result of 2 mA anodal tDCS, when applied for 15 min over the motor cortex on cold perception thresholds?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increase
|
946
|
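The warm/cold threshold procedure described in the QST methods of the record above (the "method of levels": a 3 °C starting stimulus, 1 °C steps down until the stimulus is not felt, then 0.3 °C fine-search steps whose direction follows the response, stopping after four negative responses) is essentially a staircase algorithm. The following Python sketch only illustrates that logic under stated assumptions: it is not the MEDOC analyzer's actual implementation, the `subject_feels` callback is a hypothetical stand-in for a participant, catch (null) trials are omitted, and the sketch assumes the initial 3 °C stimulus is perceived.

```python
def method_of_levels(subject_feels, initial_step=3.0, coarse_step=1.0, fine_step=0.3):
    """Estimate a detection threshold as a difference (degrees C) from the 32 C baseline."""
    magnitude = initial_step

    # Coarse phase: reduce the stimulus in 1 C steps until it is no longer felt.
    while magnitude > 0 and subject_feels(magnitude):
        magnitude = max(magnitude - coarse_step, 0.0)

    # Fine phase: 0.3 C steps, stepping up after a "not felt" response and down
    # after a "felt" response; stop once four negative responses have accrued.
    fine_magnitudes = []
    negatives = 0
    felt = False  # the coarse phase ends on a negative response
    while negatives < 4:
        magnitude = magnitude + fine_step if not felt else max(magnitude - fine_step, 0.0)
        felt = subject_feels(magnitude)
        fine_magnitudes.append(magnitude)
        if not felt:
            negatives += 1

    # The paper takes the mean of all fine-search results as the threshold.
    return sum(fine_magnitudes) / len(fine_magnitudes)

# Toy check with a simulated subject whose true threshold is 0.7 C from baseline.
print(method_of_levels(lambda m: m >= 0.7))
```

Written this way, it is also easy to see why the authors note the method is not influenced by reaction time: the response is collected after the stimulus ends, so timing never enters the estimate.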
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Custodiol - N versus Custodiol: a prospective randomized double blind multicenter phase III Trial\n\n ABSTRACT:\n\n\nBODY.BACKGROUND/INTRODUCTION:\nHTK-Solution (Custodiol) is a well-established cardioplegic and organ preservation solution. We currently developed a novel HTK based solution Custodiol-N which includes iron chelators to reduce oxidative injury as well as L-arginine, to improve endothelial function.\n\nBODY.AIMS/OBJECTIVES:\nIn the present first-in-human study, Custodiol-N was compared with Custodiol in patients undergoing elective coronary bypass surgery.\n\nBODY.METHOD:\nThe study was designed as prospective randomized double blind non-inferiority trial. Primary end-point was area under the curve (AUC) of creatine kinase MB (CKMB) within the first 24 hours after surgery. Secondary endpoints included, peak CKMB and troponin-T and AUC of troponin-T release, cardiac index, cumulative catecholamine dose, ICU-stay and mortality. All values are given as mean ± SD, p < 0.05 was considered as statistically significant.\n\nBODY.RESULTS:\nEarly termination of the trial was performed per protocol as the primary non-inferiority end point was reached after inclusion of 101 patients. Patient characteristics, medical history, operation and cross-clamp times did not differ between the groups. CKMB AUC (878 ± 549 vs. 778 ± 439 h*U/l, non-inferiority p < 0.001) and Troponin-T AUC (12990 ± 8347 vs. 13498 ± 6513 h*pg/ml, non-inferiority p < 0.001) was similar in both groups. Although the trial was designed for non-inferiority, peak CKMB (52 ± 40 vs. 41 ± 30 U/l, superiority p < 0.002) was significantly lower in the Custodiol-N group. Cardiac index, catecholamines ICU-stay and mortality (1 death in the control group) was similar in both groups.\n\nBODY.DISCUSSION/CONCLUSION:\nThis study shows that Custodiol-N is safe and provides similar cardiac protection as the established HTK-Custodiol solution. The significantly reduced peak CKMB levels in the Custodiol-N group may implicate a beneficial effect on ischemia/reperfusion injury in the setting of coronary bypass surgery.\n\n**Question:** Compared to Custodiol what was the result of Custodiol-N on Cardiac index, catecholamines ICU-stay and mortality (1 death in the control group)?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
947
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Custodiol - N versus Custodiol: a prospective randomized double blind multicenter phase III Trial\n\n ABSTRACT:\n\n\nBODY.BACKGROUND/INTRODUCTION:\nHTK-Solution (Custodiol) is a well-established cardioplegic and organ preservation solution. We currently developed a novel HTK based solution Custodiol-N which includes iron chelators to reduce oxidative injury as well as L-arginine, to improve endothelial function.\n\nBODY.AIMS/OBJECTIVES:\nIn the present first-in-human study, Custodiol-N was compared with Custodiol in patients undergoing elective coronary bypass surgery.\n\nBODY.METHOD:\nThe study was designed as prospective randomized double blind non-inferiority trial. Primary end-point was area under the curve (AUC) of creatine kinase MB (CKMB) within the first 24 hours after surgery. Secondary endpoints included, peak CKMB and troponin-T and AUC of troponin-T release, cardiac index, cumulative catecholamine dose, ICU-stay and mortality. All values are given as mean ± SD, p < 0.05 was considered as statistically significant.\n\nBODY.RESULTS:\nEarly termination of the trial was performed per protocol as the primary non-inferiority end point was reached after inclusion of 101 patients. Patient characteristics, medical history, operation and cross-clamp times did not differ between the groups. CKMB AUC (878 ± 549 vs. 778 ± 439 h*U/l, non-inferiority p < 0.001) and Troponin-T AUC (12990 ± 8347 vs. 13498 ± 6513 h*pg/ml, non-inferiority p < 0.001) was similar in both groups. Although the trial was designed for non-inferiority, peak CKMB (52 ± 40 vs. 41 ± 30 U/l, superiority p < 0.002) was significantly lower in the Custodiol-N group. Cardiac index, catecholamines ICU-stay and mortality (1 death in the control group) was similar in both groups.\n\nBODY.DISCUSSION/CONCLUSION:\nThis study shows that Custodiol-N is safe and provides similar cardiac protection as the established HTK-Custodiol solution. The significantly reduced peak CKMB levels in the Custodiol-N group may implicate a beneficial effect on ischemia/reperfusion injury in the setting of coronary bypass surgery.\n\n**Question:** Compared to Custodiol what was the result of Custodiol-N on peak CKMB?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decrease
|
949
|
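Each record in this dump instructs the model to output exactly one of three strings ("significantly increase", "no significant difference", "significantly decrease"), so a small validation pass can flag rows whose stored completion or Label drifts from that set (for example, past-tense variants). The sketch below assumes the completion cell parses to a list of chat messages and that the label is available as a plain string; those field names and the parsing step are inferred from this dump, not from a documented schema.

```python
ALLOWED = {"significantly increase", "no significant difference", "significantly decrease"}

def check_record(completion_messages, label):
    """Return a list of problems found in one prompt/completion/label record."""
    problems = []
    # The stored completion is expected to be a single assistant message of the
    # form "**Answer:** <option>".
    answer = completion_messages[-1]["content"].replace("**Answer:**", "").strip()
    if answer not in ALLOWED:
        problems.append(f"completion answer {answer!r} is not one of the allowed options")
    if label not in ALLOWED:
        problems.append(f"label {label!r} is not one of the allowed options")
    if not problems and answer != label:
        problems.append("completion answer and label disagree")
    return problems

# Hypothetical record mirroring the entries above.
completion = [{"role": "assistant", "content": "**Answer:** significantly decrease\n"}]
print(check_record(completion, "significantly decrease"))   # -> []
print(check_record(completion, "significantly decreased"))  # -> flags the past-tense label
```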
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Continued stabilization of trabecular metal tibial monoblock total knee arthroplasty components at 5 years—measured with radiostereometric analysis\n\n ABSTRACT.BACKGROUND AND PURPOSE:\n The trabecular metal tibial monoblock component (TM) is a relatively new option available for total knee arthroplasty. We have previously reported a large degree of early migration of the trabecular metal component in a subset of patients. These implants all appeared to stabilize at 2 years. We now present 5-year RSA results of the TM and compare them with those of the NexGen Option Stemmed cemented tibial component (Zimmer, Warsaw IN).\n\nABSTRACT.PATIENTS AND METHODS:\n 70 patients with osteoarthritis were randomized to receive either the TM implant or the cemented component. RSA examination was done postoperatively and at 6 months, 1 year, 2 years, and 5 years. RSA outcomes were translations, rotations, and maximum total point motion (MTPM) of the components. MTPM values were used to classify implants as \"at risk\" or \"stable\".\n\nABSTRACT.RESULTS:\n At the 5-year follow-up, 45 patients were available for analysis. There were 27 in the TM group and 18 in the cemented group. MTPM values were similar in the 2 groups (p = 0.9). The TM components had significantly greater subsidence than the cemented components (p = 0.001). The proportion of \"at risk\" components at 5 years was 2 of 18 in the cemented group and 0 of 27 in the TM group (p = 0.2).\n\nABSTRACT.INTERPRETATION:\n In the previous 2-year report, we expressed our uncertainty concerning the long-term stability of the TM implant due to the high initial migration seen in some cases. Here, we report stability of this implant up to 5 years in all cases. The implant appears to achieve solid fixation despite high levels of migration initially.\n\nBODY:\nSwedish Knee Arthoplasty Register data suggest that uncemented tibial components have worse survivorship than cemented components (Knutson and Robertsson 2010). However, more recent data from Australia have suggested that modern uncemented designs have a similar cumulative revision rate to that of cemented components (Graves et al. 2004). With recent improvements in biomaterials, the potential for improved longevity using uncemented implants is once again being explored. Trabecular metal (Zimmer, Warsaw, IN) is a new material available for use in uncemented total knee arthroplasty. It is a porous biomaterial with morphology and mechanical properties resembling those of trabecular bone (Bobyn et al. 1999a,b, Zardiackas et al. 2001, Levine et al. 2006, Balla et al. 2010). In a previous paper, we presented the 2-year implant migration results of the trabecular metal tibial monoblock component using radiostereometric analysis (RSA) (Dunbar et al. 2009). 
The results of that study showed high initial migration in 9 out of 28 of the tibial components, with apparent stabilization occurring after 1 year in all cases. These results reflected the results of a similar RSA study on the same implant, performed at another center (Henricson et al. 2008). In addition, in the 2-year report we also documented apparent deformation of the tibial base plate, which occurred in 5 of the cases of high migration (Dunbar et al. 2009). The implications of the high degree of migration and plate deformation for long-term survival are not known. In a subset of the patients enrolled in the study, we also performed RSA for measurement of inducible displacement between 2 and 4 years. The results of this study showed that the trabecular metal component had the lowest inducible displacement ever reported in the literature, indicating excellent stability (Wilson et al. 2010). The results of these 3 studies supported the hypothesis that these components were achieving adequate bone in-growth for long-term survival (Henricson et al. 2008, Dunbar et al. 2009, Wilson et al. 2010). However, longer follow-up is necessary to determine whether the early stability of these implants is durable. In this paper, we present the 5-year longitudinal RSA results from the original cohort of patients randomized to receive either the Nexgen LPS monoblock (trabecular metal) tibial component (Zimmer) or the cemented NexGen Option Stemmed tibial component (Zimmer) (Dunbar et al. 2009).\n\nBODY.PATIENTS AND METHODS:\n70 subjects with severe osteoarthritis were randomized to receive either the Nexgen LPS monoblock (trabecular metal) tibial component (Zimmer, Warsaw, IN) or the cemented NexGen Option Stemmed tibial component (also Zimmer). Surgery was performed by 4 experienced consultant knee surgeons using a standardized protocol: posterior cruciate ligament resection, patellar resurfacing with a cemented inlay component, cementing of the femoral component, and RSA marker placement of 0.8mm beads. 4–6 tantalum markers were placed around the periphery of the polyethylene component; a median of 10 tantalum markers were inserted into the proximal tibia. The postoperative protocol was standardized with the use of continuous passive motion as tolerated, and patients were allowed full weight bearing immediately. No drains were used. At the 5-year postoperative time point, patients were contacted and asked if they would consent to continued participation in the study. Written informed consent was obtained from patients who had verbally agreed to participate, when they arrived for their 5-year follow-up RSA examination. This was in accordance with requirements of our Institutional Ethics Review Board. The knee was placed above a uniplanar calibration box (Medis Specials, Leiden, the Netherlands) and simultaneous digital stereo radiographs were taken with the X-ray tubes oriented obliquely. Western Ontario and McMaster University Osteoarthritis Index (WOMAC) scores were obtained at 5 years. Full details of the original study design can be found in the previously published article (Dunbar et al. 2009). The original 2-year study was registered at ClinicalTrials.gov (NCT00405379).\n\nBODY.PATIENTS AND METHODS.RADIOSTEREOMETRIC ANALYSIS:\nRSA data analysis was performed using commercial software (MB-RSA; MEDIS, Leiden, the Netherlands). All translations and rotations were calculated to comply with the standards presented by Valstar et al. (2005). 
Rigid-body translations and rotations of the prosthesis were calculated about a coordinate system centered at the volumetric center of the implant, with axes aligned with the anatomical directions. The maximum total point motion of the tibial component for each case was calculated using fictive markers to standardize the calculations in cases where the prosthesis bead placement was not uniform for all subjects (Nilsson and Karrholm 1993, Nilsson et al. 1999, Valstar et al. 2005). The limit of rigid body fitting was a maximum of 0.2 mm for the tibial segment and 0.2 mm for the prosthesis segment. The condition number did not exceed 40 at any follow-up, indicating adequate distribution of beads in the rigid body (Valstar et al. 2005). The accuracy of the RSA system was assessed with a standard phantom study protocol. Accuracy was represented as half of the average width of the 95% prediction interval in a regression analysis of true and measured translations of a phantom. Precision was evaluated with double examination analysis, and represented as the 95% confidence interval of the measurements from 11 double clinical examinations performed at the postoperative follow-up. The marker configuration model-based RSA technique (Kaptein et al. 2005) was used to solve the occluded marker problem in cases where less than 3 beads were matched in an examination at follow-up. Marker configuration models were also used to test for bead loosening and possible deformation of the trabecular metal base plate. At the 2-year follow-up, implants were classified as being \"stable\" (< 0.2 mm MTPM between 1 and 2 years' follow-up) or as being \"at risk\" of early aseptic loosening (> 0.2 mm MTPM between 1 and 2 years' follow-up) (Ryd et al. 1995). Implants categorized as \"at risk\" at 2 years were re-evaluated, and if found to have less then 0.3 mm of motion between 2 and 5 years, they were considered to have stabilized. Implants considered to be \"at risk\" at 2 years with greater than 0.3 mm of motion between 2 and 5 years were still considered to be \"at risk\". Implants classified as being \"stable\" at 2 years but which showed more than 0.3 mm of motion between 2 and 5 years were classified as being \"at risk\". 0.3 mm was chosen as the threshold for migration over 3 years by extrapolating the 0.2 mm between 2-year follow-up periods, originally described as the modified continuous migration (MCM) criteria by Ryd et al. (1995).\n\nBODY.PATIENTS AND METHODS.STATISTICS:\nAnalysis of variance was used to test for differences in age, body mass index (BMI), and subjective measures between implant groups. Maximum total point motion measurements were not normally distributed; therefore, the Kruskal-Wallis test was used to investigate differences in maximum total point motion between implant groups at 5 years. Analysis of variance was used to test for differences in translations and rotations between groups at 5 years. Fisher's exact test was used to investigate differences in proportions of implants found to be \"at risk\" between groups, and Wilson's procedure was used to calculate the 95% confidence intervals of this proportion. There were 8 primary outcome metrics (maximum total point motion, 3 translations, 3 rotations, and proportion of implants at risk). A Bonferroni correction for multiple comparisons was used, giving a significance level of p < 0.006.\n\nBODY.RESULTS.STANDARD OUTCOMES AND FOLLOW-UP:\nAll standard outcome measures for the 45 patients in this study are given in Table 1. 
Originally, 37 patients were randomized to receive the trabecular metal component and 33 received the cemented component. Age and BMI were similar in the 2 groups at the time of admission. There were no differences in subjective measures (WOMAC) between implant groups at any follow-up point. Table 1. Subject demographics and WOMAC scores. Only matched WOMAC data from earlier follow-ups are included. Values are mean (SD) Trabecular metal Cemented p-value (n = 27) (n = 18) (ANOVA) Sex, M/F 10/17 8/10 Age 60 (8) 61 (9) 0.7 Body mass index 32 (5) 34 (5) 0.2 WOMAC preoperatively 51 (20) 44 (18) 0.2 WOMAC at 6 months 22 (23) 18 (20) 0.5 WOMAC at 12 months 15 (16) 14 (22) 0.9 WOMAC at 24 months 13 (16) 15 (14) 0.6 WOMAC at 60 months 18 (18) 19 (18) 0.8 For the 5-year follow-up, 45 patients could be reached and were willing to participate. There were 27/28 of the trabecular metal subjects and 18/21 cemented subjects with 2-year data recruited. The reasons for loss to follow-up for the other 25 patients are documented in Figure 1 and it happened mainly in the first two years of follow-up. Of the patients with a high degree of prosthesis migration in the original study, all 9 were recruited to the 5-year follow-up. All but 3 of the patients originally recruited to the study are still being followed by the operative surgeon, and there have been no revisions to date. The three patients who were exceptions had left the province. Figure 1.Consolidated Standards of Reporting Trials (CONSORT) flow diagram.\n\nBODY.RESULTS.RSA RESULTS:\nThe accuracy of the RSA system was 0.025 mm, 0.025 mm, 0.063 mm, and 0.026 mm for the x-, y-, and z- directions and maximum total point motion, respectively. The precision of the RSA system was 0.07 mm, 0.07 mm, and 0.11 mm for x-, y-, and z- translations, 0.16°, 0.15°, and 0.12° for x-, y-, and z- rotations, and 0.10 mm for maximum total point motion. At the 2-year follow-up, there were no subjects \"at risk\" in the trabecular metal group (0/28). At the 5-year follow-up, none of the trabecular metal components were found to be \"at risk\". At the 2-year follow-up, there were 4 subjects \"at risk\" in the cemented group (4/21). At the 5-year follow-up, 1 of the cemented components \"at risk\" was lost to follow-up, 2 stabilized (0.09 mm and 0.07 mm of movement between 2 and 5 years) and 1 continued to migrate (0.35 mm of movement between 2 and 5 years). 1 of the cemented implants found to be stable at 2 years migrated an additional 0.42 mm and was classified as being \"at risk\". However, retrospectively, this implant had somewhat anomalous 2-year data (0.64 mm, 0.45 mm, and 0.87 mm at 1, 2, and 5 years), where the difference between 1 year and 5 years would make the implant appear stable (0.23 mm in 4 years) but the difference between 2 years and 5 years would make the implant appear to be continuously migrating (0.42 mm in 3 years). The condition number and mean errors for this subject were within normal limits. All other implants that were found to be stable at 2 years migrated less than 0.3 mm between the 2- and 5-year follow-up, and were still classified as stable. The proportion of \"at risk\" components at 5 years was 2 of 18 (0.11, 95% CI: 0.03–0.33) in the cemented group and was 0 of 27 (95% CI: 0.0–0.13) in the trabecular metal group (p = 0.2). At the 5-year follow-up, there was no difference in MTPM between the cemented group and the trabecular metal group (p = 0.9) (Table 2 and Figure 2). MTPM values for individual subjects are shown in Figure 3. 
Compared to the cemented components, the trabecular metal tibial components had more subsidence (p = 0.001) (Table 2). Table 2. Summary of radiostereometric analysis (RSA) data by group and follow-up 6-month follow-up 12-month follow-up 24-month follow-up 60-month follow-up p-value a TM Cemented TM Cemented TM Cemented TM Cemented (n=30) (n=24) (n=29) (n=23) (n=28) (n=21) (n=27) (n=18) Mean maximum total point motion (range) 0.85 (0.07–3.11) 0.54 (0.12–1.99) 0.87 (0.12–3.25) 0.57 (0.16–1.87) 0.92 (0.06–3.40) 0.65 (0.18–2.12) 0.98 (0.16–3.11) 0.79 (0.06–2.16) 0.9 Lateral/medial translation (SD) –0.02 (0.13) 0.01 (0.14) –0.02(0.13) –0.00(0.14) –0.01 (0.14) 0.02 (0.16) –0.02 (0.16) 0.01 (0.26) 0.7 Superior/inferior translation (SD) –0.34 (0.31) 0.00 (0.08) –0.34 (0.34) –0.01 (0.11) –0.35 (0.35) –0.03 (0.10) –0.38 (0.37) –0.04 (0.11) 0.001 Anterior/posterior translation (SD) 0.02 (0.12) 0.02 (0.29) 0.03 (0.13) 0.06 (0.18) 0.02 (0.12) 0.09 (0.18) 0.01 (0.12) 0.08 (0.29) 0.3 Anterior/posterior tilt (SD) –0.33 (0.73) 0.02 (0.43) –0.40 (0.80) –0.02 (0.30) –0.39 (0.80) –0.05 (0.40) –0.45 (0.90) –0.11 (0.26) 0.1 Internal/external rotation (SD) 0.19 (0.38) –0.15 (0.47) 0.15 (0.39) –0.14 (0.44) 0.19 (0.45) –0.08 (0.60) 0.17 (0.45) 0.04 (0.82) 0.5 Varus/valgus tilt (SD) –0.05(0.72) 0.07 (0.32) –0.11 (0.73) 0.06 (0.32) –0.10 (0.75) 0.06 (0.33) –0.12 (0.73) –0.02 (0.20) 0.6 \n a 5-year results only Figure 2.Maximum total point motion (MTPM) (mean and standard deviation) according to the duration of follow-up in the trabecular metal and cemented groups. Lines joining data points do not indicate continuous measurement but are included to show the pattern of migration. Figure 3.Individual maximum total point motion (MTPM) according to the duration of follow-up in the trabecular metal group (blue) and the cemented group (red). Lines joining data points do not indicate continuous measurement but are included to show the pattern of migration. At the 2-year follow-up, 9 subjects in the trabecular metal group had shown very high migration in the first 6 months postoperatively (> 1.0 mm). All of these implants migrated less than 0.2 mm between 1 and 2 years and were not considered to be \"at risk\" At 5 years, all of these implants continued to show stability with an average change in MTPM of 0.10 mm over the 3 years between follow-ups. Using the marker configuration models, independent bead motion was detected in 5 of the high-migration cases, between the postoperative and 6-month follow-up examinations. The error of fit of the model marker was reduced dramatically (0.103 mm to 0.0466 mm on average) when the bead in question was removed from the analysis. The position of the bead with respect to that of the bead in the model was inferior, indicating that the motion was not due to loose beads, which would have traveled radially. The magnitude of independent bead motion was from 0.3 mm to 0.5 mm in all 5 cases. At the 5-year follow-up, there was no change in the position of the independently moving bead—indicating that the deformation of the base plate was permanent.\n\nBODY.DISCUSSION:\nIn the original 2-year report on these patients, we expressed our uncertainty concerning the long-term stability of the trabecular metal tibial implant due to the high initial migration seen in some cases. Subsequent to this, we have found stability of this implant up to 5 years and migration below the level of detection of our system in all cases. 
This implant appears to achieve solid fixation despite high initial levels of migration. The one cemented implant that was originally classified as being stable at 2 years but which appeared to be continuously migrating between 2 and 5 years was an unusual finding. We believe that due to random error, the MTPM of this subject was underestimated at 2 years, leading to a spurious finding of continuous migration between 2 and 5 years. However, this patient will continue to be monitored closely in the clinic. The mechanism that results in high early migration and subsequent stabilization is unknown. Previous work (Bellemans 1999) suggested that uncemented components can migrate extensively early after implantation and still form a robust interface with bone. However, migrations of such high magnitude (> 2.5 mm) are more typically associated with continuous migration (Grewal et al. 1992). In our work, all of the initially high migration stabilized by 1 year, suggesting that long-term stability can be achieved even in cases of quite extensive migration. Furthermore, our previous inducible displacement study on this patient group included subjects with a high degree of early migration, and all showed a low degree of inducible displacement (Wilson et al. 2010). Low inducible displacement suggests that bone ingrowth has occurred, and is inconsistent with fixation characterized by fibrous encapsulation. One possible explanation for this novel behavior may be related to the mechanical properties of the trabecular metal material. Due to its low elastic modulus, the trabecular metal monoblock component may deform, allowing areas of the implant that were well fixed initially to remain stable and achieve ingrowth of bone instead of lifting off as the contralateral side sinks into the bone. In support of this, we did measure permanent deformation of the trabecular metal tibial base plate in some of the high-migration cases (Dunbar et al. 2009). The most important limitation of this study was the small sample size. With only 37 patients randomized to the trabecular metal arm of the study, and of these only 27 being willing to participate in the 5-year follow-up, our ability to draw firm conclusions about the risk of revision in patients with this implant is somewhat limited. Another limitation is the modular nature of the cemented components. As the polyethylene was the part of the implant marked with RSA beads, any motion of the polyethylene component with respect to the tibial base plate would be detected as migration of the implant. It is possible that this contributed to the migration detected in some of the cemented cases. However, the monoblock construction of the trabecular metal implant makes motion between the polyethylene and base plate unlikely and less of a concern in that group.\n\n**Question:** Compared to cemented component. what was the result of The trabecular metal tibial monoblock component (TM) on maximum total point motion (MTPM)?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
964
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Clinical evaluation of the AutoPulse automated chest compression device for out-of-hospital cardiac arrest in the northern district of Shanghai, China\n\n ABSTRACT.INTRODUCTION:\nWhether the AutoPulse automated chest compression device is worthy of clinical use for out-of-hospital cardiac arrest (OHCA) remains controversial. A prospective controlled study was conducted to evaluate the effect of AutoPulse versus manual chest compression for cardiopulmonary resuscitation (CPR) of OHCA patients in the northern district of Shanghai, China.\n\nABSTRACT.MATERIAL AND METHODS:\nA total of 133 patients with OHCA who were treated at the Emergency Medical Center of the Tenth People's Hospital Affiliated with Tongji University between March 2011 and March 2012 were included. The patients were randomly assigned to the Manual CPR (n = 64) and AutoPulse CPR groups (n = 69) in accordance with the approach of chest compression received. The primary outcome measure was return of spontaneous circulation (ROSC), and the secondary outcome measures included 24-h survival rate, hospital discharge rate, and neurological prognosis at hospital discharge.\n\nABSTRACT.RESULTS:\nThe ROSC rate of patients with OHCA was significantly higher in the AutoPulse CPR group than in the Manual CPR group (44.9% vs. 23.4%; p = 0.009). The 24-h survival rate of OHCA patients was significantly higher in the AutoPulse CPR group than in the Manual CPR group (39.1% vs. 21.9%; p = 0.03). The hospital discharge rate of the patients with OHCA was significantly higher in the AutoPulse CPR group than in the Manual CPR group (18.8% vs. 6.3%; p = 0.03). The proportion of patients with OHCA and a cerebral performance category score of 1 or 2 points at hospital discharge was higher in the AutoPulse CPR group than in the Manual CPR group, but the difference was not statistically significant (16.2% vs. 13.4%, p = 1.00).\n\nABSTRACT.CONCLUSIONS:\nUse of the AutoPulse increases CPR success and survival rates in patients with OHCA, but its ability to improve cerebral performance requires further evaluation.\n\nBODY.INTRODUCTION:\nHigh-quality cardiopulmonary resuscitation (CPR) is of great importance to cardiac and cerebral resuscitation [1–3]. However, chest compression often fails to meet relevant guidelines in terms of depth, frequency, and compression/relaxation time [4, 5]. Manual chest compression is not ideal for blood perfusion in most patients. Even when chest compression is performed by a highly trained individual, the generated cerebral blood flow is only 30–40% of the normal level and the cardiac blood supply is only 10–20% of the normal level [6]. Thus, finding a proper automated chest compression device to replace manual chest compression is necessary. The AutoPulse (ZOLL Circulation, Sunnyvale, CA, USA) is a chest compression device that was approved by the U.S. 
Food and Drug Administration for CPR in patients who suffer from cardiac arrest (CA) in 2001, and was approved in China by the China Food and Drug Administration in 2007. The AutoPulse is an electrical chest compression device that functions through the compression and relaxation of a load-distributing band connected to a hard board using an electric motor. The operation procedure of the AutoPulse was as follows: first, all clothing was removed from the patient, who was then placed on the hard board following the indicating line; next, both ends of the load-distributing band were fixed with a buckle to the front chest of the patient through the armpit; and finally, the device was started and the load-distributing band was automatically tightened to fit patients of different sizes. The default setting was as follows: compression strength, to reduce the patient's chest volume by 20%; and compression frequency, 80 times/min. The device was designed with continuous and 15:2 compression modes; in the latter case, compressions were performed 15 times and then paused for 3 s in each run, allowing for ventilation of the patient twice. In this research, the 15:2 compression mode was used. However, the compression frequency and two compression modes were fixed and could not be adjusted. Animal studies have shown that compared to ordinary CPR, AutoPulse-assisted CPR improved cardiac and cerebral hemodynamics as well as neurological prognosis [7, 8]. A human study showed that compared to manual chest compression, the AutoPulse significantly increased the coronary blood flow [9]. In a retrospective study, the return of the spontaneous circulation (ROSC) rate was significantly improved in patients in the AutoPulse CPR group compared to those in the Manual CPR group [10]. A large-scale multi-center prospective study found that compared with patients in the Manual CPR group, those of the AutoPulse CPR group had significantly higher rates of ROSC, hospital admission, and hospital discharge [11]. Earlier research indicated that the AutoPulse was a good choice for clinical use in patients after CA. However, a large-scale multi-center prospective controlled study found that compared with patients of the Manual CPR group, those of the AutoPulse CPR group had a decreased 4-h survival rate (28.5% vs. 29.5%; p = 0.74) and hospital discharge rate (5.8% vs. 9.9%; p = 0.6) with significantly poor neurological prognosis [12]. Because of the contradictory results from similar studies, whether the AutoPulse is worthy of clinical use in CPR for patients after CA requires more comprehensive evaluation. On the basis of the above-mentioned studies and Lerner's experience [13], we enhanced the control over the emergency treatment procedure and the professional level of emergency care personnel to minimize various interferences during the research. A prospective controlled study was designed to comprehensively evaluate the effect of the AutoPulse versus manual CPR for OHCA in the northern district of Shanghai, China. The results will provide valuable clinical evidence for the use of the AutoPulse to provide CPR for patients after OHCA.\n\nBODY.MATERIAL AND METHODS.STUDY DESIGN:\nThe study was designed as a single-center, prospective, randomized, controlled clinical trial. This clinical trial was reviewed and approved by the ethics committee of the Tenth People's Hospital in Shanghai. 
The inclusion criterion was: patients with OHCA who were admitted to the Emergency Medical Center of our hospital between March 2011 and March 2012. The exclusion criteria were: pregnant females, trauma patients, patients with advanced cancer, or patients aged < 14 years or > 90 years old (Figure 1). Figure 1Flow of participants in trial There are 10 ambulances in our Emergency Medical Center, which were encoded from number one to number ten optionally, then five numbers were randomly produced by a computer. The ambulances with the five numbers were equipped with an AutoPulse. The ambulances with or without the device were used in turn to visit the patients. After pre-hospital emergency treatment, all patients were sent to the Emergency Department of the Tenth People's Hospital Affiliated with Tongji University, which has an intensive care unit (ICU) and the corresponding advanced life support technologies. The patients or their families were informed of this research and signed informed consent prior to emergency treatment or after admission to the emergency room. The follow-up was terminated for patients who refused to participate in this research. All emergency personnel were familiar with this device as well as the 2010 American Heart Association (AHA) CPR guidelines. The personnel were familiar with our study design and research procedure.\n\nBODY.MATERIAL AND METHODS.RESEARCH PROCEDURE:\nThe unified regimen was as follows: the emergency personnel arrived at the scene and connected the monitor to determine the cardiac rhythm. In the case of ventricular fibrillation or pulseless ventricular tachycardia, defibrillation was performed immediately. If stable ROSC was achieved after defibrillation, then the patient was excluded from the research. If the patient failed to establish stable ROSC or the initial cardiac rhythm was not ventricular fibrillation or pulseless ventricular tachycardia, then the use of the AutoPulse was determined by the presence of this device in the ambulance. In the absence of an AutoPulse, manual chest compression was performed immediately; in the presence of an AutoPulse, 1 person performed manual chest compression while another two persons prepared the device. Endotracheal intubation was conducted simultaneously with balloon-assisted ventilation consisting of 100% oxygen. When the AutoPulse was in place, the patient was placed on the hard board in a supine position. The load-distributing band was connected and fixed as described above. The device was then started to deliver 15 compressions followed by two balloon-assisted ventilations in each run. Patients who met the inclusion criterion were included in the study. For these patients, an end-tidal carbon-dioxide (etCO2) module was connected to the intubation. Changes in the etCO2 level were monitored and recorded. Chest compression was performed on the patient continuously until ROSC was achieved or death was announced by the emergency personnel. Surviving patients were sent to the emergency room for further treatment, where chest compression was performed in the same way. Patients who demonstrated ROSC before or after admission to the emergency room were changed to a ventilator for ventilation assistance. The ventilator was operated in a mode of volume control ventilation (tidal volume, 8 ml/kg; inspired oxygen concentration, 100%). 
Vasoactive drugs, antiarrhythmic drugs, a defibrillator [14], and the chest compression rate of the Manual CPR group were used or performed strictly following the 2010 AHA CPR guidelines. The ROSC status of patients admitted to the emergency room was recorded. Patients who achieved ROSC were admitted to the ICU, where physicians and nurses were not allowed to know whether the patient had been treated with an AutoPulse. The following changes in patient condition were continuously recorded: 24-h survival rate, hospital discharge rate, and neurological prognosis at hospital discharge. Neurological prognosis was assessed in all patients achieving ROSC using Cerebral Performance Category (CPC) scores, which was repeated in those who survived at 24 h and once again in patients at the time of hospital discharge. Patients with multiple CPC scores were analyzed with data from the last scoring.\n\nBODY.MATERIAL AND METHODS.STATISTICAL ANALYSIS:\nStatistical analysis was performed using SPSS 13.0 statistical software (SPSS Inc, Chicago, IL, USA). Measurement data are expressed as mean ± standard deviation, while enumeration data are described using the number of cases (percentage). Basic and CPR data of the patients were compared as follows: measurement data using the t test and enumeration data using the χ2 or Fisher's exact probability test. Associations between treatment groups and all end points were analyzed using the χ2 test. Values of p < 0.05 were considered statistically significant.\n\nBODY.RESULTS:\nA total of 133 patients were included in this clinical trial. During the treatment, no patients were changed from the AutoPulse to manual CPR and vice versa. Patients' gender, age, and weight in the Manual CPR and AutoPulse CPR groups are listed in Table I. Weight was provided by the family of the patient or estimated by the researcher. There were no significant differences in the above indicators between the two groups (p > 0.05), nor were there significant differences detected in the witness (59.4% vs. 66.7%, p = 0.38) or composition of the cardiac rhythm types (p = 0.32).\n Table I Comparison of basic patient characteristics between the Manual CPR and AutoPulse CPR groups Parameter Manual CPR ( n = 64) AutoPulse CPR ( n = 69) P -value Male 44 (68.8) 50 (72.5) 0.64 Age, mean ± SD [years] 64.2 ±12.6 62.6 ±14.9 0.53 Weight, mean ± SD [kg] 68.4 ±8.9 67.3 ±8.1 0.47 Witness 38 (59.4) 46 (66.7) 0.38 Cardiac rhythm: 0.32 Ventricular fibrillation/pulseless ventricular tachycardia 8 (12.5) 9 (13) Pulseless electrical activity 20 (31.3) 31 (44.9) Asystole 32 (50.0) 24 (34.8) Others 4 (6.3) 5 (7.2) Data are presented as number (percentage) unless otherwise specified; mean values were compared using the t test, while percentages were compared using the χ 2 or Fisher test. \nTable II compares the CPR data of patients between the Manual CPR and AutoPulse CPR groups. There were no statistically significant differences in the number of patients receiving CPR before the arrival of professional emergency personnel between groups (25.0% vs. 27.5%, p = 0.70), the duration from the emergency call to the arrival of emergency care personnel (11.5 vs. 11.7 min, p = 0.70), or the total duration of CPR (33.4 vs. 19.6 min, p = 0.21) between groups. In the AutoPulse CPR group, the average device preparation time was 3.4 ±1.0 min. The rates of epinephrine and defibrillation utilization during CPR did not differ statistically significantly between the two groups (87.5% vs. 76.8%, p = 0.10 and 20.3% vs. 
15.9%, p = 0.51, respectively). However, patients in the AutoPulse CPR group had significantly higher etCO2 than those of the Manual CPR group (22.3 vs. 16.1 mm Hg, p < 0.05). Also, the 1-h return of spontaneous breathing (ROSB) rate was significantly higher in patients in the AutoPulse CPR group than in the Manual CPR group (24.6% vs. 10.9%, p = 0.04).\n Table II Comparison of patient resuscitation characteristics between the Manual CPR and AutoPulse CPR groups Variable Manual CPR ( n = 64) AutoPulse CPR ( n = 69) P -value CPR before arrival of emergency care personnel 16 (25.0) 19 (27.5) 0.74 Duration from call to arrival of emergency care personnel, mean ± SD [min] 11.5 ±3.1 11.7 ±2.8 0.70 Total CPR time [min] 33.4 (15.1) 19.6 (19.3) 0.21 AutoPulse preparation time, mean ± SD [s] * NA 52 ±23 Epinephrine 56 (87.5) 53 (76.8) 0.1 Defibrillation 13 (20.3) 11 (15.9) 0.51 EtCO 2 [mm Hg] 16.1 (5.4) 22.3 (6.1) < 0.001 1-h return of spontaneous ventilation 7 (10.9) 17 (24.6) 0.04 Data are presented as number (percentage) unless otherwise specified; mean values were compared using the t test, while percentages were compared using the χ 2 or Fisher test. * Time to apply the AutoPulse and thus terminate manual chest compressions, which was exactly the time to place the patient on the hard board, connect and fix the load-distributing belt, and start the AutoPulse device. The patients' clinical prognostic features are shown in Table III. In terms of the primary endpoint, the ROSC rate was significantly higher in the AutoPulse CPR group than in the Manual CPR group (44.9% vs. 23.4%; p = 0.009). In terms of the secondary endpoint events, the 24-h patient survival rate was significantly higher in the AutoPulse CPR group than in the Manual CPR group (39.1% vs. 21.9%; p = 0.03). Similarly, the hospital discharge rate was significantly higher in the AutoPulse CPR group than in the Manual CPR group (18.8% vs. 6.3%; p = 0.03). The neurological prognosis results are shown in Table IV. In the Manual CPR group, 13.4% of the ROSC patients were scored as CPC 1 or CPC 2 vs. 16.2% in the AutoPulse CPR group. Despite a relative increase in the latter group, there was no statistically significant difference (p = 1.00).\n Table III Comparison of patient clinical prognosis between the Manual CPR and AutoPulse CPR groups Parameter Manual CPR AutoPulse CPR P -value ROSC 15 (23.4) 31 (44.9) 0.009 24-h survival 14 (21.9) 27 (39.1) 0.03 Hospital discharge 4 (6.3) 13 (18.8) 0.03 Table IV Comparison of CPC scores of patients between Manual CPR and AutoPulse CPR groups CPC score * Manual CPR ( n = 15) AutoPulse CPR ( n = 31) P -value 1 1 (6.7) 2 (6.5) 1.00 # 2 1 (6.7) 3 (9.7) 3 4 (26.7) 5 (16.1) 4 0 (0) 1 (3.2) 5 9 (60.0) 20 (64.5) Data are presented as number (percentage) unless otherwise specified; mean values were compared using the t test, while percentages were compared using the χ 2 or Fisher test. * CPC scoring of all ROSC patients # comparison of the ratio of patients scored as CPC 1 or CPC 2 in ROSC patients at hospital discharge between AutoPulse CPR and Manual CPR groups. With regard to complications, there was a lack of detailed data for comparison of injuries between groups. This is because an autopsy was refused by the families of all patients due to Chinese tradition. As required by patient condition, bedside X-ray examinations were performed upon admission to the emergency room; additionally, all deceased patients were subjected to bedside X-ray examination. 
The results showed that the incidence of rib fracture was 6.7% in the AutoPulse group (4/60), slightly higher than that in the Manual CPR group (4.8%, 3/63), but the difference was not statistically significant (p = 0.71). Due to the following condition of the illness, 6 patients (two in the Manual CPR group and four in the AutoPulse CPR group) underwent abdominal computed tomography (CT) examinations, in which no abdominal organ injuries were found.\n\nBODY.DISCUSSION:\nStudies have shown that the increased circulating blood flow in CPR is closely related to the success rate of resuscitation [2]. Animal studies in a porcine model of CA confirmed that the AutoPulse generated better blood circulation than manual chest compression [7, 8]. Similar conclusions were obtained in a number of human studies [9, 15, 16]. A retrospective study found that the ROSC rate of patients with CA was significantly higher in those treated with the AutoPulse than in those treated with manual CPR [10]. A recent multi-center prospective study compared the effects of the AutoPulse CPR versus manual CPR, which found that at one of the medical centers, the survival rate of patients in the AutoPulse group declined from 19.6% to 4% after the implementation of an updated experimental program with delayed use of the AutoPulse (p = 0.024); at the other medical centers, the 4-h survival rate at the primary endpoint was significantly higher in the AutoPulse CPR group than in the Manual CPR group (p = 0.008) [17]. The earlier studies confirm that the AutoPulse plays a positive role in CPR and that stable effective blood circulation is a key factor of CPR success. Our study found that the 1-h ROSB rate was significantly higher in the AutoPulse CPR group than in the Manual CPR group. Basic research has shown that the respiratory center is located in the medulla oblongata and that even a few minutes of ischemia and hypoxia injury can cause irreversible brain damage. Thus, the higher ROSB rate indicates that patients treated with the AutoPulse achieved better cerebral blood supply and that the AutoPulse is superior to manual CPR for improving cerebral blood flow. Additionally, we found that the etCO2 level was significantly higher in the AutoPulse CPR group compared to the Manual CPR group. Under normal physiological conditions, the etCO2 level is determined by CO2 production, pulmonary alveoli ventilation capability, and pulmonary blood flow. When CA occurs, CO2 is continuously generated in vivo. The major influencing factors of CO2 discharge into the lung are CO2 transmission speed and volume from the peripheral tissues (CO2-generating site) to the lung, which depend on the pulmonary blood flow. Thus, if the ventilation remains constant, then the etCO2 level changes reflect cardiac output changes as well as the effect of CPR. In recent years, several studies have reported that etCO2 can be used as a noninvasive monitoring indicator for CPR that is related to prognosis [18, 19]. The 2010 AHA CPR guidelines clarify that during CPR of patients with CA, etCO2 can be used as a physiological parameter to indicate the cardiac output and myocardial perfusion, which helps to optimize the quality of chest compression. Together our results indicate that the AutoPulse is superior to manual CPR for improving cardiac blood flow. 
Thus, we consider that better cardiac and cerebral blood circulation achieved by mechanical chest compression compared to manual chest compression is the main reason for the high success rate of CPR with the use of AutoPulse. Additionally, uninterrupted chest compression in a patient by an AutoPulse even during delivery may be another important factor contributing to the high ROSC rate [1–3]. The importance of continuous chest compression was emphasized by the 2010 AHA CPR guidelines. A recent study showed that in the helicopter emergency medical service, the ROSC rate was significantly higher in the AutoPulse CPR group than in the Manual CPR group, mainly because the former strategy achieved sustained chest compression during transportation [20]. A simulation study on a human body suggested that compared to manual chest compression, the AutoPulse achieves more sustained and stable chest compression during transportation [21]. From the occurrence of CA to emergency room admission, patients need to be carried for transportation at least twice. In this process, manual chest compression will inevitably be interrupted, whereas the AutoPulse can avoid this problem. Although AutoPulse preparation requires time (average 52 s in our study), it may bring more benefit by continuous chest compression during transportation than the potential risks that arise from CPR interruption during device preparation. Cerebral resuscitation remains a major problem in medical science [22, 23]. A large-scale clinical trial showed that only 27% of adults achieved good recovery of neurological function after CPR [24]. Therefore, the ability of the AutoPulse in improving cerebral resuscitation and neurological function recovery should be of concern. Despite the above statement, patients seem to have better cerebral blood supply in the AutoPulse CPR group than in the Manual CPR group. Animal experiments also demonstrated that compared to manual CPR, AutoPulse-assisted CPR more significantly improved the neurological prognosis in a porcine model of CA. However, it is disappointing that we observed no statistically significant difference in neurological prognosis between the AutoPulse CPR and Manual CPR groups. In terms of complications, there is evidence that compared to manual CPR [25], AutoPulse-assisted CPR increases the risk of organ damage [26–28]. In particular, Truhlar [29] proposed that caution should be taken in the use of AutoPulse in thrombolysis to avoid hemorrhage in the abdominal cavity. Because none of the patients' families consented to an autopsy, we have no detailed data for comparative analysis of relevant organ damage between the two groups. However, bedside X-ray films were taken in all patients upon admission to the emergency room, showing that the incidence of rib fractures was slightly lower in the AutoPulse CPR group than in the Manual CPR group. Thereafter, 6 patients were subjected to abdominal CT examination as required by their condition, and none had abdominal organ damage. Because not all patients underwent the above checks, we could not conclude that the AutoPulse is safer than manual CPR in terms of rib fracture and abdominal organ injury or that there is no difference between the risks of organ damage between the AutoPulse and manual CPR. It is worth noting that a recent study using a porcine model of ventricular fibrillation indicated that manual chest compression is advantageous compared to the AutoPulse for improving hemodynamic parameters [30]. 
A retrospective comparative study by Jennings [31] found that compared to conventional CPR, AutoPulse-assisted CPR improves the hospital admission rate but reduces the hospital discharge rate of patients with OHCA. Whether the AutoPulse can improve the prognosis of patients with OHCA remains controversial. Considering previous studies [12, 13] and the characteristics of the emergency medicine in the Shanghai area, we used a prospective, randomized, controlled method in this study. On the basis of the previous studies [11, 12] and the characteristics of emergency medicine, ROSC was chosen as the primary outcome measure, and 24-hour survival, discharge rate and discharge neurologic prognosis were chosen for secondary outcome measures. The parameters of etCO2, defibrillation, use of epinephrine, etc, were also analyzed in the results. Due to the uncertain time and places of cardiac arrest onset, it was difficult to achieve complete randomization for the research. Hallstrom et al. [12] assigned the patients to two groups randomly according to the emergency medical services (EMS) station with or without the AutoPulse. However, it was impossible for us to follow because our study was performed in a single emergency medicine center. So we finally equipped half of the ambulances with an AutoPulse randomly by using a computer. It may be the most relevant method to achieve randomization as far as possible. This prospective study has brought new findings at both the primary and secondary endpoints of clinical events. Data comparison showed that the ROSC, 24-h survival, and hospital discharge rates of patients with OHCA were significantly higher in the AutoPulse CPR group than in the Manual CPR group. However, in the final neurological prognosis, there was no statistically significant advantage in patients with OHCA within the AutoPulse CPR group. Our work represents the first prospective controlled study regarding the clinical use of the new chest compression device AutoPulse for CPR of patients with OHCA in China. The results are of guiding value for application of this device worldwide, especially in China's Emergency System. The major limitations of our study are as follows: 1) the small sample size may lead to a biased conclusion; and 2) it is difficult to achieve complete randomization based on the grouping method for the ambulance visiting mode because of the particularity of pre-hospital emergency treatment. Due to our present finding that the AutoPulse can improve the ROSC and 24-h survival rates of patients with OHCA, a multi-center prospective study has been designed for in-depth research in China. In conclusion, in Shanghai's Emergency Medical System in the northern district, AutoPulse CPR is superior to manual CPR for increasing resuscitation success and survival rates in patients with OHCA. This device also improves cardiac and cerebral blood circulation in patients but has no significant positive effect on their neurological prognosis. Thus, we conclude that the AutoPulse is worthy of clinical use but that its application prospects require further comprehensive evaluation.\n\n**Question:** Compared to manual chest compression for cardiopulmonary resuscitation (CPR) of out-of-hospital cardiac arrest (OHCA) what was the result of AutoPulse automated chest compression device on The hospital discharge rate of the patients with OHCA?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. 
Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increase
|
492
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Interferon-gamma in the first-line therapy of ovarian cancer: a randomized phase III trial\n\n ABSTRACT:\nIntraperitoneal treatment with interferon-γ (IFN-γ) has been shown to achieve surgically documented responses in the second-line therapy of ovarian cancer. To assess its efficacy in the first-line therapy, we conducted a randomized controlled trial with 148 patients who had undergone primary surgery for FIGO stage Ic–IIIc ovarian cancer. In the control arm women received 100 mg m−2cisplatin and 600 mg m−2cyclophosphamide, the experimental arm included the above regimen with IFN-γ 0.1 mg subcutaneously on days 1, 3, 5, 15, 17 and 19 of each 28-day cycle. Progression-free survival at 3 years was improved from 38% in controls to 51% in the treatment group corresponding to median times to progression of 17 and 48 months (P = 0.031, relative risk of progression 0.48, confidence interval 0.28–0.82). Three-year overall survival was 58% and 74% accordingly (n.s., median not yet reached). Complete clinical responses were observed in 68% with IFN-γ versus 56% in controls (n.s.). Toxicity was comparable in both groups except for a mild flu-like syndrome, experienced by most patients after administration of IFN-γ. Thus, with acceptable toxicity, the inclusion of IFN-γ in the first-line chemotherapy of ovarian cancer yielded a benefit in prolonging progression-free survival. © 2000 Cancer Research Campaign\n\n**Question:** Compared to Cisplatin and Cyclophosphamide what was the result of Cisplatin and Cyclophosphamide, plus subcutaneous IFN-γ on Full clinical responses?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
493
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Interferon-gamma in the first-line therapy of ovarian cancer: a randomized phase III trial\n\n ABSTRACT:\nIntraperitoneal treatment with interferon-γ (IFN-γ) has been shown to achieve surgically documented responses in the second-line therapy of ovarian cancer. To assess its efficacy in the first-line therapy, we conducted a randomized controlled trial with 148 patients who had undergone primary surgery for FIGO stage Ic–IIIc ovarian cancer. In the control arm women received 100 mg m−2cisplatin and 600 mg m−2cyclophosphamide, the experimental arm included the above regimen with IFN-γ 0.1 mg subcutaneously on days 1, 3, 5, 15, 17 and 19 of each 28-day cycle. Progression-free survival at 3 years was improved from 38% in controls to 51% in the treatment group corresponding to median times to progression of 17 and 48 months (P = 0.031, relative risk of progression 0.48, confidence interval 0.28–0.82). Three-year overall survival was 58% and 74% accordingly (n.s., median not yet reached). Complete clinical responses were observed in 68% with IFN-γ versus 56% in controls (n.s.). Toxicity was comparable in both groups except for a mild flu-like syndrome, experienced by most patients after administration of IFN-γ. Thus, with acceptable toxicity, the inclusion of IFN-γ in the first-line chemotherapy of ovarian cancer yielded a benefit in prolonging progression-free survival. © 2000 Cancer Research Campaign\n\n**Question:** Compared to Cisplatin and Cyclophosphamide what was the result of Cisplatin and Cyclophosphamide, plus subcutaneous IFN-γ on Toxicity?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
497
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Double-Blind Placebo-Controlled Treatment Trial of \n\n ABSTRACT:\n\n\t\t\tObjective: The purpose of this study was to determine if treatment of pregnant women with Chlamydia trachomatis infection would lower the incidence of preterm delivery and/or low birth weight.\n\t\t \n\t\t\tMethods: Pregnant women between the 23rd and 29th weeks of gestation were randomized in double-blind fashion to receive either erythromycin 333 mg three times daily or an identical placebo. The trial continued until the end of the 35th week of gestation.\n\t\t \n\t\t\tResults: When the results were examined without regard to study site, erythromycin had little impact on reducing low birth weight (8% vs. 11%, P = 0.4) or preterm delivery (13% vs. 15%, P = 0.7). At the sites with high persistence of C. trachomatis in the placebo-treated women, low birth weight infants occurred in 9 (8%) of 114 erythromycin-treated and 18 (17%) of 105 placebo-treated women (P = 0.04) and delivery <37 weeks occurred in 15 (13%) of 115 erythromycin-treated and 18 (17%) of 105 placebo-treated women (P = 0.4).\n\t\t \n\t\t\tConclusions: The results of this trial suggest that the risk of low birth weight can be decreased by giving erythromycin to some women with C. trachomatis. Due to the high clearance rate of C. trachomatis in the placebo group, these data do not provide unequivocal evidence that erythromycin use in all C. trachomatis-infected women prevents low birth weight.\n\n**Question:** Compared to Placebo what was the result of Erythromycin on Low birth weight ?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
505
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Oral mucositis and selective elimination of oral flora in head and neck cancer patients receiving radiotherapy: a double-blind randomised clinical trial\n\n ABSTRACT:\n\n\nBODY:\nRadiotherapy in head and neck cancer patients can induce oral mucositis, which is an acute inflammation of the oral mucosa. Until now no effective intervention has been developed to prevent oral mucositis in radiotherapy (Sutherland and Browman, 2001). This prevention is even more relevant now because altered fractionation schedules for the treatment of head and neck malignancies induce more severe mucositis (Kaanders et al, 1999). All patients receiving radiotherapy in the head and neck region develop oral mucositis to some extent, depending on radiation schedule, radiation field, radiation volume and cumulative dose (Miralbell et al, 1999). Clinically, mucositis appears in a conventional radiation scheme after a cumulative radiation dose of 10–20 Gy as a white discoloration of the mucosa because of hyperkeratinisation. The next stage is a deepening erythema followed by the development of pseudomembranes and ulcerations. Severe mucositis, appearing as pseudomembranes, will develop at the end of the third week of radiation, after about 30 Gy (Baker, 1982; Spijkervet et al, 1989b). Prevention of severe mucositis is important because mucositis affects the patient's feeding status, physical and mental well-being and it can influence the course of radiotherapy (Sutherland and Browman, 2001). Further oral pain because of mucositis has a serious impact on the quality of life of patients (Miralbell et al, 1999). Several mechanisms are supposed to play a role in the development of mucositis: changes at the cellular level of the basal cell layer, inflammatory process in the epithelium and influence of bacteria on mucosal surface. Changed oral flora, colonising the oral mucosa, may aggravate the mucosa reaction because of radiation (Van Saene and Martin, 1990). The carriage and colonisation of aerobic Gram-negative bacilli are thought to play a role in the pathogenesis of irradiation mucositis (Bernhoft and Skaug, 1985). A hypothesis has been proposed on the development of mucositis in four consecutive phases, in which the ulcerative/bacterial phase is thought to play a role in the development of fibrous pseudomembranes of the oral mucosa (Sonis, 1998). A pilot study in 15 patients reported the protective effect of an antibiotic lozenge for selective elimination of the oral flora (Spijkervet et al, 1990). Less severe mucositis and a less mean mucositis score compared to a historical control group was observed. None of the PTA-(polymyxin E 2 mg, tobramycin 1.8 mg and amphotericin B 10 mg) treated patients needed nasogastric tube feeding. In a cohort study including 36 patients, it was found that PTA lozenges may reduce irradiation mucositis (Kaanders et al, 1995). 
In contrast, randomised studies reported conflicting effects on mucositis by selective oral flora elimination (Symonds et al, 1996; Okuno et al, 1997; Wijers et al, 2001). The aim of this study was to evaluate in a randomised, double-blind, placebo-controlled trial the effects of selective oral flora elimination on the development of irradiation-induced oral mucositis, feeding, weight loss and colonisation of aerobic Gram-negative bacilli and yeast.\n\nBODY.PATIENTS AND METHODS.PROTOCOL:\nPatients with a malignant tumour in the head and neck regions to be treated with primary curative or postoperative radiotherapy were eligible for this study. Inclusion criteria for the study were: external bilateral irradiation via parallel-opposed portals by a linear accelerator (4–6 MeV), fractionation of 2 Gy daily, five times a week, with a prescribed dose of at least 50 Gy and at least 50% of the oral mucosa in the field of radiation. The dose specification was in line with ICRU 50 recommendations (Anonymous, 1993). Criteria for exclusion were: (1) an oral mucosa defect other than related to tumour surgery; (2) need for an obturator or resection prosthesis and; (3) treatment with antibiotics for an oral infection the last 2 weeks before the start of irradiation. As a standard procedure all patients were evaluated before radiation treatment for potential risk factors for oral complications by means of a thorough oral and dental evaluation, including a radiographic examination. All potential risk factors were eliminated appropriately before the start of radiotherapy. The supportive oral care regimen consisted of a daily protocol of cleansing the oral cavity by means of spraying with saline by the dental hygienist, and mouth rinsing by the patients with a salt–baking soda solution at least eight times a day to remove sticky saliva and debris. Dentate patients applied a neutral fluoride gel every second day with custom-made trays and edentulous patients were not allowed to wear their dentures during the course of radiotherapy (Jansma et al, 1992). The Medical Ethical Committee approved the study and all eligible patients gave written informed consent.\n\nBODY.PATIENTS AND METHODS.ASSIGNMENT:\nThe eligible patients were randomised to receive active lozenges of 1 g containing polymyxin E 2 mg, tobramycin 1.8 mg and amphotericin B 10 mg (PTA) or placebo lozenges. The ingredients of the placebo lozenge were identical with the PTA lozenge except the active drugs. The colour, taste and form of the PTA and placebo lozenges were identical as well. Randomisation was performed by the hospital pharmacist according to a computer-generated, randomised allocation schedule. Patients, clinicians, dental hygienists and microbiologists were blind for who was taking antibiotics. The patients used a PTA or placebo lozenge four times daily starting the first day of irradiation during the total radiation period.\n\nBODY.PATIENTS AND METHODS.ASSESSMENTS:\nThe study period included only the first 5 weeks of radiation because of the wide range of field changes above 50 Gy of radiation. During the study period mucositis, feeding and body weight scores were performed at the start of radiotherapy and twice weekly (Monday–Thursday). The assessments were performed by an assigned dental hygienist. For each patient a mean weekly score was calculated on the basis of these two scores. These mean scores were used for further statistical analyses. 
Twice weekly (Monday–Thursday) and two times before the start of radiation oral washings were obtained to examine the oral flora for Gram-negative bacilli, Candida species, viridans streptococci, Enterococci, Staphylococcus aureus and coagulase-negative Staphylococci.\n\nBODY.PATIENTS AND METHODS.ASSESSMENTS.MUCOSITIS:\nThe mean mucositis was scored by using qualitative and quantitative parameters (Spijkervet et al, 1989b). Four different local signs of mucositis (k) might be distinguished: 1=white discoloration; 2=erythema; 3=formation of pseudomembranes; 4=ulceration. Mucositis of the oral cavity was determined for maximally eight distinguishable irradiated areas of the mouth: buccal mucosa (left and right), soft and hard palates, dorsum and border of the tongue (left and right), and the floor of the mouth. The degree of mucositis of each area was scored according to the local signs of mucositis. The length (E) of the local sign of mucositis was measured: 1=⩽1 cm; 2=1–2 cm; 3=2–4 cm; 4=⩾4 cm. The degree of mucositis was defined as the product of the values k and E. The mucositis score was defined as the mean of the scores assigned to the irradiated areas. The mucositis was also scored according to the WHO score (grade 0=normal, no mucositis; grade 1=soreness and erythema; grade 2=erythema, ulcers, can eat solids; grade 3=ulcers, requires liquid diet only; grade 4=alimentation not possible) (Anonymous, 1979).\n\nBODY.PATIENTS AND METHODS.ASSESSMENTS.FEEDING:\nThe quality of feeding was scored (0=normal, no changes; 1=symptoms without medication; 2=symptoms with medication; 3=liquid diet only; 4=nasogastric tube feeding) and body weight was determined. Afterwards the changes in weight were scored.\n\nBODY.PATIENTS AND METHODS.ASSESSMENTS.MICROBIOLOGICAL METHODS:\nTo acquire an oral washing, patients gargled and rinsed their mouth with 10 ml sterile saline for 30 s, and spit it into a sterile vial. One millilitre of the sample was diluted in 9 ml of Brain Heart infusion (BHI) (Oxoid, Basingstoke, England) and this suspension was serially diluted in BHI. The suspensions were than plated out onto 5% sheep blood agar, McConkey-3 agar (Oxoid) and Yeast morphology agar (Merck, Darmstadt, Germany). The agar plates and BHI broth cultures tubes were incubated overnight aerobically at 37°C. If an agar plate did not show growth and the corresponding BHI broth culture of the dilution series did show turbidity, then this suspension was plated again onto the agars mentioned above. With this enrichment step even low numbers of Candida species and Gram-negative bacilli could be detected (Spijkervet et al, 1989a). By reading and counting the plates after incubation the viable numbers of microorganisms per millilitre was estimated. The identification was performed by standard microbiological techniques.\n\nBODY.PATIENTS AND METHODS.ASSESSMENTS.DEFINITIONS:\nCarriage of a particular microorganism was defined as the condition in which a patient showed a minimum of two consecutive oral washings positive for that organism. Colonisation index of the oral cavity was defined as the sum of logarithms of the concentrations of a particular microorganism isolated from 1 ml of oral-washing specimens divided by the number of oral washings.\n\nBODY.PATIENTS AND METHODS.STATISTICAL ANALYSIS:\nSample size calculation of this study was based on the study by Spijkervet (Spijkervet et al, 1990). A two-sided α of 5% and a power of 80% were used. 
Additionally, a 50% reduction of mucositis in the PTA group was determined as clinically relevant with a normal incidence of mucositis of 80%. Based on these assumptions, 27 patients in each group would be sufficient. Intention-to-treat analysis was performed. The difference of dropouts between both groups was analysed using Fisher's exact test. The results were analysed, with respect to mean mucositis, the loss of weight, and colonisation numbers for five different microorganisms (t-test for independent samples) and the WHO mucositis score and feeding (Mann–Whitney U test). Two-sided tests, performed at the 5% level of significance, were used.\n\nBODY.RESULTS:\nPatient characteristics are shown in Table 1\n Table 1Patient characteristicsPatients characteristicsPlaceboPTAP-valueaAge mean±sd (years)54 (10.8)56 (12.5)0.36Gender: male/female (n)22/1024/90.79 Tumor site Oral cavity (n)17230.41 Oropharynx (n)108 Hypopharynx (n)11 Unknown primary(n)41 Histology Squamous (n)31321.0 Other (n)11 T-stage T1 (n)450.25 T2 (n)65 T3 (n)88 T4 (n)914 bTx (n)51 N-stage N0 (n)10140.12 N1 (n)610 N2a (n)21 N2b (n)93 N2c (n)13 N3 (n)42 Surgery Yes (n)21270.17 No (n)116 Dentures Yes (n)20250.29 No (n)128 All differences between the groups were analysed using χ2 test exceptain which t-test was used.bSix patients with an unknown primary tumour.. From January 1994 to February 1997, 65 patients were included, 33 patients received PTA lozenges and 32 the placebo lozenges. Out of the 65 included patients, 58 patients were evaluable for the total evaluation period of 5 weeks. Seven patients (11%) dropped out earlier from the study, five of the PTA group (15%) and two of the placebo group (6%). The difference of dropouts between both groups was not significant. One patient (PTA) developed a skin reaction, unlikely caused by the PTA lozenges, and one patient (placebo) could not suck the lozenges because of the tumour surgery of his tongue. The other five dropped out for reasons not related to one of the lozenges. Of the seven dropouts, one patient stopped after 1 week, two patients after 2 weeks, one patient after 3 weeks and three patients after 4 weeks radiation.\n\nBODY.RESULTS.MUCOSITIS:\nThe mean mucositis was the same in the PTA group and the placebo group during the study period (P>0.2) (Figure 1Figure 1The mean mucositis score (±s.d.) for the PTA group (▵) and the placebo group (□).). During the 5-week observation period 89% of the patients in the PTA group developed pseudomembranes and in the placebo group 94%. The mucositis according to the WHO score did not differ throughout the study period between both groups (P>0.5). In the PTA group, 80% of the patients developed grades 3 and 4 mucositis according to the WHO score, and in the placebo group 90%. The appearance of pseudomembranes was for both groups at a similar radiation status; after 30 Gy of radiation.\n\nBODY.RESULTS.FEEDING:\nSix patients (19%) in the placebo group (n=32) and two patients (6%) in the PTA group (n=33) needed nasogastric tube feeding during the evaluation period (P=0.08).\n\nBODY.RESULTS.BODY WEIGHT:\nThe mean weight loss after 5 weeks of radiation was less in the PTA group by 1.3 kg (s.d. 3.0) than in the placebo group 2.8 kg (s.d. 2.9) (P=0.05).\n\nBODY.RESULTS.MICROORGANISMS:\nFor viridans streptococci, Enterococci, Staphylococcus aureus and coagulase-negative Staphylococci, a similar pattern of carriage and colonisation was found in both groups. 
The colonisation and carriage of Candida species at baseline was equal in both groups (P>0.8). During the first two radiation weeks the colonisation for Candida species showed an increase in the placebo group and a decrease in the PTA group. After 2 weeks, an increase in both groups was found but the difference between the two groups remained significant during the total study period (P<0.05) (Figure 2Figure 2Percentage of patients colonised for Candida species for the PTA group (lines) and the placebo group (solid).). During the first 4 weeks a significant difference was found for the carriage of Candida species (P<0.03). The colonisation and carriage of aerobic Gram-negative bacilli at baseline was equal in both groups (P=0.9). During the radiation period the colonisation in the PTA group was less than in the placebo group, but the difference was only significant in the second week of radiation (P=0.05) (Figure 3Figure 3Percentage of patients colonised for aerobic Gram-negative bacilli for the PTA group (lines) and the placebo group (solid).). During the first 2 weeks the carriage of aerobic Gram-negative bacilli was reduced in the PTA group (P<0.04). In weeks 3–5 the difference was no longer significant. All results are summarised in Table 2\n Table 2Results of the PTA–placebo group for mean mucositis, weight loss, carriage and colonisation index of Candida species and aerobic Gram-negative bacilli 012345 WeekXs.d.Xs.d.Xs.d.Xs.d.Xs.d.Xs.d.Mean mucositis PTA000.50.72.21.74.62.74.52.75.02.3 Placebo000.30.42.62.04.82.34.92.55.22.8 Weight loss PTA00−0.31.00.41.30.62.11.02.71.33.0 Placebo000.30.9a0.61.21.31.72.22.32.82.9a Candida col. Index PTA1.01.30.40.80.50.90.60.90.81.51.01.5 Placebo1.11.31.21.6a1.61.7a1.71.7a2.01.7a1.91.8a Candida carriage PTA (%)48 23.5 17.6 31.8 30.8 36 Placebo (%)52 76.5b 82.4b 68.2b 69.2b 64 Aerobic Gram-negative bac. col. Index PTA0.71.20.40.80.40.90.61.20.51.00.40.9 Placebo0.71.40.81.10.91.1a0.81.00.81.40.81.5 Aerobic Gram-negative bac. Carriage PTA (%)52.4 23.1 25 42.9 54.5 40 Placebo (%)47.6 76.9b 75b 57.1 45.5 60 aRepresents a significant difference between the PTA and placebo group using an independent sample t-test.bRepresents a significant difference between the PTA and placebo group using χ2 test..\n\nBODY.DISCUSSION:\nIn this study, no effect of selective oral flora elimination on mucositis was observed. The development of mucositis follows the same pattern as reported in an earlier cohort study (Spijkervet et al, 1989c). According to the WHO score, 80% of the patients in the PTA group and 90% in the placebo group developed mucositis grades 3 and 4. This severity of mucositis is in accordance with the outcomes of Okuno et al (1997). In other studies, different outcomes are reported. A significant reduction of mucositis in the PTA group was found by two groups (Spijkervet et al, 1990; Kaanders et al, 1995). Both studies are nonrandomised clinical trials and the PTA group is compared with a historical control group. A reduction in mucositis distribution and affected area, dysphagia and weight loss in the PTA group is reported by Symonds et al. (1996). From a total of 221 patients in that study, 98 (44%) patients had a larynx carcinoma (PTA=57, placebo=41). Of these patients, the radiation field included only a minor part of the oral mucosa, in which mucositis could develop. In the study of Okuno et al only a subjective patient-reported amelioration of mucositis was reported but no reduction was found in clinically observed mucositis (Okuno et al, 1997). 
The PTA group (n=54) in that study consisted of an unblinded (n=29) and a blinded (n=26) group. Only in the unblinded PTA group the mean mucositis, reported by the patients was lower than the placebo group. Recently, it was shown in a randomised study including 77 patients that selective oral flora elimination does not reduce radiation mucositis (Wijers et al, 2001). A problem in that study is the short evaluation time of only the first 3 weeks of radiotherapy. Whereas normally development of severe mucositis starts after 3 weeks of radiation (Kaanders et al, 1999). A complicating factor in comparing outcomes from different studies is the assessment method of mucositis. All studies used different scoring methods. Therefore, two scoring methods were used in the current study. The WHO score is a widely accepted method, but this score is a combination of local mucositis signs and general complaints (Anonymous, 1979). The other scoring method in the current study is based only on mucosal signs of mucositis (Spijkervet et al, 1989b). It therefore provides a more precise estimation of the mucositis development at the mucosal level. It further makes a comparison possible with outcomes of earlier publications (Spijkervet et al, 1989b; Parulekar et al, 1998). For future studies, we recommend the use of the OMAS-score from the mucositis study group because this scoring method is a reliable, well-validated and widely accepted method (Sonis et al, 1999). This scoring method was published later than the start of the current study and was therefore not used as a scoring method in this study. In the current study, patients who received PTA lozenges had less weight loss than patients receiving placebo lozenges, assuming a better feeding status of the PTA-group patients (mean difference 1.5 kg). Owing to the minimal effect of PTA on the mucositis level, we found the feeding outcome of minor clinical relevance. In our study, carriage and colonisation of aerobic Gram-negative bacilli and Candida species decreased in the PTA group but was not totally eradicated. These findings are in line with the findings of other studies (Spijkervet et al, 1990; Kaanders et al, 1995; Symonds et al, 1996; Wijers et al, 2001). Based on these findings and the development and severity of mucositis it can be concluded that the presence of Candida species and aerobic Gram-negative bacilli has no influence on the development of radiation-induced mucositis. The increase of the carriage and colonisation of Candida species and aerobic Gram-negative bacilli after 3 weeks of radiation may be explained by the development of xerostomia, which makes dissolving of the lozenges more difficult. Wijers et al tried to overcome this problem by using a paste instead of a lozenge. However, the paste appeared to be an unsuccessful form of application because already after randomisation 32% of the patients refused further participation and 77% of the patients dropped out after four study weeks because of bad taste and unpleasant sensation of the paste texture in the mouth (Wijers et al, 2001). In conclusion, PTA lozenges have a positive effect on the quality of feeding and the amount of weight loss but cannot prevent severe mucositis. The presence of Candida species and aerobic Gram-negative bacilli has no effect on the development and severity of radiation-induced mucositis. 
Based on our findings of this randomised clinical trial, we do not recommend this type of supportive care for the reduction or prevention of radiation mucositis.\n\n**Question:** Compared to Placebo lozenge what was the result of PTA lozenge on Weight loss?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
507
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Oral mucositis and selective elimination of oral flora in head and neck cancer patients receiving radiotherapy: a double-blind randomised clinical trial\n\n ABSTRACT:\n\n\nBODY:\nRadiotherapy in head and neck cancer patients can induce oral mucositis, which is an acute inflammation of the oral mucosa. Until now no effective intervention has been developed to prevent oral mucositis in radiotherapy (Sutherland and Browman, 2001). This prevention is even more relevant now because altered fractionation schedules for the treatment of head and neck malignancies induce more severe mucositis (Kaanders et al, 1999). All patients receiving radiotherapy in the head and neck region develop oral mucositis to some extent, depending on radiation schedule, radiation field, radiation volume and cumulative dose (Miralbell et al, 1999). Clinically, mucositis appears in a conventional radiation scheme after a cumulative radiation dose of 10–20 Gy as a white discoloration of the mucosa because of hyperkeratinisation. The next stage is a deepening erythema followed by the development of pseudomembranes and ulcerations. Severe mucositis, appearing as pseudomembranes, will develop at the end of the third week of radiation, after about 30 Gy (Baker, 1982; Spijkervet et al, 1989b). Prevention of severe mucositis is important because mucositis affects the patient's feeding status, physical and mental well-being and it can influence the course of radiotherapy (Sutherland and Browman, 2001). Further oral pain because of mucositis has a serious impact on the quality of life of patients (Miralbell et al, 1999). Several mechanisms are supposed to play a role in the development of mucositis: changes at the cellular level of the basal cell layer, inflammatory process in the epithelium and influence of bacteria on mucosal surface. Changed oral flora, colonising the oral mucosa, may aggravate the mucosa reaction because of radiation (Van Saene and Martin, 1990). The carriage and colonisation of aerobic Gram-negative bacilli are thought to play a role in the pathogenesis of irradiation mucositis (Bernhoft and Skaug, 1985). A hypothesis has been proposed on the development of mucositis in four consecutive phases, in which the ulcerative/bacterial phase is thought to play a role in the development of fibrous pseudomembranes of the oral mucosa (Sonis, 1998). A pilot study in 15 patients reported the protective effect of an antibiotic lozenge for selective elimination of the oral flora (Spijkervet et al, 1990). Less severe mucositis and a less mean mucositis score compared to a historical control group was observed. None of the PTA-(polymyxin E 2 mg, tobramycin 1.8 mg and amphotericin B 10 mg) treated patients needed nasogastric tube feeding. In a cohort study including 36 patients, it was found that PTA lozenges may reduce irradiation mucositis (Kaanders et al, 1995). 
In contrast, randomised studies reported conflicting effects on mucositis by selective oral flora elimination (Symonds et al, 1996; Okuno et al, 1997; Wijers et al, 2001). The aim of this study was to evaluate in a randomised, double-blind, placebo-controlled trial the effects of selective oral flora elimination on the development of irradiation-induced oral mucositis, feeding, weight loss and colonisation of aerobic Gram-negative bacilli and yeast.\n\nBODY.PATIENTS AND METHODS.PROTOCOL:\nPatients with a malignant tumour in the head and neck regions to be treated with primary curative or postoperative radiotherapy were eligible for this study. Inclusion criteria for the study were: external bilateral irradiation via parallel-opposed portals by a linear accelerator (4–6 MeV), fractionation of 2 Gy daily, five times a week, with a prescribed dose of at least 50 Gy and at least 50% of the oral mucosa in the field of radiation. The dose specification was in line with ICRU 50 recommendations (Anonymous, 1993). Criteria for exclusion were: (1) an oral mucosa defect other than related to tumour surgery; (2) need for an obturator or resection prosthesis and; (3) treatment with antibiotics for an oral infection the last 2 weeks before the start of irradiation. As a standard procedure all patients were evaluated before radiation treatment for potential risk factors for oral complications by means of a thorough oral and dental evaluation, including a radiographic examination. All potential risk factors were eliminated appropriately before the start of radiotherapy. The supportive oral care regimen consisted of a daily protocol of cleansing the oral cavity by means of spraying with saline by the dental hygienist, and mouth rinsing by the patients with a salt–baking soda solution at least eight times a day to remove sticky saliva and debris. Dentate patients applied a neutral fluoride gel every second day with custom-made trays and edentulous patients were not allowed to wear their dentures during the course of radiotherapy (Jansma et al, 1992). The Medical Ethical Committee approved the study and all eligible patients gave written informed consent.\n\nBODY.PATIENTS AND METHODS.ASSIGNMENT:\nThe eligible patients were randomised to receive active lozenges of 1 g containing polymyxin E 2 mg, tobramycin 1.8 mg and amphotericin B 10 mg (PTA) or placebo lozenges. The ingredients of the placebo lozenge were identical with the PTA lozenge except the active drugs. The colour, taste and form of the PTA and placebo lozenges were identical as well. Randomisation was performed by the hospital pharmacist according to a computer-generated, randomised allocation schedule. Patients, clinicians, dental hygienists and microbiologists were blind for who was taking antibiotics. The patients used a PTA or placebo lozenge four times daily starting the first day of irradiation during the total radiation period.\n\nBODY.PATIENTS AND METHODS.ASSESSMENTS:\nThe study period included only the first 5 weeks of radiation because of the wide range of field changes above 50 Gy of radiation. During the study period mucositis, feeding and body weight scores were performed at the start of radiotherapy and twice weekly (Monday–Thursday). The assessments were performed by an assigned dental hygienist. For each patient a mean weekly score was calculated on the basis of these two scores. These mean scores were used for further statistical analyses. 
Twice weekly (Monday–Thursday) and two times before the start of radiation oral washings were obtained to examine the oral flora for Gram-negative bacilli, Candida species, viridans streptococci, Enterococci, Staphylococcus aureus and coagulase-negative Staphylococci.\n\nBODY.PATIENTS AND METHODS.ASSESSMENTS.MUCOSITIS:\nThe mean mucositis was scored by using qualitative and quantitative parameters (Spijkervet et al, 1989b). Four different local signs of mucositis (k) might be distinguished: 1=white discoloration; 2=erythema; 3=formation of pseudomembranes; 4=ulceration. Mucositis of the oral cavity was determined for maximally eight distinguishable irradiated areas of the mouth: buccal mucosa (left and right), soft and hard palates, dorsum and border of the tongue (left and right), and the floor of the mouth. The degree of mucositis of each area was scored according to the local signs of mucositis. The length (E) of the local sign of mucositis was measured: 1=⩽1 cm; 2=1–2 cm; 3=2–4 cm; 4=⩾4 cm. The degree of mucositis was defined as the product of the values k and E. The mucositis score was defined as the mean of the scores assigned to the irradiated areas. The mucositis was also scored according to the WHO score (grade 0=normal, no mucositis; grade 1=soreness and erythema; grade 2=erythema, ulcers, can eat solids; grade 3=ulcers, requires liquid diet only; grade 4=alimentation not possible) (Anonymous, 1979).\n\nBODY.PATIENTS AND METHODS.ASSESSMENTS.FEEDING:\nThe quality of feeding was scored (0=normal, no changes; 1=symptoms without medication; 2=symptoms with medication; 3=liquid diet only; 4=nasogastric tube feeding) and body weight was determined. Afterwards the changes in weight were scored.\n\nBODY.PATIENTS AND METHODS.ASSESSMENTS.MICROBIOLOGICAL METHODS:\nTo acquire an oral washing, patients gargled and rinsed their mouth with 10 ml sterile saline for 30 s, and spit it into a sterile vial. One millilitre of the sample was diluted in 9 ml of Brain Heart infusion (BHI) (Oxoid, Basingstoke, England) and this suspension was serially diluted in BHI. The suspensions were than plated out onto 5% sheep blood agar, McConkey-3 agar (Oxoid) and Yeast morphology agar (Merck, Darmstadt, Germany). The agar plates and BHI broth cultures tubes were incubated overnight aerobically at 37°C. If an agar plate did not show growth and the corresponding BHI broth culture of the dilution series did show turbidity, then this suspension was plated again onto the agars mentioned above. With this enrichment step even low numbers of Candida species and Gram-negative bacilli could be detected (Spijkervet et al, 1989a). By reading and counting the plates after incubation the viable numbers of microorganisms per millilitre was estimated. The identification was performed by standard microbiological techniques.\n\nBODY.PATIENTS AND METHODS.ASSESSMENTS.DEFINITIONS:\nCarriage of a particular microorganism was defined as the condition in which a patient showed a minimum of two consecutive oral washings positive for that organism. Colonisation index of the oral cavity was defined as the sum of logarithms of the concentrations of a particular microorganism isolated from 1 ml of oral-washing specimens divided by the number of oral washings.\n\nBODY.PATIENTS AND METHODS.STATISTICAL ANALYSIS:\nSample size calculation of this study was based on the study by Spijkervet (Spijkervet et al, 1990). A two-sided α of 5% and a power of 80% were used. 
Additionally, a 50% reduction of mucositis in the PTA group was determined as clinically relevant with a normal incidence of mucositis of 80%. Based on these assumptions, 27 patients in each group would be sufficient. Intention-to-treat analysis was performed. The difference of dropouts between both groups was analysed using Fisher's exact test. The results were analysed, with respect to mean mucositis, the loss of weight, and colonisation numbers for five different microorganisms (t-test for independent samples) and the WHO mucositis score and feeding (Mann–Whitney U test). Two-sided tests, performed at the 5% level of significance, were used.\n\nBODY.RESULTS:\nPatient characteristics are shown in Table 1\n Table 1Patient characteristicsPatients characteristicsPlaceboPTAP-valueaAge mean±sd (years)54 (10.8)56 (12.5)0.36Gender: male/female (n)22/1024/90.79 Tumor site Oral cavity (n)17230.41 Oropharynx (n)108 Hypopharynx (n)11 Unknown primary(n)41 Histology Squamous (n)31321.0 Other (n)11 T-stage T1 (n)450.25 T2 (n)65 T3 (n)88 T4 (n)914 bTx (n)51 N-stage N0 (n)10140.12 N1 (n)610 N2a (n)21 N2b (n)93 N2c (n)13 N3 (n)42 Surgery Yes (n)21270.17 No (n)116 Dentures Yes (n)20250.29 No (n)128 All differences between the groups were analysed using χ2 test exceptain which t-test was used.bSix patients with an unknown primary tumour.. From January 1994 to February 1997, 65 patients were included, 33 patients received PTA lozenges and 32 the placebo lozenges. Out of the 65 included patients, 58 patients were evaluable for the total evaluation period of 5 weeks. Seven patients (11%) dropped out earlier from the study, five of the PTA group (15%) and two of the placebo group (6%). The difference of dropouts between both groups was not significant. One patient (PTA) developed a skin reaction, unlikely caused by the PTA lozenges, and one patient (placebo) could not suck the lozenges because of the tumour surgery of his tongue. The other five dropped out for reasons not related to one of the lozenges. Of the seven dropouts, one patient stopped after 1 week, two patients after 2 weeks, one patient after 3 weeks and three patients after 4 weeks radiation.\n\nBODY.RESULTS.MUCOSITIS:\nThe mean mucositis was the same in the PTA group and the placebo group during the study period (P>0.2) (Figure 1Figure 1The mean mucositis score (±s.d.) for the PTA group (▵) and the placebo group (□).). During the 5-week observation period 89% of the patients in the PTA group developed pseudomembranes and in the placebo group 94%. The mucositis according to the WHO score did not differ throughout the study period between both groups (P>0.5). In the PTA group, 80% of the patients developed grades 3 and 4 mucositis according to the WHO score, and in the placebo group 90%. The appearance of pseudomembranes was for both groups at a similar radiation status; after 30 Gy of radiation.\n\nBODY.RESULTS.FEEDING:\nSix patients (19%) in the placebo group (n=32) and two patients (6%) in the PTA group (n=33) needed nasogastric tube feeding during the evaluation period (P=0.08).\n\nBODY.RESULTS.BODY WEIGHT:\nThe mean weight loss after 5 weeks of radiation was less in the PTA group by 1.3 kg (s.d. 3.0) than in the placebo group 2.8 kg (s.d. 2.9) (P=0.05).\n\nBODY.RESULTS.MICROORGANISMS:\nFor viridans streptococci, Enterococci, Staphylococcus aureus and coagulase-negative Staphylococci, a similar pattern of carriage and colonisation was found in both groups. 
The colonisation and carriage of Candida species at baseline was equal in both groups (P>0.8). During the first two radiation weeks the colonisation for Candida species showed an increase in the placebo group and a decrease in the PTA group. After 2 weeks, an increase in both groups was found but the difference between the two groups remained significant during the total study period (P<0.05) (Figure 2Figure 2Percentage of patients colonised for Candida species for the PTA group (lines) and the placebo group (solid).). During the first 4 weeks a significant difference was found for the carriage of Candida species (P<0.03). The colonisation and carriage of aerobic Gram-negative bacilli at baseline was equal in both groups (P=0.9). During the radiation period the colonisation in the PTA group was less than in the placebo group, but the difference was only significant in the second week of radiation (P=0.05) (Figure 3Figure 3Percentage of patients colonised for aerobic Gram-negative bacilli for the PTA group (lines) and the placebo group (solid).). During the first 2 weeks the carriage of aerobic Gram-negative bacilli was reduced in the PTA group (P<0.04). In weeks 3–5 the difference was no longer significant. All results are summarised in Table 2\n Table 2Results of the PTA–placebo group for mean mucositis, weight loss, carriage and colonisation index of Candida species and aerobic Gram-negative bacilli 012345 WeekXs.d.Xs.d.Xs.d.Xs.d.Xs.d.Xs.d.Mean mucositis PTA000.50.72.21.74.62.74.52.75.02.3 Placebo000.30.42.62.04.82.34.92.55.22.8 Weight loss PTA00−0.31.00.41.30.62.11.02.71.33.0 Placebo000.30.9a0.61.21.31.72.22.32.82.9a Candida col. Index PTA1.01.30.40.80.50.90.60.90.81.51.01.5 Placebo1.11.31.21.6a1.61.7a1.71.7a2.01.7a1.91.8a Candida carriage PTA (%)48 23.5 17.6 31.8 30.8 36 Placebo (%)52 76.5b 82.4b 68.2b 69.2b 64 Aerobic Gram-negative bac. col. Index PTA0.71.20.40.80.40.90.61.20.51.00.40.9 Placebo0.71.40.81.10.91.1a0.81.00.81.40.81.5 Aerobic Gram-negative bac. Carriage PTA (%)52.4 23.1 25 42.9 54.5 40 Placebo (%)47.6 76.9b 75b 57.1 45.5 60 aRepresents a significant difference between the PTA and placebo group using an independent sample t-test.bRepresents a significant difference between the PTA and placebo group using χ2 test..\n\nBODY.DISCUSSION:\nIn this study, no effect of selective oral flora elimination on mucositis was observed. The development of mucositis follows the same pattern as reported in an earlier cohort study (Spijkervet et al, 1989c). According to the WHO score, 80% of the patients in the PTA group and 90% in the placebo group developed mucositis grades 3 and 4. This severity of mucositis is in accordance with the outcomes of Okuno et al (1997). In other studies, different outcomes are reported. A significant reduction of mucositis in the PTA group was found by two groups (Spijkervet et al, 1990; Kaanders et al, 1995). Both studies are nonrandomised clinical trials and the PTA group is compared with a historical control group. A reduction in mucositis distribution and affected area, dysphagia and weight loss in the PTA group is reported by Symonds et al. (1996). From a total of 221 patients in that study, 98 (44%) patients had a larynx carcinoma (PTA=57, placebo=41). Of these patients, the radiation field included only a minor part of the oral mucosa, in which mucositis could develop. In the study of Okuno et al only a subjective patient-reported amelioration of mucositis was reported but no reduction was found in clinically observed mucositis (Okuno et al, 1997). 
The PTA group (n=54) in that study consisted of an unblinded (n=29) and a blinded (n=26) group. Only in the unblinded PTA group the mean mucositis, reported by the patients was lower than the placebo group. Recently, it was shown in a randomised study including 77 patients that selective oral flora elimination does not reduce radiation mucositis (Wijers et al, 2001). A problem in that study is the short evaluation time of only the first 3 weeks of radiotherapy. Whereas normally development of severe mucositis starts after 3 weeks of radiation (Kaanders et al, 1999). A complicating factor in comparing outcomes from different studies is the assessment method of mucositis. All studies used different scoring methods. Therefore, two scoring methods were used in the current study. The WHO score is a widely accepted method, but this score is a combination of local mucositis signs and general complaints (Anonymous, 1979). The other scoring method in the current study is based only on mucosal signs of mucositis (Spijkervet et al, 1989b). It therefore provides a more precise estimation of the mucositis development at the mucosal level. It further makes a comparison possible with outcomes of earlier publications (Spijkervet et al, 1989b; Parulekar et al, 1998). For future studies, we recommend the use of the OMAS-score from the mucositis study group because this scoring method is a reliable, well-validated and widely accepted method (Sonis et al, 1999). This scoring method was published later than the start of the current study and was therefore not used as a scoring method in this study. In the current study, patients who received PTA lozenges had less weight loss than patients receiving placebo lozenges, assuming a better feeding status of the PTA-group patients (mean difference 1.5 kg). Owing to the minimal effect of PTA on the mucositis level, we found the feeding outcome of minor clinical relevance. In our study, carriage and colonisation of aerobic Gram-negative bacilli and Candida species decreased in the PTA group but was not totally eradicated. These findings are in line with the findings of other studies (Spijkervet et al, 1990; Kaanders et al, 1995; Symonds et al, 1996; Wijers et al, 2001). Based on these findings and the development and severity of mucositis it can be concluded that the presence of Candida species and aerobic Gram-negative bacilli has no influence on the development of radiation-induced mucositis. The increase of the carriage and colonisation of Candida species and aerobic Gram-negative bacilli after 3 weeks of radiation may be explained by the development of xerostomia, which makes dissolving of the lozenges more difficult. Wijers et al tried to overcome this problem by using a paste instead of a lozenge. However, the paste appeared to be an unsuccessful form of application because already after randomisation 32% of the patients refused further participation and 77% of the patients dropped out after four study weeks because of bad taste and unpleasant sensation of the paste texture in the mouth (Wijers et al, 2001). In conclusion, PTA lozenges have a positive effect on the quality of feeding and the amount of weight loss but cannot prevent severe mucositis. The presence of Candida species and aerobic Gram-negative bacilli has no effect on the development and severity of radiation-induced mucositis. 
Based on our findings of this randomised clinical trial, we do not recommend this type of supportive care for the reduction or prevention of radiation mucositis.\n\n**Question:** Compared to Placebo lozenge what was the result of PTA lozenge on Aerobic Gram-negative bacilli colonisation at 3-5 weeks?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
509
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Improvements in survival of the uncemented Nottingham Total Shoulder prosthesis: a prospective comparative study\n\n ABSTRACT.BACKGROUND:\nThe uncemented Nottingham Total Shoulder Replacement prosthesis system (Nottingham TSR) was developed from the previous BioModular® shoulder prosthesis taking into consideration the causes of the initial implant's failure. We investigated the impact of changes in the design of Nottingham TSR prosthesis on its survivorship rate.\n\nABSTRACT.METHODS:\nSurvivorship analyses of three types of uncemented total shoulder arthroplasty prostheses (BioModular®, initial Nottingham TSR and current Nottingham TSR systems with 11, 8 and 4 year survivorship data respectively) were compared. All these prostheses were implanted for the treatment of disabling pain in the shoulder due to primary and secondary osteoarthritis or rheumatoid arthritis. Each type of the prosthesis studied was implanted in consecutive group of patients – 90 patients with BioModular® system, 103 with the initial Nottingham TSR and 34 patients with the current Nottingham TSR system. The comparison of the annual cumulative survivorship values in the compatible time range between the three groups was done according to the paired t test.\n\nABSTRACT.RESULTS:\nThe 8-year and 11-year survivorship rates for the initially used modified BioModular® uncemented prosthesis were relatively low (75.6% and 71.7% respectively) comparing to the reported survivorship of the conventional cemented implants. The 8-year survivorship for the uncemented Nottingham TSR prosthesis was significantly higher (81.8%), but still not in the desired range of above 90%, that is found in other cemented designs. Glenoid component loosening was the main factor of prosthesis failure in both prostheses and mainly occurred in the first 4 postoperative years. The 4-year survivorship of the currently re-designed Nottingham TSR prosthesis, with hydroxylapatite coating of the glenoid baseplate, was significantly higher, 93.1% as compared to 85.1% of the previous Nottingham TSR.\n\nABSTRACT.CONCLUSION:\nThe initial Nottingham shoulder prosthesis showed significantly higher survivorship than the BioModular® uncemented prosthesis, but lower than expected. Subsequently re-designed Nottingham TSR system presented a high short term survivorship rate that encourages its ongoing use\n\nBODY.BACKGROUND:\nInflammatory or degenerative processes of glenohumeral joint lead to pain and restriction of movements of the shoulder. Prosthetic replacement of the glenohumeral joint has gained in popularity because of its efficacy in relieving pain. The pioneering successful prostheses for total shoulder arthroplasty have been based on an unconstrained design, i.e. a metal spherical head component fixed to a metal intramedullary stem articulating with a high-density polyethylene socket (Table 1). 
These components are stabilized in the adjacent bone using polymethylmethacrylate bone cement [1]. The important cause for failure of the cemented prostheses was related to the glenoid component, with a 0.01–6% rate of loosening [2-4]. Table 1 Long term survivorship data on cemented and the outcome of a large series of a cementless total shoulder replacement prostheses Reference Type of prosthesis No. of Patients Survivorship End point criteria Glenoid failure rate Overall failure rate Tarchia, Cofield & Settergren [5] Neer I & II cemented 113 [31 = OA, 36 = RA 12 = 2ary OA] 10years = 93% 15years = 87% Revision – severe pain, abd < 90°, ext rot < 20° 7/113 14/113 Brenner, Perlic, Clayton & Dennis [8] Neer II & Gristina cemented 51 [37 = OA 14 = RA] 11years = 75% Severe pain, radiographic evidence of component loosening 3/51 6/51 Cofield [6] Cofield cementless 180 [110 = OA 28 =RA 30 = 2ary OA 12 = revisions] Not calculated Revision 5/180 12/180 Pfahler et al [4] Aequalis cemented 705 [418 = OA 107 = RA 180 = 2ary OA] Not calculated Revision 9/705 43/705 OA = osteoarthritis, RA = rheumatoid arthritis The long term survivorship of the prosthesis developed by C. Neer for the cemented total shoulder arthroplasty (TSA) is almost the single one with well documented outcomes [5] with 87% fifteen year survivorship rate for Neer I & II cemented shoulder prostheses. This implant has become the gold standard, against which all the successive prosthetic designs are compared. Further developments of TSA implants have been aimed at enhancing longevity by addressing the following three most critical issues: (1) Improving the incorporation of the glenoid component using a more \"biological\" type of fixation in order to reduce the rate of mechanical loosening; (2) Designing a better glenoid component to achieve the lowest possible rate of wear; (3) Finding the best method for the fixation of the humeral component while allowing good preservation of the humeral bone stock, taking into consideration need for possible future revision surgery. These goals can potentially be achieved using an uncemented design, with press fit and/or tissue in-growth porous coating of the metal, at its bone interface. There is evidence that the porous coating at the proximal part of the stem is superior to the press fit design [6], possibly because of lesser stress shielding of the proximal humerus and subsequently less bone resorption, and preservation of the proximal humeral bone stock. Biological fixation of the glenoid component currently requires the use of a metal backing or metal base-plate that serves as the \"bone in-growth\" surface and results in a more even distribution of the compression forces on the bone. On this baseplate a high-density polyethylene insert is mounted, either molded onto the metal or fixed using some form of fastening mechanism at the time of surgery. This bearing should be at least 3 mm thick (at its thinnest part) to reduce polyethylene wear [2,7]. Currently there are very few long-term peer reviewed large series survivorship data on cemented TSAs [5,8] and no survivorship data on cementless designs (Table 1). Survivorship studies with commonly acceptable clinical outcome criteria are important, but currently there is no uniform agreement on these types of criteria. Most authors consider revision of the prosthesis an end point in its survivorship [9]. 
Furthermore published reports on single prosthetic designs so far have only provided short-term postoperative follow-up data or a small number of patients. All these factors result in wide confidence intervals in survivorship tables and lead to difficulty in drawing a meaningful interpretation of the results [9]. In spite of these problems we can reach some tentative conclusions from TSA outcome by different authors (Table 1). The best long-term outcome is that reported for the Neer II cemented prosthesis, with a 93% ten years survivorship. Short-term glenoid failure, requiring implant removal, reaches the rate of around 6% for cemented designs and 3% for cementless designs. Glenoid failure is the cause of between 20% – 50% of all failed TSAs, cemented or cementless. Survivorship of cemented TSA is highest in patients with rheumatoid arthritis. We are nor aware of survivorship data available for rheumatoid patients with cementless designs. It is logical to conclude that any cementless prosthetic design should possess at least the best survivorship characteristics of the cemented prostheses in order to be considered as an alternative. In order to achieve the goal of the desired survivorship rates of cementless TSA a chain of modification of the initial BioModular® prosthesis was employed with eventual evolvement of the current Nottingham TSR design. The main characteristics of this prosthesis are: (1) The use of an indexable offset modular head, to improve the anatomical configuration of the implant and the optimal soft tissue balancing [10,11]. (2) A porous proximal stem in order to eliminate the stress shielding effect. (3) Conformed radii of humeral and glenoid components in order to reduce point loading and point wear of the polyethylene glenoid liner [12]. (4) Hydroxyapatite lining of the glenoid baseplate-bone interface in order to provide a \"biologic\" milieu to improve an osseo-integration [13]. (5) An improved capture mechanism for holding the polyethylene bearing onto the base-plate in order to reduce the liner disengagement rate. Following these design changes we hypothesize that the TSA with cementless implantation will present improved survivorship rates. Therefore in order to estimate the improvement in a performance of the prostheses design we have compared survivorship data of the BioModular® prosthesis – with or without the \"Wallace\" prototype offset head (Fig 1) – with the survivorship of the initial design of the Nottingham TSR (Fig 2) and the latest design of the Nottingham TSR with the glenoid component base-plate covered by hydroxyapatite (Fig 3). We show that Nottingham TSR uncemented prosthesis has better short and midterm survivorship than the BioModular® uncemented design. Figure 1The BioModular® total shoulder arthroplasty prosthetic design is shown. (A) A titanium BioModular® stem. (B) An offset \"Wallace\" head. (C) The glenoid trays: on the right – the low-profile version, top row – view from the side, bottom row – view into the tray, showing the glenoid liner capture mechanisms. Figure 2The Nottingham TSR total shoulder arthroplasty prosthetic design is shown. (A) A chrome cobalt stem. (B) A glenoid tray showing the capture mechanism for the polyethylene liner. (C) A glenoid tray seen from the back, showing the areas for bone in-growth. (D) An off-set head with a morse taper assembling interface. Figure 3The Nottingham TSR total shoulder arthroplasty prosthetic design with glenoid base-plate coated with hydroxyapatite. 
(A) A lateral view of the base-plate with fixation screws and with mounted polyethylene liner. (B) A view on the bone interface side of the base-plate covered by hydroxyapatite.\n\nBODY.METHODS:\nSurvivorship analyses of three types of uncemented TSA prostheses, implanted for the treatment of disabling pain in the shoulder due to primary and secondary osteoarthritis or rheumatoid arthritis, were compared. Each type of the prosthesis studied was implanted in consecutive group of patients, i.e. Group 1: the BioModular® TSA prosthesis, Group 2: The initial Nottingham TSR prosthesis and Group 3: The most recently redesigned Nottingham TSR prosthesis. The indication for surgery and criteria for inclusion in the study were pain in the shoulder with functional disability combined with radiographic evidence of an advanced destruction of the humeral and glenoid articular surfaces. Patients compatible with these criteria but who were medically unfit for surgery, due to advanced systemic disease, were not offered the procedure and were therefore excluded from the study. Group 1 was comprised of 90 patients who were operated between 1989–1994 (15 men and 75 women, mean age 61 years, range 19 – 92 years). These patients had the uncemented BioModular® Total Shoulder Prostheses implanted, either with the standard non-offset head or the prototype (\"Wallace\") offset humeral head. The mean follow up period in this group was 8.8 years. Group 2 patients were treated with the initial Nottingham TSR cementless prosthesis where the glenoid component had no hydroxyapatite backing. This group included 103 patients – 12 men and 91 women, average age of 58 years (range: 20–84 years). This design has been used between 1994 and 1997 and the patients had a mean follow up period of 6.4 years. Group 3 comprised of 34 patients, who had a hydroxyapatite coated glenoid component base plate implanted, as part of their most recent Nottingham TSR. In this group there were two men and 32 women, with a mean age of 64 years (range 31–89 years). These patients were operated in 1998–99 and had a mean follow up of 3.2 years. The characteristics of the different groups of patients are given in the Table 2. All these patients were included in the survivorship analyses. Table 2 Characteristics of the study groups Study Group Age (years) Male/Female Mean follow up (years) OA RA 2 nd ary OA Total Number Group 1 Mean: 61 Range: 19–92 15/75 8.8 48 (1 with RC tear) 31 (2 with RC tear) 11 90 Group 2 Mean: 58 Range: 20–84 12/91 6.4 47 36 (2 with RC tear) 20 103 Group 3 Mean: 64 Range: 31–89 2/32 3.2 19 12 (1 with RC tear) 3 34 OA = osteoarthritis, RA = rheumatoid arthritis, RC – rotator cuff muscle For the comparison of the three TSA prostheses' clinical outcome we used a survivorship analysis according to the method described by Murray et al [14] which is based on Rothman's formula for the confidence limits determination. The criterion for failure in this series was revision surgery requiring removal or exchange of either part of or a whole prosthesis. The indications for these re-operations were: (1) An increased level of pain during follow-up, that appeared to be related to the implant, with restriction of external rotation to under 20° and abduction to under 60° and/or newly developed radiolucency around the glenoid peg or complete peri-prosthetic radiolucency at the metal-bone interface, more then 2 mm in width, around either the humeral or glenoid components [7]. 
The radiographic evaluation was done by the surgical team and by the radiologist; (2) Deep wound infection; (3) Migration of any of the prosthetic components. For the purpose of postoperative follow up and identification of the possible failure of the implants the patients were monitored annually. This review evaluation included estimation of the level of pain using a Visual Analog Scale, a clinical examination of the range of movements of the shoulder and radiographic evaluation of the shoulder with an anterior – posterior view and an axillary view to allow assessment of the alignment and position of the components, the presence of any change in position over time and measurement of any radiolucency at the prosthesis-bone interface. Information on patients who died during the follow up period, which is included in the survivorship analysis, was obtained either from the Registrar for Births, Marriages and Deaths or from the hospitals' registration systems and through direct contact with General Practitioners or relatives. The comparison of the annual cumulative survivorship values (quantitative type of variables) in the compatible time range between the three groups was done according to the paired t test and the p < 0.05 was considered significant. The TSA operations were performed through a proximally extended deltopectoral approach with a longitudinal clavicle osteotomy and a lesser tuberosity osteotomy [15]. This approach was used to facilitate glenoid exposure and to ensure stable deltoid and subscapularis muscle reattachment, that allows early postoperative shoulder mobilization [16]. The authors prefer this surgical approach also because it protects the deltoid muscle during retraction. After humeral head resection, humeral stem alignment was established using the anatomical neck as a guide to prosthesis placement, preserving the rotator cuff. Humeral medullary canal and glenoid surface preparation were carried out using specially designed reamers. With the standard BioModular® stems either a standard or an offset prototype modular head was used. The head geometry, apart from the off-set feature, was identical in the two types of prostetic heads used with the BioModular® implant. In the Nottingham TSR system humeral stems in four sizes, offset modular heads in five sizes and glenoid bearings in three thicknesses were available for optimal component fitting and soft tissue balancing. In six patients (three in the Group 1, two in the Group 2 and one patient in the Group 3 – Table 2) tears in rotator cuff muscles were identified and firmly repaired. Five of six patients with rotator cuff muscles tears suffered from the rheumatoid arthritis. During the first three months postoperatively every patient underwent intensive physiotherapy following a standard programme aimed at improving strength and range of movements. Since the surgical procedure was carried out through a deltopectoral approach with a lesser tuberosity osteotomy and its strong reattachment, an immediate postoperative rehabilitation was possible with very few limitations. Usually a work on external rotation and elevation was commenced from the first postoperative day with isometrics, passive and active elevation and external rotation up to the level of movement achieved during the surgery. Patients used the broad arm sling intermittently only during the first 2–3 postoperative days. 
The exercise programme was increased as the patients gain confidence and pain relief aiming to achieve a maximal possible active and passive range of movements.\n\nBODY.RESULTS:\nThe eight year survivorship of the initial Nottingham TSR design (Group 2) was higher (p < 0.001) than observed in Group 1 patients with implanted BioModular® prostheses (Fig 4,5). The eight-year cumulative survivorship in the Group 2 was 81.8% and remained constant from the sixth postoperative year (Fig 5). The eight – and ten- year cumulative survival rates of Group 1 (BioModular® prosthesis) were 75.6% and 71.7% (Fig 4). The main causes of failure of the BioModular® prosthesis were related to the glenoid component, i.e. aseptic glenoid component loosening in 13 patients and uncoupling of the polyethylene bearing liner in 4 patients, overall 71% of failed cases (Table 3). Seventy one percent of the failed cases occurred during the first four postoperative years (Table 3) showing four year cumulative survivorship rate of 80.9% (Fig 4). The main drop in survivorship of the initial Nottingham TSR occurred also during the initial four postoperative years and was mainly due to glenoid component failure (11 of the 17 failed cases, Table 4). About half of the failures in Group 2 during the eight years of the survivorship analysis were due to aseptic loosening of the glenoid base-plate in eight patients, six of these patients were treated for primary or secondary osteoarthritis (Table 5). Table 3 Time distribution of the occurrence of prostheses failure in the Group 1 (patients operated in 1989 – 94 with BioModular ® uncemented TSA) according to the mode of failure Year Post Op Glenoid Loosening Bearing failure Infection Dislocation 1 3 1 0 1 2 1 1 0 1 3 0 0 1 3 4 3 1 1 0 5 1 0 0 0 6 2 0 0 0 7 2 0 0 0 8 0 0 0 0 9 1 0 0 0 10 0 1 0 0 11 0 0 0 0 Total 13 4 2 5 Table 4 Time distribution of the occurrence of prostheses failure in the Group 2 (patients operated in 1994 – 97 with the initial Nottingham TSR prosthesis) according to the mode of failure Year Post Op Glenoid Loosening Bearing failure Infection Dislocation 1 0 1 0 0 2 0 3 0 1 3 2 1 0 2 4 4 0 0 1 5 0 0 0 0 6 1 0 0 0 7 1 0 0 0 8 0 0 0 0 Total 8 5 0 4 Figure 4This graph shows the survival results of BioModular® total shoulder arthroplasty (vertical bars represent 95% confidence intervals). Figure 5This graph shows the survival rate of the initial design for the Nottingham TSR (vertical bars represent 95% confidence intervals). Table 5 Failed uncemented Nottingham TSR prostheses in Group 2 (patients operated in 1994 – 97) according to their mode of failure Component Mode of Failure Loosening Infection Malposition: bearing failure/head dislocation, stem malposition Glenoid 8 (OA = 4, RA = 2, 2 nd ary OA = 2) 0 7 [OA = 4, 2 nd ary OA = 3] Humeral 0 0 2 [2 nd ary OA = 2] OA = osteoarthritis, RA = rheumatoid arthritis Survivorship in patients with rheumatoid arthritis (RA) in both groups 1 and 2 was higher (p < 0.001) than in patients with osteoarthritis. The patients with RA who had the BioModular® prosthesis implanted, presented a 96.8% and 93.1% cumulative five and eight year survivorships respectively (Fig 6) and the RA patients with the initial Nottingham TSR had presented a constant 94.4% cumulative survivorships from the fourth to eights year postoperatively (Fig 7). Figure 6This graph shows the survival rates of the BioModular® total shoulder arthroplasty in patients with RA (vertical bars represent 95% confidence intervals). 
Figure 7This graph shows the survival rate of the initial design of the Nottingham TSR in patients with RA (vertical bars represent 95% confidence intervals). The initial Nottingham TSR prostheses showed higher eight year survivorship than BioModular® prostheses in patients with OA (p < 0.01). Survivorship of the BioModular® prosthesis in patients with primary osteoarthritis was 70.9% and 64.3% at five and eight years respectively and for the initial Nottingham TSR these values were 84.5% and 80.4% (Fig 8,9). Figure 8This graph shows the survival rate of the initial design for the Nottingham TSR in patients with OA (vertical bars represent 95% confidence intervals). Figure 9This graph shows the survival results of the BioModular® total shoulder arthroplasty in patients with OA (vertical bars represent 95% confidence intervals). The observed four-year survivorship of the patients in Group 3 was also significantly higher than the survivorship rates of the BioModular® prosthesis observed after the first four years after implantation (p = 0.02), with a 93.1% cumulative four year survivorship of the re-designed Nottingham TSR (Fig 10) comparing to the 80.9% of cumulative four year survivorship of the BioModular® prosthesis (Fig 4). Among the Group 3 patients only two prostheses failed, both in patients with osteoarthritis (Table 6). One failure was due to glenoid bearing disassembly, two years after the operation, and the other due to glenohumeral dislocation three years after the operation (Table 7). Table 6 Failed uncemented Nottingham TSR prostheses in the Group 3 (patients operated in 1998 -99) according to the mode of failure Component Mode of Failure Loosening Infection Malposition: bearing failure/head dislocation Glenoid 0 0 2 (OA = 2) Humeral 0 0 0 OA = osteoarthritis Table 7 Time distribution of the occurrence of prostheses failures in the Group 3 (patients operated in 1998 – 99 with the current Nottingham TSR prosthesis) according to the mode of failure Year Post Op Glenoid Loosening Bearing failure Infection Dislocation 1 0 0 0 0 2 0 1 0 0 3 0 0 0 1 4 0 0 0 0 Total 0 1 0 1 Figure 10This graph shows the survival of the current Nottingham TSR with a hydroxyapatite coated glenoid baseplate (vertical bars represent 95% confidence intervals).\n\nBODY.DISCUSSION:\nThe evolving cementless TSA prostheses should present longevity that is comparable with or better than conventionally used cemented implants. We evaluated the short and midterm survivorship of the Nottingham TSR cementless prosthesis, with comparison to the survivorship of the BioModular® TSA prosthesis from which the Nottingham TSR evolved, in order to estimate the ability of these implants to achieve the desirable survivorship rates. The transitions from one design to another occurred following recognition of causes of implant failure. The use of the offset head on the standard BioModular® stem design did not prevent a considerably high loosening rate. The subsequent change to a conforming design with an identical radii of curvature of the humeral head component and glenoid component and to a different glenoid component with a conical peg, allowing press-fit implantation, led to significant improvement in the middle term survivorship among the patients who had an implant of the initial Nottingham TSR design. 
We are aware of suggestions by other authors, that full conformity between the components may lead to an enhanced stress on the globoid rim due to loss of the humeral head translation possible in the normal shoulder joint, and as a consequence a higher risk for prosthetic loosening. [12,17]. Since this hypothesis has been raised following cadaveric studies and in vivo radiographic evaluation and has been never confirmed in clinical trials and since the exact degree of optimal mismatch of the glenoid and the humeral head radii is not known, the authors preferred the conforming design. This choice of conformity is based on the hypothesis that mismatch of the glenoid and humeral component curvature can lead to a considerable rate of polyethylene wear due to uneven force distribution between the components and point loading and point wear of the polyethylene [12]. The midterm survivorship in the Group 2 patients shows that the original Nottingham TSR had less favorable results when compared with the existing published survivorship studies on the Neer I & II cemented implants [5], with the outcome studies of the Cofield uncemented prostheses [6] and with the \"Aequalis\" cemented prosthesis [4], but is comparable with the study on the Neer II and Gristina prostheses [8]. The comparison with the latter study is more realistic because the groups of patients are more comparable to our patients, i.e.both series consist mainly of patients with primary osteoarthritis, smaller groups of patients with rheumatoid arthritis and some patients with secondary osteoarthritis, and they have similar indications for failure recognition. Although the midterm survivorship of the initial Nottingham TSR design showed a significant improvement over the survivorship rate of the BioModular® prosthesis, especially in the patients with osteoarthritis, this improvement did not reach the desired values of survivorship above 90% that has been reported for the cemented Neer I&II prostheses. We note that the overall lower than desired midterm survivorship rates in the Group 2 patients are due to less favorable performance of the Nottingham TSR prosthesis in patients with primary osteoarthritis, who had an eight year survivorship rate of only 80.4%. The interesting finding is the evidence of a significantly higher (and in the desirable range of above 90%) survivorship in patients with rheumatoid arthritis compared to the patients with osteoarthritis in both Group 1 and Group 2. The difference in the survivorship between patients with osteoarthritis and rheumatoid arthritis is consistent with observations in other reports on other prosthetic shoulder arthroplasty systems [8] and can be explained by the lower level of demand placed on a shoulder prostheses in a rheumatoid patient. In spite of the improved survivorship in the initial design of the Nottingham TSR it still showed an unsatisfactory short term loosening rate. It should be realized that the two most serious complications of any prosthetic surgery, e.g. deep wound infection and periprosthetic fracture, did not occur among any of the patients treated by the initial design of the Nottingham TSR. This can probably be attributed to the use of an appropriate surgical technique and instrumentation for the prosthesis implantation. The glenohumeral dislocation rate appeared to be similarly low in Group 1, five of 90 patients, and Group 2, four of 103 patients, (Tables 3,4). 
This finding shows that the tissue balancing technique for the Nottingham TSR prosthesis implantation, i.e. systematic intraoperative evaluation of an adequate free subacromial space, flush and stable alignment of the glenoid and humeral components with correct offset of the humeral head, adequate anterior – posterior laxity and ability to reattach the lesser tuberosity with the arm in external rotation without loosing of the desired glenohumeral reduction, was effective. The main mode of failure of the prostheses in Group 2 patients was aseptic loosening of the glenoid component predominantly occurring in the first four postoperative years and the second most important cause of failure cause bearing disassembly during the same postoperative period (Table 4). After having identified the two main causes of failure for the intermediate design of the Nottingham TSR prosthesis, steps were taken to change of the design of the metal base-plate of the glenoid component. To achieve an optimal bone osseointegration coating of the implant with hydroxyapatite was introduced. In order to eliminate glenoid bearing disassembly improvement of the capture mechanism for the glenoid bearing was implimented. When looking at the data for Group 3, the addition of hydroxyapatite to the porous coating of the glenoid base-plate has eliminated the original 3.9–5.6% rate of aseptic glenoid loosening from the second to the fourth postoperative years (Tables 3, 4, 7). Additionally by improving the capture mechanism of the glenoid bearing its disassembly has now became rare (Tables 3, 4, 7). Following these changes in design the four-year survivorship of the Nottingham TSR prostheses in the Group 3 patients showed a satisfactory 93.1 % rate, which is significantly higher than the four-year survivorship rates of the BioModular® system. Since in the Groups 1 and 2 the deterioration in the survivorships occurred predominantly in the first four postoperative years, the present high four-year survivorship rate of the newly designed prosthesis might indicate on a sustained long-term improvement of the prosthesis survivorship. Since Group 3 comprised of only 34 patients we have not subdivided and compared the subgroups according to the underlying pathology as we did in Groups 1 and 2. Furthermore, because the number of patients in Group 3 is not sufficient for the adequate power of the statistical comparison with the survivorship in Group 2 patients we can only suggest that the further improvement of the present Nottingham TSR system survivorship is likely to be seen. Future long-term survivorship studies will verify this point more precisely. However, with regard to its initial design (Group 2), there is a clear evidence that the cementless Nottingham TSR system led to a significant improvement in its midterm survivorship comparing to its predecessor, the BioModular® prosthesis.\n\nBODY.CONCLUSION:\nWe have shown that the discussed cementless TSA prosthesis design (Nottingham TSR), following a chain of modifications according to recognition of previous causes of failure, have reached short term survivorship rates that are comparable to the conventional cemented designs. 
Therefore, in the light of potential long term \"biological\" advantages of uncemented implants, these results are encouraging for the ongoing use and development of this type of prosthesis.\n\nBODY.ABBREVIATIONS:\nTSA – total shoulder arthroplasty; TSR – total shoulder replacement\n\nBODY.COMPETING INTERESTS:\nThe authors have received research funding from Biomet Merck Ltd.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nNR – processed the data and wrote the paper. LN – collected and analyzed the data. AM – collected the data. IJM – collected the data. WAW – collected and analyzed the data.\n\nBODY.PRE-PUBLICATION HISTORY:\nThe pre-publication history for this paper can be accessed here:\n\n**Question:** Compared to BioModular® shoulder prosthesis what was the result of Redesigned Nottingham TSR prosthesis on 4-year survivorship?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
511
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Comparative Study of Low Doses of Rosuvastatin and Atorvastatin on Lipid and Glycemic Control in Patients with Metabolic Syndrome and Hypercholesterolemia\n\n ABSTRACT.BACKGROUND/AIMS:\nThis multicenter, open-labeled, randomized trial was performed to compare the effects of rosuvastatin 10 mg and atorvastatin 10 mg on lipid and glycemic control in Korean patients with nondiabetic metabolic syndrome.\n\nABSTRACT.METHODS:\nIn total, 351 patients who met the modified National Cholesterol Education Program Adult Treatment Panel III (NCEP ATP III) criteria for metabolic syndrome with low-density lipoprotein cholesterol (LDL-C) levels ≥ 130 mg/dL were randomized to receive either rosuvastatin 10 mg (n = 173) or atorvastatin 10 mg (n = 178) for over 6 weeks.\n\nABSTRACT.RESULTS:\nAfter 6 weeks of treatment, greater reductions in total cholesterol (- 35.94 ± 11.38 vs. - 30.07 ± 10.46%, p < 0.001), LDL-C (48.04 ± 14.45 vs. 39.52 ± 14.42%, p < 0.001), non-high-density lipoprotein cholesterol (- 42.93 ± 13.15 vs. - 35.52 ± 11.76%, p < 0.001), and apolipoprotein-B (- 38.7 ± 18.85 vs. - 32.57 ± 17.56%, p = 0.002) levels were observed in the rosuvastatin group as compared to the atorvastatin group. Overall, the percentage of patients attaining the NCEP ATP III goal was higher with rosuvastatin as compared to atorvastatin (87.64 vs. 69.88%, p < 0.001). Changes in glucose and insulin levels, and homeostasis model assessment of insulin resistance index were not significantly different between the two groups. The safety and tolerability of the two agents were similar.\n\nABSTRACT.CONCLUSIONS:\nRosuvastatin 10 mg was more effective than atorvastatin 10 mg in achieving NCEP ATP III LDL-C goals in patients with nondiabetic metabolic syndrome, especially in those with lower NCEP ATP III target level goals.\n\nBODY.INTRODUCTION:\nMetabolic syndrome consists of a group of cardiovascular risk factors, namely dyslipidemia, high blood pressure (BP), abdominal obesity, and insulin intolerance, whose concurrent appearance increases the risk of atherosclerotic cardiovascular disease [1]. Using the modified National Cholesterol Education Program (NCEP) Expert Panel on Detection, Evaluation, and Treatment of High Blood Cholesterol in Adults (Adult Treatment Panel III [ATP III]) criteria, the prevalence of atherosclerotic cardiovascular disease is estimated to be as high as 24.8% in Korea and is continuing to rapidly increase to epidemic proportions [2]. Elevated cholesterol levels have also been shown to be a strong risk factor for the development of coronary heart disease (CHD). This clustering of risk factors may interact synergistically to affect atherosclerosis and cardiovascular events [3]. 
Current guidelines for lipid management stress the importance of low-density lipoprotein cholesterol (LDL-C) levels as the primary goal of therapy [4]; however, a high proportion of patients, especially those having high lipid levels, do not achieve their target LDL-C levels despite lipid-lowering therapy [5,6]. Statins effectively lower blood cholesterol levels and reduce the risk of cardiovascular events in many patient types, and are therefore recommended as first-line agents for lowering LDL-C levels [4,7]. Statins also improve other aspects of the lipid profile, such as increasing high-density lipoprotein cholesterol (HDL-C) and lowering triglyceride levels to some extent. Rosuvastatin is a highly effective HMG-CoA reductase inhibitor, which was registered in 2002 in Korea. Rosuvastatin use has been previously shown in numerous studies to be associated with greater LDL-C level reductions as compared to atorvastatin, simvastatin, or pravastatin use [8-10]. The primary objective of the current trial was to compare the effects of rosuvastatin 10 mg with that of atorvastatin 10 mg, which are the lowest-dose tablets available, on the percentage of patients who reach the NCEP ATP III LDL-C goal and safety in subjects with nondiabetic metabolic syndrome after 6 weeks of treatment. The secondary objective was to compare the effects of rosuvastatin with that of atorvastatin on glucose control and insulin resistance.\n\nBODY.METHODS.STUDY DESIGN:\nThis 6-week, multicenter, randomized, open-label, parallel-group, single-dose trial (NCT00335699) was designed to compare the efficacy of a single dose of rosuvastatin and atorvastatin in patients having nondiabetic metabolic syndrome with dyslipidemia (Fig. 1). The study was conducted from August 2005 to January 2006 at 20 medical centers in Korea. The study included a 6-week dietary run-in period before randomization, followed by a 6-week treatment phase. Subjects entering the run-in period were asked to follow the NCEP Step I diet and required to discontinue any previous lipid lowering therapy. Following the dietary lead-in period, patients with fasting LDL-C levels ≥ 130 mg/dL to < 220 mg/dL were selected and randomly assigned to two parallel treatment groups. At baseline, eligible subjects were randomized 1 : 1 to receive either rosuvastatin (Astra-Zeneca Korea, Seoul, Korea) 10 mg or atorvastatin (Pfizer Pharmaceuticals Korea, Seoul, Korea) 10 mg once daily at bedtime for 6 weeks. The study drug was discontinued and subjects were removed from the study if they withdrew informed consent, became pregnant, or developed creatine kinase levels greater than 10 times the upper normal limit. The ethics committees and institutional review boards at each participating hospital approved the study protocol. All patients provided informed consent to participate in this study.\n\nBODY.METHODS.SUBJECTS:\nPatients were ≥ 18 years of age and had nondiabetic metabolic syndrome. Metabolic syndrome was defined according to the modified NCEP ATP III criteria [11], which requires at least three of the following: abdominal obesity (waist circumference): men > 90 cm (36 inches), women > 80 cm (32 inches); triglyceride levels ≥ 150 mg/dL (1.70 mmol/L); HDL-C levels: men < 40 mg/dL (1.04 mmol/L) and women < 50 mg/dL (1.3 mmol/L); BP ≥ 130 / ≥ 85 mmHg or subject receiving antihypertensive treatment; and fasting blood glucose 110 mg/dL (6.11 mmol/L) to 125 mg/dL (6.94 mmol/L). Patients were excluded if they were pregnant or had malignancy. 
Additional exclusion criteria included diabetes, and active arterial disease such as unstable angina, myocardial infarction, cerebrovascular accident, coronary artery bypass surgery, or angioplasty within 2 months prior to enrollment. After completing the 6-week dietary run-in period, fasting LDL-C concentrations were required to be ≥ 130 mg/dL (3.36 mmol/L) to < 220 mg/dL (5.69 mmol/L) and fasting triglyceride levels were required to be < 400 mg/dL (4.52 mmol/L).\n\nBODY.METHODS.ASSESSMENTS:\nSample analysis for efficacy endpoints was performed in the Green Cross Reference Laboratory, Yongin, Korea, which was certified by the American College of Pathology (LAP No. 6708401) and the National Committee for Clinical Laboratory Standards. Blood samples from patients who had fasted for 12 hours were collected at all investigational sites and delivered by courier to the central laboratory within 24 hours of blood draw. To assess the primary efficacy endpoint, lipid parameters such as total cholesterol, LDL-C, HDL-C, and triglyceride levels were measured during the dietary lead-in period, at randomization, and 6 weeks after treatment. Additionally, levels of apolipoprotein A-1 and B, high-sensitivity C-reactive protein (hsCRP), insulin, glucose, and hemoglobin A1c (HbA1c) were measured at randomization and at 6 weeks after treatment. LDL-C levels were calculated using the Friedewald equation (LDL-C = total cholesterol - (HDL-C + triglyceride/5). The insulin resistance index was estimated using the homeostasis model assessment (HOMA) for insulin resistance based on the following formula: fasting serum insulin (μU/mL) × fasting plasma glucose (mmol/L)/22.5. According to the NCEP ATP III guidelines, the goal LDL-C level for each patient and the proportion of patients achieving the goal in each group was assessed. Persons with CHD or CHD risk equivalent (Framingham 10-year CHD risk > 20%) had a LDL-C level goal of < 100 mg/dL. Those with multiple risk factors had a LDL cholesterol level goal of < 130 mg/dL and those with 0 - 1 risk factor (s) had a goal LDL cholesterol of < 160 mg/dL. Individual demographic data, physical findings, vital signs, and adverse events were evaluated and recorded in the given case record form. To evaluate adverse events, various laboratory assessments including blood counts, and hemoglobin, aspartate aminotransferase, alanine aminotransferase, creatine kinase, electrolyte, and creatinine levels were performed at each time point.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nOne-hundred and forty-three evaluable subjects per treatment group were required to achieve 95% power for detecting a clinically significant difference of 6% at the 5% two-sided level in percentage change from baseline in LDL-C levels at 6 weeks with an assumed standard deviation of 14% [12]. Assuming a dropout rate of 20% during the randomized treatment period, approximately 180 subjects were recruited to each active treatment group. To obtain the required number of randomized subjects (360 in total), approximately 900 subjects were assumed to be needed for screening based on a screening failure rate of 60%. The primary analysis population was the last observation carried forward on the intention-to-treat population. This included all subjects with a baseline and at least one post-baseline lipid level measurement. All numeric variables were expressed as the mean ± SEM (standard error of the mean). 
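To make the calculations referenced above concrete, the following is a minimal Python sketch (not part of the trial itself) of the Friedewald LDL-C estimate, the HOMA insulin-resistance index, and the standard normal-approximation sample-size formula under the stated design assumptions (SD 14%, detectable difference 6%, two-sided α = 0.05, power 95%); the function names and example inputs are illustrative only.

```python
from statistics import NormalDist

def friedewald_ldl(total_chol, hdl, triglycerides):
    """Friedewald estimate (mg/dL): LDL-C = TC - (HDL-C + TG/5), valid for TG < 400 mg/dL."""
    return total_chol - (hdl + triglycerides / 5)

def homa_ir(fasting_insulin_uU_ml, fasting_glucose_mmol_l):
    """HOMA insulin-resistance index: insulin (uU/mL) x glucose (mmol/L) / 22.5."""
    return fasting_insulin_uU_ml * fasting_glucose_mmol_l / 22.5

def n_per_group(sd, diff, alpha=0.05, power=0.95):
    """Normal-approximation sample size per arm for a two-sample comparison of means."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return 2 * ((z_a + z_b) * sd / diff) ** 2

print(friedewald_ldl(220, 45, 150))        # hypothetical inputs -> 145.0 mg/dL
print(round(homa_ir(12, 5.5), 2))          # hypothetical inputs -> 2.93
print(round(n_per_group(14, 6), 1))        # ~141.5, in the same range as the 143 per arm reported above
```

This is only a sketch of the standard formulas; the trial's own sample-size adjustment (e.g., for dropout) may differ slightly from the plain normal approximation shown here.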
Efficacy endpoints were analyzed using the unpaired t-test for continuous variables and Pearson's chi-square test for frequencies with 95% confidence intervals. Multivariate logistic regression analysis was used to evaluate the predictors for reaching target NCEP ATP III LDL-C levels after treatment. Variables used for analysis included the statin used, presence of coronary artery disease and hypertension, body mass index, gender, age, waist circumference, and lipid parameters. On the basis of the actual treatment received, safety data were evaluated for all patients who received at least one dose of study medication.\n\nBODY.RESULTS.SUBJECT CHARACTERISTICS:\nIn total, 645 subjects were screened for participation in this study. Of them, 370 patients entered the dietary lead-in phase and 351 patients met the inclusion criteria and were randomly assigned to treatment with either rosuvastatin 10 mg or atorvastatin 10 mg (Fig. 1). One patient was lost to follow-up and had no safety assessment. Table 1 shows demographic data and baseline characteristics of all 350 subjects who took at least one dose of the study drug at baseline. In terms of demographic data and baseline characteristics, no statistically significant differences existed between the two treatment groups. Patients had a mean age of 60 years in the rosuvastatin group and 58 years in the atorvastatin group. Mean body weights were 66 kg in the rosuvastatin group and 66 kg in the atorvastatin group. Mean systolic and diastolic BP and waist circumference were comparable between the two groups (Table 1). A total of five patients dropped out before efficacy assessment. Data from 346 patients were analyzed for efficacy in the intention-to-treat population, defined as those who took at least one dose of study drug and had lipid levels checked at baseline and follow-up. Safety assessments were performed in 350 patients who were randomized and available for follow-up.\n\nBODY.RESULTS.CHANGES IN METABOLIC PARAMETERS:\nLipid levels, glucose levels, insulin resistance indices, and hsCRP levels at baseline and 6 weeks are shown in Table 2. In each group, atherogenic lipid parameters including total cholesterol, LDL-C, triglyceride, non-HDL-C, and apolipoprotein B had significantly decreased after 6 weeks of treatment (p < 0.001 vs. baseline). Only the atorvastatin treatment produced a modest decrease in HDL-C. Rosuvastatin treatment significantly increased HbA1c and the HOMA index; however, no significant change occurred in the atorvastatin group. Data from the two groups were analyzed for an efficacy comparison in the intention-to-treat population. Baseline values of all parameters were similar between the two groups. At 6 weeks after treatment, rosuvastatin 10 mg produced a significantly greater reduction in total cholesterol, LDL-C, non-HDL-C, and apolipoprotein B levels. Otherwise, no significant differences were detected in HDL-C and apolipoprotein A-1 levels between the two groups. In addition, no significant differences were observed with respect to glucose, HbA1c, and hsCRP levels, and HOMA index between the rosuvastatin and atorvastatin groups at 6 weeks (Table 2). At 6 weeks, LDL-C absolute values decreased by 48.04 ± 14.45 mg/dL in the rosuvastatin group and by 39.52 ± 14.42 mg/dL in the atorvastatin group; the reduction associated with rosuvastatin use was significantly larger than that with atorvastatin (p < 0.0001). Percent changes from baseline in lipid profiles after treatment for 6 weeks, including LDL-C, are shown in Fig. 2. 
Reductions in total cholesterol (- 35.94 ± 11.38 vs. - 30.07 ± 10.46%, p < 0.0001), non-HDL-C (- 42.93 ± 13.15 vs. - 35.52 ± 11.76%, p < 0.0001), and apolipoprotein B (- 38.7 ± 18.85 vs. - 32.57 ± 17.56%, p = 0.0019) levels were larger in the rosuvastatin group as compared to the atorvastatin group (Fig. 2).\n\nBODY.RESULTS.LDL-C TARGET ACHIEVEMENT:\nAccording to the reported CHD or/and CHD risk equivalents and/or number of risk factors and/or Framingham 10-year risk, the NCEP ATP III LDL-C target goal was determined in each patient and the success rate in reaching their target goal was analyzed after 6 weeks in each group. The percentage of patients who reached their ATP III LDL-C level goals was higher in the rosuvastatin group (87.6 vs. 69.9%, p < 0.001). Among them, patients having LDL-C target cholesterol level goals of < 100 mg and < 130 mg reached their LDL-C target level goals more frequently in the rosuvastatin group as compared to the atorvastatin group. In contrast, in patients with a LDL-C target level goal < 160 mg, more than 96% reached their target goal without a significant difference between the rosuvastatin- and atorvastatin-treated groups (Fig. 3). The overall achievement rate for NCEP non-HDL-C level target goals after 6 weeks of treatment was 76.08% in the rosuvastatin group and 58.92% in the atorvastatin group (p = 0.067).\n\nBODY.RESULTS.PERCENT CHANGES IN GLUCOSE LEVELS AND INSULIN RESISTANCE:\nPercent changes in glucose levels and insulin resistance at 6 weeks are summarized in Table 3. Changes in glucose and insulin levels were not significantly different between the two groups; however, HbA1c levels were slightly higher in the rosuvastatin group with marginal significance. To evaluate insulin resistance in the two groups, the HOMA index was calculated. At 6 weeks, the HOMA index increased in both groups and the difference between groups was not significant.\n\nBODY.RESULTS.SAFETY:\nBoth rosuvastatin 10 mg and atorvastatin 10 mg were well tolerated, with similar incidences of adverse events. During the treatment period, 13 subjects in the rosuvastatin group and 9 subjects in the atorvastatin group reported adverse events (Table 4). The most frequent adverse events in the rosuvastatin group were edema and dizziness, both with incidences of 1.16%. Only five adverse events were reported in the atorvastatin group as related to the study drug; myalgia was reported in one case (0.56%). All adverse events were mild, developed within 2 weeks after starting treatment, had no action taken, and resolved spontaneously. No drug-related adverse effects were observed in the rosuvastatin group. Also, no patient had an increase in alanine aminotransferase level > 3 times the upper limit of normal or rhabdomyolysis.\n\nBODY.RESULTS.PREDICTORS FOR LDL-C LEVEL GOAL ACHIEVEMENT AT 6 WEEKS:\nOverall, the percentage of patients who reached NCEP ATP III LDL-C target level goals was higher in the rosuvastatin group as compared to the atorvastatin group. Univariate analysis showed that patients with target LDL-C levels at 6 weeks tended to be rosuvastatin-treated and have coronary artery disease. 
Multivariate logistic regression analyses that included age, gender, statin, coronary artery disease, hypertension, body mass index, waist circumference, baseline total cholesterol levels, and triglyceride levels showed that rosuvastatin treatment, the presence of coronary artery disease, female gender, lower total cholesterol level, and lower LDL-C levels at baseline were independent predictors for achievement of target LDL-C levels at 6 weeks (Table 5).\n\nBODY.DISCUSSION:\nThis study evaluated the comparative efficacy of the lowest doses available for two effective statins, rosuvastatin and atorvastatin, in Korean patients with nondiabetic metabolic syndrome. Rosuvastatin 10 mg was more effective than atorvastatin 10 mg in reducing LDL-C levels in subjects with nondiabetic metabolic syndrome after 6 weeks of treatment. Consistent with the greater reductions in LDL-C levels, more patients in the rosuvastatin group achieved LDL-C level goals as compared to the atorvastatin group. Otherwise, no significant difference was observed in glucose levels and insulin resistance. Metabolic syndrome, especially in the presence of high LDL-C levels, is already known to increase the risk of cardiovascular mortality and morbidity [13]. Statins are effective in decreasing LDL-C levels in patients with dyslipidemia. Survey studies have demonstrated that in real-world settings, only 67% of patients with treated dyslipidemia reach their LDL-C target level goals [14]. In this study, rosuvastatin treatment was associated with reaching recommended LDL-C level goals in a higher percentage of patients overall as compared to atorvastatin (87.6 vs. 69.9%). In particular, rosuvastatin was more effective in patients requiring more intensive LDL-C level lowering to less than 100 or 130 mg/dL. In high-risk patients with stronger targets of LDL-C levels < 100 mg/dL, rosuvastatin brought 83% of patients in this trial to the ATP III LDL-C level goal, which was higher than achieved in other studies conducted in South-Asian (76%) and Hispanic-American (61%) patients [15,16]. Both statins, however, were effective in patients with high target LDL-C level goals < 160 mg/dL. These data highlight the importance of using highly effective statins in high-risk patients to enable them to achieve their lower NCEP ATP III LDL-C level goals. With respect to other elements of the lipid profile, improvements in total cholesterol, apolipoprotein B, and non-HDL-C levels were also significantly greater with rosuvastatin as compared to atorvastatin, whereas changes in HDL-C, triglyceride, and apolipoprotein A1 levels were similar in both treatment groups. Unlike other studies in which rosuvastatin effectively raised HDL-C levels [9,15], HDL-C levels in this study were not effectively improved in either group [9]. Metabolic syndrome is associated with an increased risk of both insulin resistance and diabetes [17]. Additionally, changes in the insulin resistance index were investigated by evaluating the HOMA index, which is a positive predictor of metabolic syndrome [18]. Studies in an animal model of insulin resistance suggest that rosuvastatin treatment increases whole-body and peripheral tissue insulin sensitivity via improved cellular insulin signal transduction [19]. In contrast, in our study conducted in nondiabetic subjects, a tendency was detected for an increased HOMA index in both treatment arms. Major changes in this parameter were attributable to high increases in insulin concentrations. 
The degrees of percent change in fasting glucose, insulin concentrations, and HOMA index were not significantly different between the rosuvastatin and atorvastatin treatment groups. Thus, further studies are needed to elucidate the effects of statins on glucose metabolism, insulin secretion, and insulin sensitivity under diabetic or nondiabetic conditions. A multivariate analysis was performed to determine independent predictors of LDL-C goal achievement at 6 weeks. Overall, sex, the presence of coronary artery disease, LDL-C levels, and rosuvastatin treatment were predictive of target LDL-C achievement. Among these factors, rosuvastatin was the strongest predictor, with an odds ratio of 3.26. Moreover, the presence of coronary artery disease was an independent predictor of achieving target LDL-C levels. These patients were assumed to have been more likely to take interest in diet control or exercise than patients without coronary artery disease. Although the findings of this study are provocative, this study has important limitations. Recently, intensive regimens with 80 mg of atorvastatin or 20 mg of rosuvastatin have become available in Korea and produce greater reductions in atherosclerotic lipoprotein levels, which is particularly useful in patients with established coronary artery disease or acute coronary syndrome. Further studies comparing statins across dose ranges in patients not reaching their target goal with low-dose statins are required. Additionally, although changes metabolic parameters were not the primary endpoint of this study, a trend toward differences in blood glucose levels was observed between the two statins. Further studies are needed to elucidate the metabolic effects of statins. In conclusion, this study demonstrated that rosuvastatin 10 mg is significantly more effective than atorvastatin 10 mg in reducing LDL-C levels in patients with nondiabetic metabolic syndrome, especially among those with lower NCEP ATP III target level goals. Both statins were well tolerated.\n\n**Question:** Compared to Atorvastatin what was the result of Rosuvastatin on Total cholesterol after 6 weeks of treatment?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decreased
|
897
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Evaluation of the functional efficacy of an antioxidative probiotic in healthy volunteers\n\n ABSTRACT.BACKGROUND:\nIn persons without clinical symptom it is difficult to assess an impact of probiotics regarding its effect on health. We evaluated the functional efficacy of the probiotic Lactobacillus fermentum ME-3 in healthy volunteers by measuring the influence of two different formulations on intestinal lactoflora, fecal recovery of the probiotic strain and oxidative stress markers of blood and urine after 3 weeks consumption.\n\nABSTRACT.METHODS:\nTwo 3-week healthy volunteer trials were performed. Open placebo controlled (OPC) study participants (n = 21) consumed either goat milk or by L. fermentum ME-3 fermented goat milk (daily dose 11.8 log CFU (Colony Forming Units). Double blind randomised placebo controlled (DBRP) study participants (n = 24) received either capsules with L. fermentum ME-3 (daily of dose 9.2 CFU) or placebo capsules. The faecal lactoflora composition, faecal ME-3 recovery, effect of the consumption on intestinal lactoflora, and oxidative stress markers of blood (total antioxidative activity; total antioxidative status and glutathione red-ox ratio) was measured.\n\nABSTRACT.RESULTS:\nME-3 was well tolerated and a significant increase in total faecal lactobacilli yet no predominance of ME-3 was detected in all study groups. Faecal recovery of ME-3 was documented by molecular methods only in fermented milk group, however the significant improvement of blood TAA (Total Antioxidative Activity) and TAS (Total Antioxidative Status) indices was seen both in case of fermented goat milk and capsules\", yet glutathione re-ox ratio values decreased only in case of fermented by ME-3 goat milk.\n\nABSTRACT.CONCLUSION:\nThe functional efficacy of both consumed formulations of an antioxidative probiotic L. fermentum ME-3 is proved by the increase of the intestinal lactobacilli counts providing putative defence against enteric infections and by reduction of the oxidative stress indices of blood and urine of healthy volunteers. In non-diseased host the probiotic health claims can be assessed by improvement of some measurable laboratory indices of well-established physiological functions of host, e.g. markers of antioxidative defence system.\n\nBODY.BACKGROUND:\nProbiotics are defined as live microbial food supplements, which beneficially influence human health [1,2]. Widely accepted probiotics contain different lactic acid producing bacteria of human origin: bifidobacteria, lactobacilli or enterococci. Nowadays the concept of functional foods, incl. probiotic food and dietary supplements implies to their ability to beneficially influence body functions in order to improve the state of well-being and health and reduce the risk of disease [2,3]. 
The important areas of human physiology that are relevant to functional food science according ILSI (International Life Science Institute) and FUFOSE (The European Commission Concerted Action on Functional Food Science in Europe) are besides others, the modulation of basic metabolic processes and defence against high-grade oxidative stress [4,5]. Human nutrition is clearly associated with oxidative metabolism, which beside production of energy is involved in a number of vital functions of the host. For example, under physiological conditions the reactive species (including peroxyl radicals, nitric oxide radical, superoxide anion) figure a crucial role in primary immune defense of the human body by phagocytic cells against harmful microorganisms [6,7]. On the other hand, a prolonged excess of reactive species is highly damaging for the host biomolecules and cells, resulting in dysbalance of the functional antioxidative network of the organism and leading to substantial escalation of pathological inflammation [8]. By our knowledge, no systematic studies have been performed to approve the functional efficacy of different formulations of probiotic on the antioxidative defence system of a healthy human. In our previous study Lactobacillus fermentum ME-3 (DSM 14241) [9-11], expressed strong antimicrobial activity against Gram-positive and Gram-negative entero- and uropathogens [12,13]. The cells and cell lysate of L. fermentum ME-3 possessed substantial antioxidative potency [14]. In an animal experiment ME-3 suppressed the excessive oxidative stress reaction caused by Salmonella infection in intestinal mucosa and thus improved the gut mucosal antioxidative status [15]. The antioxidative effect of L. fermentum ME-3 on human body oxidative stress markers was confirmed by our pilot study with fermented goat milk [16]. The aim present study was to evaluate the functional efficacy of the probiotic strain L fermentum ME-3 in the human gastrointestinal tract (GIT) of healthy volunteers. The faecal recovery, effect of two different formulations on total faecal lactoflora and oxidative stress markers of blood and urine were compared after 3 weeks consumption.\n\nBODY.METHODS.FORMULATIONS:\nThe efficacy of two different formulations (experimental fermented goat milk and probiotic capsules) on the human body oxidative stress markers was evaluated. Lactobacillus fermentum ME-3, a probiotic strain of healthy human intestinal origin [17], has been identified by biochemical and molecular methods [9]. The patent application has been submitted to the Estonian Patent Agency (Application No. 0356/01PV) as well as to the International Bureau of World Intellectual Property Organization (WIPO) (Application No. WO03002131) [11]. L. fermentum ME-3 was used as freeze-dried powder in capsulated form and in fermented milk.\n\nBODY.METHODS.FORMULATIONS.CAPSULES:\nGelatine coated capsules were manufactured by the Tallinn Pharmaceutical Company. The freshly prepared probiotic capsules contained 9.0 logs CFU of L. fermentum ME-3 per capsule in addition to 250 mg of saccharose and microcellulose. Identical placebo capsules contained only saccharose and microcellulose. All capsules were stored at +4°C.\n\nBODY.METHODS.FORMULATIONS.SURVIVAL OF ME-3 IN CAPSULE:\nSurvival of ME-3 in capsule was monitored during 12 months at +4°C. The content of one capsule was dissolved aseptically in 2 ml of 0.9% NaCl solution. The suspension was vortexed, serially diluted and seeded 0.1 ml on de Man-Rogosa-Sharpe (MRS) agar medium (OXOID, U.K.) 
and incubated for 48 hours at 37°C microaerobically (10% CO2). The number of colonies was counted and the viable cell count per capsule was calculated.\n\nBODY.METHODS.FORMULATIONS.EXPERIMENTAL FERMENTED MILK:\nThree different lots of experimental fermented goat milk were prepared for the 3-week trial with healthy volunteers in order to establish the health effects of ME-3 consumption. The study group was supplied with fresh product once a week. Experimental fermented milk was prepared as described previously [16] by combining the probiotic strain with two supportive lactobacilli cultures, L. plantarum LB-4 and L. buchneri S-15. L. buchneri strain S1-5 decreased the specific taste of the goat milk. L. plantarum LB-4 was included as a strong producer of exopolysaccharides, which gives the fermented milk a cream-like consistency and delightful acidity. The goat milk was inoculated with a 2% mixture of the Lactobacillus strains and incubated at 37°C for 24 hours. The product, ready to use, was cooled and stored at 4°C.\n\nBODY.METHODS.FORMULATIONS.SURVIVAL OF L. FERMENTUM ME-3 IN FERMENTED GOAT MILK:\nTo measure the viable cell count of ME-3 in fermented goat milk, samples were taken at the end of fermentation (before cooling the product), and after 24 h, 32 h, 48 h and 7 days from the preparation, when the product was stored at 4°C. An amount of 0.5 ml of the fermented milk was serially diluted in saline, plated on MRS agar medium and incubated for 48 h at 37°C in microaerobic conditions.\n\nBODY.METHODS.DESIGN OF HUMAN VOLUNTEER TRIALS:\nTwo healthy volunteer trials (n = 45), namely an open placebo-controlled (OPC) study and a double-blind randomized placebo-controlled (DBRP) study, were carried out to evaluate the functional efficacy of L. fermentum ME-3 in the human body. The inclusion criteria were the wish to participate, no known health problems, no medical conditions requiring drug therapy, and no other yogurts or special diets. Subjects with a history of GIT disease, food allergy or acute infection, use of any antimicrobial agent within the last month, or use of any regular concomitant medication including antioxidant vitamins and non-steroidal anti-inflammatory drugs were excluded. The members of the trial were questioned daily about their general welfare, intestinal function (general welfare, gut gas production, stool frequency) and putative adverse effects. The withdrawal criteria from the trials included acute infections during the study. Reasons for dropout were unwillingness to proceed with the study or relocation to a new area. Blood samples (6 ml) from the antecubital vein, faecal samples and urine samples were collected before and at the end of all clinical trials. Participants of all trials gave informed consent to the study protocols approved by the Ethical Committee of Tartu University.\n\nBODY.METHODS.DESIGN OF HUMAN VOLUNTEER TRIALS.OPEN PLACEBO CONTROLLED FERMENTED GOAT MILK TRIAL:\nThe study participants were 5 men and 16 women, mean age 50 years (range 35–60). During the three weeks of the trial the study group (3 males and 13 females) consumed 150 ml of fermented goat milk daily. The daily dose of the probiotic Lactobacillus strain was 11.2 to 11.8 log CFU per person. The control group (1 male and 4 females) consumed the same dose of fresh goat milk.\n\nBODY.METHODS.DESIGN OF HUMAN VOLUNTEER TRIALS.PROBIOTIC CAPSULE TRIAL:\nA DBRP study (ISRCTN 53164826) was carried out as follows. 
The study group consisted of 15 men and 9 women, mean age 52 years (range 40–60) allocated according to their wish to participate and randomly divided by an independent person and computer program for two groups. The study group members (8 males and 4 females) took three probiotic containing capsules (8.4 log CFU per capsule) two times daily (the daily dose 9.2 log CFU) during three weeks. The placebo group (7 males and 5 females) received identical capsules without the probiotic strain. Faecal samples of all participants to assess change in faecal lactoflora and the persistence of the ingested probiotic strain were collected before and at the end of trial. Several laboratory indices of blood and urine were measured before and after the consumption of ME-3. Here we report on changes in human body oxidative stress markers as total antioxidative activity (TAA), total antioxidative status (TAS) and glutathione red-ox ratio (GSH/GSSG) from blood serum.\n\nBODY.METHODS.MICROBIOLOGICAL ANALYSES OF FAECES:\nThe total count of lactobacilli and the count of L. fermentum were evaluated in faecal samples. The faecal samples were collected at day 0 and 21 in both trials. Samples were kept at -80°C before analyzed. Serial dilutions (10-2–10-9) of the weighed faecal samples were prepared with phosphate buffer (pH 7.2) and 0.05 ml of aliquots was seeded onto MRS agar medium [17]. The plates were incubated at 37°C for 4 days microaerobically in 10% CO2 environment (incubator IG 150, Jouan, France). The catalase negative colonies were selected on the basis of typical for LAB colony morphology, cells microscopy and Gram staining. The count of Lactobacillus species was expressed in log10 colony forming units per gram faeces (log10 CFU/g) and percentage (relative share) in the total count of lactobacilli. The detection level of lactobacilli was a 3.0 log CFU/g faeces. The relative amount of L. fermentum, colonizing the gastrointestinal tract of persons in the study groups was expressed as a proportion of the total count (%), using the Bioquant program [18]). The program gives output data for every microorganism as an absolute count (log10 CFU/g) and their percentage in the total count with its normal values.\n\nBODY.METHODS.AP-PCR TYPING:\nThe putative ME-3 isolates were typed by arbitrarily primed polymerase chain reaction (AP-PCR). Genomic DNA was extracted from 24 h old cultures, cultivated on MRS agar microaerobically with the QIAamp DNA Mini Kit 50 (QIAGEN GmbH., Hilden, Germany) according to the manufacturers instructions. AP-PCR typing was done with two primers: ERIC1R (5'-ATGTAAGCTCCT GGGGATTCAC-3') and ERIC2 (5'-AAGTAAGTGACTGGGGTGAGCG -3') (DNA Technology A/S, Aarhus, Denmark). A 30 μl volume of reaction mixture consisted of 10 × PCR buffer (Fermentas, Vilnius, Lithuania), 2.5 mM MgCl2 (Fermentas, Vilnius, Lithuania), 200 μM deoxynucleoside triphosphate mixture (dATP, dGTP, dTTP and dCTP, Amersham Pharmacia Biotech, Freiburg, Germany) 0, 60 μg of each primer and 2.5 U Taq DNA Polymerase (Fermentas, Vilnius, Lithuania,) and 5 μl of extracted DNA according to Matsumiya et al. [19]. The PCR mixture was subjected to thermal cycling 35 cycles of denaturation at 95°C for 1 min, annealing at 35°C for 1 min, and extension at 74°C for 2 min, with a final extension at 74°C for 5 min with the PTC-200 thermal cycler (Eppendorf AG, Hamburg, Germany). 
The PCR products were separated by electrophoresis in a horizontal 2% agarose gel containing 0.1 μl/ml ethidium bromide in Tris-acetic acid-EDTA (TAE) buffer (40 mM Tris, 20 mM boric acid, 1 mM EDTA, pH 8.3) (Bio-Rad Laboratories, Hercules, USA) at constant voltage of 120 V. A 1 kb ladder (GeneRuler, Fermentas, Vilnius, Lithuania) was used as a base pair size marker. The banding patterns of isolates were visualized with UV light and compared with that of L. fermentum ME-3 strain.\n\nBODY.METHODS.MEASUREMENT OF HUMAN BODY OXIDATIVE STRESS STATUS:\nBlood serum was analysed for total antioxidative activity TAA, total antioxidative status TAS and glutathione red/ox ratio (oxidized glutathione and reduced glutathione, GSSG/GSH). TAA of the serum was assessed by the linolenic acid test (LA-test) described previously [16]. This test evaluates the ability of the sample to inhibit lipid peroxidation. TAS of the serum was measured with a commercially available kit (TAS, Randox Laboratories Ltd. Ardmore, UK) as described elsewhere [16], water-soluble vitamin E (Trolox) serving as a standard. This method is based on the inhibition of the absorbance of the ferrylmyoglobin radicals of 2,2'-azinobis-ethylbenzothiazoline 6-sulfonate (ABTS+) generated by activation of metmyoglobin peroxidase with H2O2. The cellular oxidative stress markers as total glutathione and oxidized glutathione were measured using the method of Griffith [20] as described elsewhere [16]. The glutathione content was calculated on the basis of a standard curve generated with known concentration of glutathione. Amount of GSH (μg/ml) was calculated as a difference between the total glutathione and GSSG (total glutathione – GSSG). The glutathione red/ox ratio was expressed as GSSG/GSH.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nThe computer program Sigma Stat for Windows 2.0 (Jandel Corporation, USA) was applied. The counts of faecal lactoflora were compared by using Student's t-test and Mann-Whitney rank sum test. Changes in oxidative stress markers of blood sera (TAA, TAS and glutathione red-ox ratio) were evaluated by Student's t-test, paired t-test and Mann-Whitney rank sum test. The choice of tests was made automatically according to the distribution of the data. Both microbial and biochemical markers were given as mean and standard deviation. One-way ANOVA test was performed to compare the effect of different formulation on TAA, TAS and faecal lactoflora parameters. Differences were considered statistically significant if the value was p < 0.05.\n\nBODY.RESULTS.SURVIVAL OF ME-3 IN FORMULATIONS:\nIn capsule after approximately 1-log drop after one week from the production of the capsules, the viable count of the probiotic strain remained stable at the level of 8.4 log CFU per capsule. Additional results have shown at +4°C the stability of the freeze-dried capsulated culture at least 17 months from the production. In fermented goat-milk the cell count of the probiotic strain varied insignificantly from 9.0 to 9.7 log CFU/ml from one preparation to the other. The viable count of ME-3 in the fermented goat milk was found to remain stable at least during 7 days of storage at 4°C.\n\nBODY.RESULTS.HUMAN VOLUNTEER TRIALS:\nNo dropouts were registered during volunteer trials, yet one participant was withdrawn from the probiotic capsule trial due to acute respiratory viral infection. 
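As a rough illustration of two quantities described in the Methods above — the log10 CFU counts derived from the serial-dilution plate counts and the glutathione red-ox ratio (GSSG/GSH, with GSH taken as total glutathione minus GSSG) — here is a minimal Python sketch; the helper names and input values are invented for the example and are not data from the trial.

```python
import math

def log10_cfu(colonies, dilution, plated_volume_ml):
    """log10 CFU per g (or ml): colonies / plated volume, corrected for the dilution factor."""
    cfu = colonies / plated_volume_ml / dilution
    return math.log10(cfu)

def glutathione_ratio(total_glutathione, gssg):
    """Red-ox ratio GSSG/GSH, with GSH = total glutathione - GSSG (same units for both)."""
    gsh = total_glutathione - gssg
    return gssg / gsh

# Hypothetical example: 150 colonies on the 10^-6 dilution plate, 0.05 ml plated
print(round(log10_cfu(150, 1e-6, 0.05), 1))     # ~9.5 log CFU/g
print(round(glutathione_ratio(1.15, 0.15), 2))  # 0.15
```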
No adverse effects on general welfare or changes in GI functionality were observed during the trial.\n\nBODY.RESULTS.HUMAN VOLUNTEER TRIALS.CHANGES IN TOTAL LAB COUNT:\nThe consumption of both ME-3 fermented milk and ME-3 capsules significantly increased the total count of lactobacilli in faeces as compared to the initial levels (Fig. 1). In contrast, in the group of volunteers consuming non-fermented goat milk there was even a decrease in total LAB counts during the 3-week trial, and no changes were found in the capsule placebo group. Figure 1. Increase of total fecal counts of lactobacilli in healthy volunteers consuming ME-3 in fermented goat milk and probiotic capsule. 1 – Day 0, 2 – Day 21. Significantly different from pre-treatment values (Student's t-test): * p < 0.05; significantly different from control (Student's t-test): ‡ p = 0.01\n\nBODY.RESULTS.HUMAN VOLUNTEER TRIALS.RECOVERY OF THE PROBIOTIC STRAIN:\nIn the goat milk group, L. fermentum as a species appeared in fecal samples of all individuals (n = 16) after consumption of fermented goat milk (Table 1). AP-PCR confirmed the recovery of ME-3 in the faeces of all study group members (Fig. 2). However, in neither trial did the administration of the ME-3 strain lead to the predominance of the Lactobacillus fermentum species (Table 1). In the probiotic capsule trial the strain ME-3 was not detectable among the L. fermentum isolates by AP-PCR.\n\nTable 1. Changes in fecal recovery of L. fermentum during healthy human volunteer trials\n| Group | Prevalence (%)*, Day 0 | Prevalence (%)*, Day 21 | Count (log10)†, Day 0 | Count (log10)†, Day 21 | Proportion (%)‡, Day 0 | Proportion (%)‡, Day 21 |\n| --- | --- | --- | --- | --- | --- | --- |\n| Goat milk trial, ME-3 (n = 16) | 25 (4/16) | 100 (16/16) | 7.0 ± 0.7 | 7.3 ± 1.4** | 21 | 13 |\n| Control (n = 5) | - | 20 (1/5) | - | 3.6 | - | 28 |\n| Capsule trial, ME-3 (n = 11) | 16.7 (2/12) | 33.3 (2/12) | 4.3 ± 0.5 | 5.8 ± 1.6 | 4 | 9 |\n| Placebo (n = 12) | 25 (3/12) | 16.7 (2/12) | 6.3 ± 2.5 | 8.0 ± 1.6 | 11 | 19 |\n* Percentage of subjects with fecal L. fermentum inside the group. ** Significantly different from the pre-treatment values (paired t-test): p < 0.001. † Median value ± SD. ‡ Proportion of L. fermentum among fecal LAB.\n\nFigure 2. Confirmation of the survival of L. fermentum ME-3 in the GIT of subjects receiving ME-3 fermented goat milk, by AP-PCR in a horizontal 2% agarose gel. a) From the left: M – DNA 1 kb Ladder, Line 1 – L. brevis ATCC 14869, Line 2 – L. buchneri ATCC 4005, Line 3 – L. reuteri DSM 20016, Line 4 – L. fermentum ATCC 14931, Line 5 – L. fermentum ME-3. b) From the left: M – molecular weight marker, Lines 1–16 – ME-3-like profiles from feces of goat milk trial study group participants\n\nBODY.RESULTS.HUMAN VOLUNTEER TRIALS.ANTIOXIDATIVE HEALTH EFFECT OF ME-3:\nA positive effect on the blood oxidative stress markers TAA and TAS was seen with both formulations (Fig. 3). In particular, the additive increase in the goat milk group was 6% for TAA and 9% for TAS as compared to the control group, but only 4% for TAA and 2.5% for TAS in the probiotic capsule study group as compared to placebo. Figure 3. Effect of ME-3 consumption in fermented goat milk and capsules on human blood oxidative stress markers: a) TAA (%) and b) TAS (mmol/l). 1 – Day 0, 2 – Day 21. Significantly different from pre-treatment values: *p < 0.05 (paired t-test); **p ≤ 0.01 (Student's t-test and paired t-test); ME-3 goat milk effect different from the effect of ME-3 in capsule form (ANOVA): ‡p ≤ 0.001. The effect of goat-milk consumption on the TAA and TAS values was significantly higher (p < 0.001) than that of the capsulated probiotic (Fig. 3). 
The decrease of the glutathione red-ox ratio was significant in both groups: the study Group (from 0.15 ± 0.01 to 0.11 ± 0.04 μg/ml, p < 0.01) and control (from 0.14 ± 0.03 to 0.11 ± 0.02 μg/ml, p < 0.01) in the goat milk trial. When the probiotic was consumed in capsulated form, no significant decrease was noticed in the glutathione red-ox ratio.\n\nBODY.DISCUSSION:\nOur aim was to evaluate the functional efficacy of the antimicrobial and antioxidative probiotic L. fermentum ME-3 in normal population with variable food intake. First of all the safety of the L. fermentum strain ME-3 was confirmed as no adverse side effects were registered in volunteers. Even relatively high (>1011 CFU) doses of consumed ME-3 had no negative impact on the hosts' general well-being. Lactobacillus fermentum as species, used in various food applications, has a well-established history of safe use and is evaluated as GRAS according to the Food and Drug Administration of the USA [21]. Second, a clear improvement of laboratory indices of antioxidative defence system of a healthy host was documented, using both formulations as fermented by L. fermentum ME-3 goat-milk and probiotic capsules. This effect was simultaneous with the increase of intestinal lactoflora of healthy volunteers even without necessity for faecal recovery of the strain. In the human population, persons without clinical symptoms have still a quite different health status, including stability, capacity and potency of antioxidative defence to counteract sufficiently to oxidative stress-caused adverse effects [7]. If a probiotic is able to exhibit a positive functionality on oxidative stress-related indices, it helps both to stabilize and promote the potency of the whole body antioxidative defence system in subclinical situations without disease symptoms. That in turn may have an impact for lowering the risk of atherosclerotic damage of blood vessels associated with several cardiovascular and neurodegenerative diseases [22-24]. In our study of healthy volunteers for validation of the antioxidative functionality of probiotic, four well-known oxidative stress markers of blood were chosen [25-27]. The state of the lipid fraction (including also LDL) in the antioxidative defence system of the blood is evaluated by TAA. TAS on the other hand reflects more the antioxidativity of the water-soluble fraction of the human blood. Among the measured blood sera markers both the TAA and TAS values were also improved in the two different study groups. However, there was found a significantly lower improvement of TAA and TAS values in cases of capsule than fermented goat-milk where the recovery of the strains was assessed by AP-PCR. Similarly, the reduction of the glutathione red-ox ratio was detected after the consumption of fermented by ME-3 goat-milk but not with the capsule. The crucial non-enzymatic cellular antioxidant is GSH [28] present in the millimolar range mainly in the red blood cells, liver, pancreas, kidneys, spleen, eyes, lungs and intestinal cells [29]. The oxidized form of glutathione becomes even at low concentrations toxic, and therefore in the cells the glutathione red-ox ratio is kept as low as possible. In the case of inflammation this balance is shifted towards the oxidized form, indicating non-physiological intracellular oxidative stress. Thus, our study shows that there is a good association between the mode of formulation of probiotic and expression of its functional properties inside the healthy host. 
The antioxidative potential of the food supplement containing ME-3 was excellent, as reisolates of the strain from the capsules expressed significantly higher TAA in comparison with the base values of the strain in vitro (data not shown). Unexpectedly, the shifts in the antioxidativity markers in the blood serum of participants of the probiotic capsule trial were less pronounced in comparison with ME-3 fermented goat milk. In particular, the explanation for the more pronounced positive shifts in oxidative stress markers of volunteers of the fermented goat milk trial could be the synergistic effect of the probiotic and the substrate. Milk is not just a carrier for the probiotic Lactobacillus strain, but contains natural \"lactogenic\" factors like lactose, minerals, vitamins and other components that enhance the metabolic activity of the ingested probiotic strain in the GIT. Both fermented goat milk and goat milk elevated the values of TAA and TAS. Goat milk contains different biomolecules (e.g. casomorphins, lactorphins, casokinins, etc.) having certain antioxidative properties, which can contribute to consumers' plasma antioxidative capacity [30-36]. This was proved by some antioxidative effect also in persons consuming non-fermented goat milk. However, the elevation of these indices was remarkably more pronounced in the fermented goat milk group; thus, the fermentation of goat milk with L. fermentum ME-3 results in an additive increase in total antioxidativity. Therefore, the provisional FAO regulations [37] suggesting the need for health claims by specified formulations of probiotic seem to be of the utmost importance. Additionally, in our study with experimental fermented milk the average daily dose of L. fermentum ME-3 (11.5 log CFU) was clearly higher than that of the capsules (max 9.5 log CFU). It is possible that this dose exceeds the amount of bacteria necessary for interacting with the intestinal mucosa and that the unattached lactobacilli are excreted with faeces. The finding of Saxelin and colleagues confirmed that the faecal recovery of the probiotic strain started from the consumption of more than 9.0 log CFU daily doses of capsulated LGG [38]. To our surprise, in the present study a similar dose did not result in faecal recovery of the strain. It is possible that the ME-3 strain germinated mainly in some upper parts of the intestinal tract where advantageous conditions for the survival and metabolic activity of probiotic lactobacilli were present. Using molecular tools, Marteau et al. showed that lactobacilli constituting only 7% of the faecal microflora made up to 30% of the microbial communities in the human colon [39]. If administered in lower quantities, as in the case of the capsule trial, ME-3 did not reach a detectable level in faecal samples. Yet, its presence in the gut was proved by the positive antioxidative health effect in blood but not in urine. Therefore it is understandable that the higher load of metabolically active probiotic bacteria in goat milk also resulted in their faecal recovery and the highest impact on the oxidative stress indices. Moreover, in our study the positive impact of ME-3 consumption on the host lactoflora was proved by the increase of faecal lactobacilli counts in all participants of the human volunteer studies. In experimental settings, high counts of intestinal lactobacilli have been shown to be an important defensive factor against enteric infections [40,41]. 
Though up to now the period of consumption of probiotics has not been defined, the 3-week ingestion of fermented goat-milk and capsule seemed enough for reaching the aims. It is important to mention that after consumption of ME-3, a strain with high antagonistic activity, neither the species nor the strain predominated among total lactoflora. This shows a well-granted microbial balance inside the gut, which cannot be disturbed by high load of probiotic bacteria. Apparently, the interconnected advanced metabolism of large gut microbiota keeps the proportions of different species quite stable. Some other investigators have obtained similar results showing the proportional increase of different microbial populations (bifidobacteria, coliforms) after administration of Lactobacillus sp. probiotic [42,43]. Thus, the functional efficacy of different formulations of anti-infectious and antioxidative probiotic L. fermentum ME-3 were proved both by the increase of the lactobacilli counts providing putative defence against infectious agents in gut and by reduction of the oxidative stress indices of blood and urine of healthy volunteers. Further, studies evaluating the efficacy of ME-3 as adjunct to conventional therapy in patients with atherosclerotic damages and a high-grade oxidative stress are ongoing.\n\nBODY.CONCLUSION:\nIn non-diseased host, the probiotic health claims can be assessed by improvement of some measurable laboratory indices of well-established physiological functions of organism. In our case, the possibility for augmentation of the antioxidative defence system by the probiotic L. fermentum ME-3 in normal population can be proposed.\n\nBODY.COMPETING INTERESTS:\nMarika Mikelsaar, Mihkel Zilmer, Tiiu Kullisaar, Heidi Annuk (Hynes) and Epp Songisepp are sharing the Estonian patent application: no. EE 2001 00356 29.06.01 and International Patent application: no. WO03002131.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nEpp Songisepp, and Pirje Hütt have been in charge of the microbiological analysis. The former has been also in charge of analyzing the results and writing of the manuscript. Jaak Kals was responsible for performance and management of the volunteer trials. Tiiu Kullisaar has been in charge of the biochemical analysis and writing the manuscript. Mihkel Zilmer has conducted the biochemical estimations and writing the manuscript. Reet Mändar has been in charge of the molecular analysis and revising the manuscript. Marika Mikelsaar is the main conductor of the L. fermentum ME-3 research; for this paper she has been in charge of the clinical trial design and writing the manuscript.\n\n**Question:** Compared to goat milk what was the result of L. fermentum ME-3 fermented goat milk on total fecal counts of lactobacilli?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
650
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: A randomized trial to monitor the efficacy and effectiveness by QT-NASBA of artemether-lumefantrine versus dihydroartemisinin-piperaquine for treatment and transmission control of uncomplicated \n\n ABSTRACT.BACKGROUND:\nMany countries have implemented artemisinin-based combination therapy (ACT) for the first-line treatment of malaria. Although many studies have been performed on efficacy and tolerability of the combination arthemeter-lumefantrine (AL) or dihydroartemisinin-piperaquine (DP), less is known of the effect of these drugs on gametocyte development, which is an important issue in malaria control.\n\nABSTRACT.METHODS AND RESULTS:\nIn this two-arm randomized controlled trial, 146 children were treated with either AL or DP. Both groups received directly observed therapy and were followed for 28 days after treatment. Blood samples were analysed with microscopy and NASBA. In comparison with microscopy NASBA detected much more gametocyte positive individuals. Moreover, NASBA showed a significant difference in gametocyte clearance in favour of AL compared to DP. The decline of parasitaemia was slower and persistence or development of gametocytes was significantly higher and longer at day 3, 7 and 14 in the DP group but after 28 days no difference could be observed between both treatment arms.\n\nABSTRACT.CONCLUSION:\nAlthough practical considerations could favour the use of one drug over another, the effect on gametocytogenesis should also be taken into account and studied further using molecular tools like NASBA. This also applies when a new drug is introduced.\n\nABSTRACT.TRIAL REGISTRATION:\nCurrent controlled trials ISRCTN36463274\n\nBODY.BACKGROUND:\nIn response to widespread resistance of Plasmodium falciparum parasites to the commonly used drugs chloroquine (CQ) and sulphadoxine-pyrimethamine (SP), many African countries recently adopted artemisinin-based combination therapy (ACT) as first-line treatment for uncomplicated malaria. The combination artemether-lumefantrine (AL) proved to be highly effective and well-tolerated in several studies in Africa [1-5]. Disadvantages of this drug combination are the twice-daily dosing and the fact that it should be administered with a fat-rich meal [6-8] or at least a cup of soya milk [2]. In Uganda, in an area of intense malaria transmission, recurrence of parasitaemia within 28 days occurred in 29% of AL treated patients, in 8.9% adjusted by genotyping, indicating recrudescence [4]. Another ACT, dihydroartemisinin combined with piperaquine (DP), which was originally developed in China, is increasingly used in Southeast Asia [9,10]. Piperaquine is an orally active bisquinoline with a half-life elimination time of 2.5–4 weeks [9]. The drug is structurally related to CQ, but still active against highly CQ-resistant P. falciparum strains [1,11]. 
This relatively inexpensive drug was well tolerated and highly effective in Southeast Asia [1,7,11-13] as was the case in two studies in Africa [4,14]. Consequently, both AL and DP are considered to be amongst the most promising artimisinin-based drugs [4,14]. Artemisinin-based drugs also act on gametocytes and thus on transmission, at least in low transmission areas [15-17]. In high transmission areas of Africa, not much information is yet available on gametocytaemia after ACT treatments and on possible influence on transmission. In a comparative study of AL and DP in Uganda, the appearance of gametocytes in those who did not have gametocytes at the start of treatment was lower from day 15 to day 42 of follow up in those treated with DP than in those treated with AL [4]. A limitation of this study was the fact that gametocytaemia was assessed by microscopic examination only. It has recently been shown that sub-microscopic gametocyte densities may significantly contribute to the transmission of malaria [18]. Adequate assessment of gametocytaemia is important. Quantitative nucleic sequence based amplification technique (QT-NASBA) has been shown to be much more sensitive for the detection of gametocytes than microscopy [3,18-20]. In this study, the post-treatment prevalence of gametocytes in children in Mbita, western Kenya was assessed, after treatment with AL and DP. Microscopy and QT-NASBA for the quantification of gametocytaemia were compared and effectiveness of both drugs regarding clinical symptoms, clearance of parasites and tolerability was assessed.\n\nBODY.MATERIALS AND METHODS.STUDY SITE AND POPULATION:\nThe study was conducted in Mbita, western Kenya, at the shores of Lake Victoria during the high malaria transmission season of April-July 2007. Mbita, is an area with highly variable transmission that depends on the local environmental circumstances that can support mosquito conditions. The EIR is calculated to be 6 infectious bites per person per month [21]. Children (6 months-12 years of age) visiting the out-patient clinic of the health centre and diagnosed with uncomplicated malaria were included after informed consent from parents or guardians. Inclusion criteria were: uncomplicated P. falciparum malaria with initial parasitaemia between 1,000 and < 200,000 parasites/μl blood, axillary temperature ≥ 37.5°C (measured with a digital thermometer) or a history of fever. Children with severe malaria, mixed infection or other underlying illness were excluded from the study. In total 146 children were recruited for the present study. Ethical approval for this study was obtained from appropriate local authorities and The Kenya Medical Research Institute (KEMRI, Nairobi, Kenya) Ethical Steering Committee (SSC protocol No 948). The trial was registered as an International Standard Randomized Controlled Trial at current controlled trials (ISRCTN36463274).\n\nBODY.MATERIALS AND METHODS.STUDY DESIGN AND TREATMENT:\nFollowing diagnosis (at day 0), the patients were randomly allocated to one of the two treatment groups following a computer generated randomization list. One group was assigned DP (Sigma-Tau, Italy) once per day for three days. One tablet of the study drug contained 20 mg of dihydroartemisinin and 160 mg of piperaquine (paediatric formulation). Treatment was according to body weight as follows: children between 4–7 kg received half a tablet per dose, those between 7–13 kg 1 tablet, 13–24 kg 2 tablets per dose and children between 24–35 kg 4 tablets. 
The other group was assigned to AL (Novartis Pharma, Switzerland). Each tablet contained 20 mg artemether and 120 mg lumefantrine. Patients received treatment according to bodyweight; i.e. children between 5–14 kg received one tablet per dose, those between 15–24 kg two tablets and those between 25–34 kg received three tablets per dose. Doses were given twice daily. All treatments were given with a glass of milk under direct supervision at the clinic or, for the 2nd dose of AL, at home.\n\nBODY.MATERIALS AND METHODS.OUTCOMES: EFFICACY:\nEfficacy was assessed using the WHO in vivo test with a follow-up period of 28 days [22]. At enrollment (day 0) a full clinical examination was performed; information was recorded on a case record form. At initial diagnosis (day 0) and during follow-up (day 1, 2, 3, 7, 14, and 28), finger prick blood samples were collected for microscopy, measurement of haemoglobin level and molecular analysis. Haemoglobin was measured with Hemocue 201+ analyser and cuvettes (HemoCue diagnostics B.V. Waarle, The Netherlands). Response to treatment was measured and defined according to WHO guidelines [22]. Patients showing complications or treatment failure were treated with appropriate supportive therapy. Children developing danger signs or severe malaria on day 1 or 2 of the study were withdrawn from the study, referred to the hospital, and given alternative treatment. Adverse events were recorded on the case record forms. An AE was defined as an unfavourable and unintended symptom, sign or disease. A serious adverse event (SAE) was defined as a symptom or sign that is temporally associated with the drugs administered to the patient that is life threatening or results in hospitalization, permanent and significant disability or death. SAE's were immediately reported to the ethical committee of KEMRI and the drug safety department of Sigma-Tau, Italy.\n\nBODY.MATERIALS AND METHODS.OUTCOMES: PARASITE CLEARANCE AND GAMETOCYTE DYNAMICS:\nParasite clearance and gametocyte dynamics were assessed microscopically as well as with quantitative nucleic acid sequence based amplification assay, QT-NASBA.\n\nBODY.MATERIALS AND METHODS.MICROSCOPY:\nGiemsa-stained thick and thin smears were prepared according to WHO guidelines. Two independent experienced microscopists, who were blinded to the treatment and clinical status of the patient, examined the smears for the presence of parasites and identified the observed parasite species. Parasitaemia was determined by counting the number of parasites against 200 leukocytes for the asexual stages (assuming that there are 8,000 leukocytes/μl blood). The presence of gametocytes was examined against 500 leukocytes.\n\nBODY.MATERIALS AND METHODS.QT-NASBA:\nFinger prick blood (50 ul) for NASBA analysis was collected on Whatman 903 filter paper (Whatman international Ltd. Maidston, United Kingdom) and air-dried at room temperature. Nucleic acid extraction was performed as previously described by Boom et al [23]. Real-time 18S rRNA QT-NASBA was applied to study asexual parasite clearance below microscopical threshold [20]. In order to quantify the number of parasites in blood, a 10 fold serial dilution of 106 to 10 in vitro cultured parasites/ml was used as reference and processed and analysed with NASBA. 
Furthermore, to assess prevalence of gametocytes below the detection limit of microscopy, QT-NASBA targeting Pfs25 mRNA as described by Schneider et al [20] was used on blot spots collected during follow-up.\n\nBODY.MATERIALS AND METHODS.GENOTYPING:\nIn order to discriminate between re-infection (RI) and recrudescence (RE), merozoite surface protein 1 and 2 (msp1 and msp2) and glutamate rich protein (GLURP) genotyping was performed as described by Snounou [24] on blood spots obtained at primary (day 0) and secondary infection (time point of re-occurrence). Blood spots were collected on Whatman 903 filter paper (Whatman international Ltd. Maidston, United Kingdom) and air-dried at room temperature for PCR analysis. DNA was isolated as described by Boom et al [23]. Molecular analysis was performed at Royal Tropical Institute, Amsterdam and was done blinded from the treatment that was given to the patients.\n\nBODY.MATERIALS AND METHODS.SAMPLE SIZE AND STATISTICAL ANALYSIS:\nThe aim of the study was to compare gametocytaemia after AL and DP and to compare assessment of gametocytaemia by microscopical examination versus QT-NASBA All data were entered in excel and analysed with SPSS for windows (version 12.0). Parasite densities were analysed after natural log-transformation. Where appropriate, proportions were compared with the χ2-test and means were compared with the one-way ANOVA or Student t-test. A simplified trapezoid area under the curve (AUC) analysis using gametocyte data from days 0, 3, 7, 14 and 28, as a surrogate for the infectiousness of the participants in the different treatment groups, was performed.\n\nBODY.RESULTS.PATIENT RECRUITMENT:\nIn total 1882 cases suspected of uncomplicated malaria were screened for eligibility into the study during an 8-week recruitment period in April and May 2007. 1,736 children were excluded because they did not meet the inclusion criteria (Figure 1). 146 patients fulfilling the inclusion criteria entered the study; 73 were randomly allocated to the DP arm and 73 to the AL arm. Both study groups were comparable at baseline for their demographical and clinical characteristics and parasite densities (Table 1). On completion of follow up (day 28) data of 134 patients (92%) were available for analysis. Table 1 Baseline characteristics of patients included in the study at the time of enrollment in the study Characteristic DHA-PQP (n = 73) ALN (n = 73) Sex ratio male:female 33:40 40:33 Age (in months), median (IQR) 60 (44) 52 (44) Body weight, mean kg (range) 17.62 (6–37) 17.32 (6–42) Temperature, mean °C ± SD 38.1 ± 0.99 37.8 ± 0.73 Haemoglobin mmol/L ± SD 6.33 ± 1.29 6.28 ± 1.27 Parasites/μl geometric mean (range) as determined by microscopy 12145 (1000–72640) 13379 (1080–72000) Figure 1Schematic representation and flowchart of the study. Twelve patients did not reach the study endpoint. Seven patients were lost during follow up, one was unable to take oral medication, one developed severe anaemia, one did not receive the proper drugs, one withdrew from the study and one patient died.\n\nBODY.RESULTS.TREATMENT OUTCOME:\nThere were no early treatment failures during the first three days of follow up. Only one patient in the AL arm had a recurrent parasitaemia (43,880 parasites/μl) at day 28 of follow up. Genotyping analysis revealed that this patient had a reinfection with P. falciparum. All other 133 patients who completed follow-up had an adequate clinical and parasitological response. 
After one day of treatment, over 90% of the patients had no microscopically detectable asexual parasites. In the AL group no parasites could be detected with microscopy in any of the patients at day two. One patient was still microscopically positive at day two in the DP group with 40 parasites/μl, but this patient was also microscopically negative at day 3. The parasite reduction ratios at 48 hours reproduction cycle (parasite count on admission/parasite count at 48 hours) was 8.96 * 105 at 48 hours for the AL treatment and 2.06 * 104 at 48 hours for the DP treatment. NASBA was also applied to monitor parasite dynamics below sub-microscopical level. Humidity in some of the filter papers degraded the RNA in the blood spots of some of the samples. This led to several extraction failures. In order to have a clear picture of parasite dynamics only those series with a full range of follow-up samples, i.e. 56 DP and 54 AL treated patients, were analysed. Both treatment arms showed a steep decline in parasitaemia from the day of enrollment (day 0) to day 1; 62% reduction after DP treatment and 89% reduction in the AL arm. At day 2, the level of parasitaemia was reduced to 1.2% in the AL group and 2.75% in the DP group.\n\nBODY.RESULTS.HB CONVALESCENCE, FEVER CLEARANCE AND ADVERSE EVENTS:\nAt baseline Hb levels in both treatment groups were comparable (Table 1). At day 28 all groups had a significant increase of Hb however no significant difference between the treatments on the Hb convalescence was found. Final mean Hb levels were 7.15 mmol/l ± 1.07 for the DP treatment group and 6.79 mmol/l ± 1.24 for the AL group. A possible influence of anaemia on gametocyte carriage at enrollment was not observed in the present study (p > 0.05). Fever clearance was defined as the time from receiving the assigned treatment to the time a normal body temperature was recorded (≤ 37.5°C), in study cases who presented with fever. Fever clearance was rapid in both study groups. On day 1, 10 cases (13.7%) in the DP group presented with fever and six cases (8.2%) were observed in the AL group. In the DP group fever was observed on day 2 in four cases (5.5%) and three cases (4.1%) on day 3. In the AL group cases with fever also presented on day 2 (12 study subjects, 16.4%) and on day 3 (two individuals, 2.7%). Fever was not observed during follow-up after day 7, with the exception of the child that presented with a P. falciparum reinfection on day 28 in the AL group and the child that developed broncho-pneumonia (case presented below). Furthermore the presence of fever at recruitment was no predictor for gametocyte carriage (p > 0.05) Most adverse events were mild, self limiting and consistent with symptoms of malaria. There was no significant difference between the two study groups (Table 2). One patient died. The child (63 months) had been ill for two weeks prior to presentation at the clinic. Plasmodium falciparum infection with parasitaemia of 20,120 parasites/μl was diagnosed. There was a fever (38.6°C), but there were no other complaints and no signs of severe anaemia (Hb: 6.4 mmol/L). On day 3, there were no signs of illness. On day 7, the child presented with fever (38.6°C), cough and complaints of anorexia. There was no history of significant illness or allergies. After examination (microscopy was negative for malaria parasites), broncho-pneumonia was diagnosed and the child was treated with oral phenoxymethylpenicillin for five days and Paracetamol syrup. 
On day 14, the child did not attend the follow-up visit, the parents reported that the child died a day before in a local health post. The event was assessed as unrelated to the study drug. Autopsy was not performed. Table 2 Summary of adverse events recorded during the study Adverse event DHA-PQP ALN P- value Headache 43 (58.9%) 37 (50.7%) 0.318 Abdominal pain 25 (32.4%) 26 (35.6%) 0.862 Weakness 19 (26.0%) 30 (41.1%) 0.035 Anorexia 8 (10.9%) 10 (13.7%) 0.439 Diarrhea 9 (12.3%) 7 (9.6%) 0.785 Cough 16 (21.9%) 17 (23.3%) 0.843 Vomiting 11 (15.1%) 9 (12.3%) 0.806 Pruritis 4 (5.5%) 3 (4.1%) 0.698 \n\nBODY.RESULTS.GAMETOCYTE DYNAMICS:\nThe presence of gametocytes in clinical samples was assessed by microscopy and NASBA and is presented in Table 3. At the start of the study, three patients in the DP arm (4.5%) and six patients in the AL arm (9.0%) carried microscopically detectable gametocytes. Microscopical follow-up of the presence of gametocytes during the whole study period revealed that in total 39 samples (distributed over 13 patients) in the DP arm and 18 samples (distributed over nine patients) in the AL arm carried gametocytes (not significantly different). It was observed that the microscopical detection of gametocytes in blood slides during the study was subjected to fluctuations (for example a case positive on day 0, negative on day 1 and subsequently again positive at day 3), which is probably due to the fact that gametocytes circulate at low levels. However, on day 7, three patients in the DP group and 1 in the AL group showed gametocyte positive slides for the first time. On day 28, in none of the cases gametocytes were observed by microscopy. There was no difference between children older than 60 months and younger as regards carriage of gametocytes and density. Table 3 Occurrence of gametocytes as detected by microscopy or NASBA in the different study groups at the start of the study and during subsequent follow-up. Gametocyte positive samples Day 0 Day 1 Day 2 Day 3 Day 7 Day 14 Day 28 DP group microscopy (n = 67) Total number of positive cases 3 7 7 10 7 5 0 Number of new cases observed 5 2 0 3 0 0 DP group Nasba (n = 56) Total number of positive cases 22 34 a 33 a 17 8 Number of new cases observed 13 8 5 0 AL group microscopy (n = 67) Total number of positive cases 6 3 3 3 2 1 0 Number of new cases observed 1 1 0 1 0 0 AL group Nasba (n = 54) Total number of positive cases 21 20 11 12 5 Number of new cases observed 6 4 6 0 a Number of gametocyte carriers detected with NASBA is significantly higher in the DP treated group compared to the AL treated group. NASBA analysis on 56 DP treated subjects and 54 AL treated subjects detected strikingly more gametocyte carriers at the start of the study compared to microscopy; i.e. 22 study subjects in the DP arm (39.3%) and 21 in the AL (38.9%) were harbouring gametocytes before. The pfs25 NASBA revealed that 34 cases (60.7%) were gametocyte positive in the DP group on day 3, of which 13 cases were newly identified compared to day 0. In contrast, a significantly lower number of patients (20; 37%) were gametocyte positive in the AL treatment group on day 3, of which six new cases. This trend was also observed on day 7: the DP arm had 33 (58.9%) gametocyte positive samples, whereas the AL treated group had a significantly lower number of gametocyte positive samples (11, 20.3%). 
However on day 14 (AL: 12 positive [22.2%], six new; DP: 17 [30.4%], five new) and day 28 (AL: 5 positive [9.3%], no new cases; DP: 8 [14.3%], no new cases) of follow-up no significant difference in gametocyte carriage was observed between both treatment groups. The AUC (day 0–28) of the two treatment groups was calculated to be 20.0 infectious persons/day for the DP treatment arm and 10.5 infectious persons/day for the AL treatment arm. Stratifying the data for age under and above 60 months showed no difference in either of the groups.\n\nBODY.DISCUSSION:\nSeveral studies have analysed the efficacy and tolerability of AL and DP and all show very good results [1,4,6,7,12,14,25]. In the present study, the two drugs showed to be similar with respect to effectiveness and tolerability compared to other studies. However, most of these studies have a follow up of 42 days which makes a direct comparison of the results difficult. No adverse events other than those related to malaria itself were observed in the current study, which is in line with other reports. In the present study, all children experienced haemoglobin convalescence without difference between the two treatment arms, in contrast to the study of Kamya et al, who found a greater increase in the DP treated patients [4]. The difference in follow-up time and the numbers of patients included in both studies may be responsible for this difference. Further studies with comparable study length should be done to give an answer to these discrepancies. The effects on gametocytaemia and possibly malaria transmission deserve further study. Whereas asexual parasites were cleared in three days after the initiation of the two treatment schedules, gametocytaemia appeared different when assessed by microscopy as well as with NASBA. Gametocytes were present in low numbers throughout follow-up in both study groups. Artemisinin derivatives have in general a negative effect on gametocyte development and survival and thus influence malaria transmission, at least in low transmission areas [15-17]. In this study, the actual infectiousness of the remaining gametocyte populations in both treatment arms was not assessed; the presence of gametocytes does not necessarily mean that they actually contribute to transmission. Several studies have shown that gametocytes persist in a large population of previously infected and treated children [5,19,26,27]. A large proportion of these carriers has a parasite load below microscopical detection limit, a load that can be detected with molecular assays like NASBA. Patients with submicroscopic parasite densities may still be infectious to mosquitoes and may contribute to transmission [14,19], as confirmed with membrane feeding experiments [18,28]. Studies that include reduction of transmission as a component of efficacy of drugs, thus need to incorporate highly sensitive molecular assays to reliably assess gametocyte densities. The present study showed a limited effect of DP on gametocyte development in comparison with AL when a sensitive tool like NASBA is used for gametocyte detection, which could limit the usefulness of DP to areas with low transmission but this finding should be further investigated in larger studies in different study sites with different transmission intensities. It is not clear if plasma concentrations of dihydroartemisinin in the blood could play a role. Dihyrodartemisinin is the major and the active metabolite of artemether. 
So far, no studies have been performed that compare the plasma levels of dihydroartemisinin when given as such or after administration of artemether and subsequent metabolisation. This should be further investigated together with effect on gametocytogenesis, which should incorporate a sensitive detection tool for gametocytes such as NASBA. The effect that drugs can have on gametocyte clearance as measured with NASBA could have some implications for the introduction drugs and especially the introduction of new drugs. This study showed that with sensitive detection tools a difference in parasite clearance can be observed but these results should be confirmed in larger studies and in other study areas with different malaria transmission intensities. Transmission intensity varies significantly in the different African countries and within a country high and low transmission areas can often be identified. Malaria endemic countries generally have a national malaria drug policy for the whole country. Although this is logical from a practical and logistical point of view, it may not be the best approach for effective malaria control. It could, therefore, be more effective if a country develops specific drug policies to suit regional instead of national requirements.\n\nBODY.COMPETING INTERESTS:\nThe authors declare not to have a conflict of interest. The organizations that provided financial support had no influence in the design, the actual field and laboratory work, analysis and writing.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nPM was involved in the conception of the study, carried out the molecular analysis of the samples, carried out the statistical analysis of the results and drafted the manuscript. PS coordinated the study in the field and was responsible for the clinical examination and treatment of the participants. SvA collected clinical data and blood samples in the field and did the clinical data analysis. SO coordinated the study in the field an arranged the logistics in the field. IV carried out the NASBA assays and the subsequent analysis. HS was involved in the design of the study and contributed to the drafting of the manuscript. PK as involved in the design of the study, critically read and improved the manuscript. All authors read and approved the final manuscript\n\n**Question:** Compared to dihydroartemisinin-piperaquine (AP) what was the result of arthemeter-lumefantrine (AL) on Gametocyte dynamics?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference |
906 |
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: A dose-ranging study of the bronchodilator effects of abediterol (LAS100977), a long-acting β\n\n ABSTRACT.BACKGROUND:\nLong-acting β2-adrenergic agonists (LABAs) are recommended in combination with inhaled corticosteroids (ICSs) for asthma management. Abediterol is a novel, selective, potent, once-daily LABA in development for treatment of asthma and chronic obstructive pulmonary disease. This study aimed to determine abediterol doses with similar peak bronchodilatory effect to salbutamol 400 μg, and duration of action compatible with once-daily dosing in patients with persistent, stable asthma.\n\nABSTRACT.METHODS:\nThis was a Phase II, randomized, double-blind, double-dummy, crossover, placebo-controlled, dose-ranging study (ClinicalTrials.gov NCT01425801) in 62 patients with mild-to-moderate asthma who were also receiving an ICS. Patients received single doses of abediterol 0.313, 0.625, 1.25, or 2.5 μg, salbutamol 400 μg, or placebo in the morning. Spirometry was performed up to 36 h post-dose; safety and tolerability were assessed throughout the study. The primary endpoint was change from baseline in peak forced expiratory volume in 1 s (FEV1). Additional endpoints included trough FEV1, normalized area under the FEV1 curve (FEV1 AUC) up to 24 h post-dose, and peak and trough forced vital capacity (FVC).\n\nABSTRACT.RESULTS:\nAbediterol produced dose-dependent improvements in peak FEV1 from baseline compared with placebo, from 0.274 (95% CI 0.221, 0.327) to 0.405 L (95% CI 0.353, 0.458) for abediterol 0.313 to 2.5 μg, respectively (p < 0.0001 all doses). Abediterol 0.625, 1.25, and 2.5 μg had similar magnitude of peak FEV1 effect to salbutamol. Dose-dependent changes from baseline in trough FEV1 versus placebo were 0.219 (95% CI 0.136, 0.302) to 0.400 L (95% CI 0.317, 0.483) for abediterol 0.313 to 2.5 μg, respectively (p < 0.0001). All abediterol doses achieved significant improvements versus placebo in FEV1 AUC 0–6, 0–12, and 0–24 h, and peak and trough FVC (p < 0.05). Less than 10% of patients experienced treatment-related adverse events for each dose of abediterol; most were mild to moderate in intensity and the most common were headache and nasopharyngitis. There were no clinically relevant changes in heart rate.\n\nABSTRACT.CONCLUSIONS:\nAbediterol 0.625–2.5 μg provided dose-dependent, clinically and statistically significant bronchodilation versus placebo in patients with asthma, with a peak effect similar to salbutamol and duration of action compatible with once-daily dosing. 
All doses of abediterol were well tolerated.\n\nABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL:\nThe online version of this article (doi:10.1186/1471-2466-14-176) contains supplementary material, which is available to authorized users.\n\nBODY.BACKGROUND:\nThe use of β2-adrenergic agonist bronchodilators in combination with inhaled corticosteroids (ICSs) to manage the symptoms of asthma is recommended by Global Initiative for Asthma (GINA) guidelines [1]. Currently licensed bronchodilators include salbutamol, a short-acting β2-adrenergic agonist (SABA) administered as required [2], salmeterol and formoterol, two long-acting β2-adrenergic agonists (LABAs) administered twice daily in combination with an ICS [3, 4], and the recently licensed vilanterol, a LABA administered once daily in fixed-dose combination with the ICS fluticasone furoate [5]. There is ongoing interest to develop once-daily medications to further simplify treatment regimens and improve patient compliance [6]. Abediterol is a novel, once-daily LABA in clinical development as a fixed-dose combination with an ICS for the treatment of asthma and chronic obstructive pulmonary disease (COPD) [7]. Abediterol displays high affinity for β2-adrenoceptors, plus a higher functional selectivity for β2-adrenoceptors over β1-adrenoceptors than formoterol, indacaterol, vilanterol and olodaterol in cellular models [7–9]. The potency of abediterol is also higher than salmeterol and indacaterol in isolated human bronchi and, in addition to these two LABAs, vilanterol and olodaterol in animal models [7, 10]. Abediterol has a rapid onset and prolonged duration of action [7, 10]. A Phase II clinical trial in patients with asthma provided results that were consistent with the preclinical data; abediterol resulted in significant improvements in lung function at 5 min post-dose that were greater than improvements seen with salmeterol, and significant bronchodilation was sustained up to 24 h post-dose [11]. These characteristics make abediterol an ideal candidate for once-daily dosing. Moreover, abediterol has a favorable cardiovascular safety and tolerability profile: in anesthetized dogs, abediterol had a lesser effect on heart rate compared with salmeterol, formoterol, and indacaterol at concentrations that produced comparable bronchodilation [7, 12]. In early phase clinical trials, doses of abediterol ≤10 μg were safe and well tolerated and did not demonstrate any clinically relevant effects on heart rate in healthy subjects [13], patients with asthma [14], or patients with COPD [15]. Drug regulatory authorities have concerns that excessively high LABA doses may cause unacceptable side effects [16]. The trough forced expiratory volume in 1 s (FEV1), measured 24 h after dosing, is an important endpoint to assess once-daily LABA efficacy [16]. The maximum effect achieved soon after dosing, otherwise known as the 'peak' effect, is also of interest for efficacy and safety reasons. A very high peak effect may indicate high levels of acute drug exposure that could predispose patients to β-agonist-mediated side effects, such as cardiovascular adverse events, when receiving chronic treatment [17]. Moreover, the ideal pharmacologic characteristic of a once-daily LABA is a low 'peak to trough ratio', ensuring relatively stable lung function throughout the day [18]. The peak effect of salbutamol is well characterized, and causes acute bronchodilation without safety concerns. 
The bronchodilator response following single-dose administration of salbutamol (4 x 100 μg) is commonly used as one of the diagnostic tests for asthma, as recommended by a joint European Respiratory Society (ERS)/American Thoracic Society (ATS) taskforce [19]. This effect is, therefore, a reasonable benchmark for a novel LABA to achieve and this degree of acute bronchodilation is considered clinically meaningful. A major regulatory agency recommended that the peak effects of abediterol should be compared to those of salbutamol to guide dose selection for further clinical trials in order to avoid a dose that induces a much greater degree of bronchodilation than is necessary for clinical effectiveness, which could increase the potential for safety concerns during chronic treatment. We report a placebo-controlled, dose-ranging, crossover study comparing the acute bronchodilator effects of abediterol to salbutamol in patients with persistent stable asthma. The primary aim of the study was to investigate the peak bronchodilatory effect of abediterol versus placebo and to confirm it was in the same range as that of salbutamol. The full profile of lung function over 24 h, including the trough FEV1, was also measured. The goal of this study was to enable the selection of abediterol doses for future long-term clinical trials, based on a peak effect similar to that of salbutamol 400 μg metered dose inhaler (MDI), coupled with a trough effect compatible with a 24-h duration of action. This trial design was a specific recommendation by a major regulatory agency.\n\nBODY.METHODS.STUDY DESIGN:\nThis was a Phase IIa, randomized, double-blind, double-dummy, 6-way crossover, single-dose, multicenter, dose-ranging study conducted between August 2011 and January 2012. The study was conducted in two sites in the UK and seven in Germany. The study protocol was approved by an independent ethics committee in each country (see Additional file 1) and complied with the Declaration of Helsinki and the International Conference on Harmonisation and Good Clinical Practice guidelines. Patients provided written, informed consent. Following a screening visit and a 12–16 day run-in period to assess clinical stability, participants were randomized to 1 of 6 treatment sequences according to a balanced Williams design for crossover studies (1:1:1:1:1:1). Block randomization was performed by the Sponsor, using a computer-generated schedule to assign a treatment sequence to each patient randomization number. The block size and randomization schedule were not communicated to investigators. Patients received abediterol 0.313, 0.625, 1.25, or 2.5 μg, salbutamol 400 μg (administered as four inhalations with 100 μg per inhalation), or placebo (at 09:00 h ±1 h) at each visit, with a washout period of 7–14 days. A follow-up telephone call to assess safety was made approximately 2 weeks following the final treatment visit or premature discontinuation. Abediterol and placebo to abediterol were administered via a multidose dry powder inhaler (Genuair®). Salbutamol and placebo to salbutamol were administered via a pressurized MDI (VentolinTM EvohalerTM). In order to preserve study blinding, independent, trained personnel prepared the medication, delivered the device to patients, and collected the device following inhalation.\n\nBODY.METHODS.PATIENTS:\nEligible participants were men or women aged 18–70 years with a clinical diagnosis of persistent asthma (according to GINA guidelines) for ≥6 months prior to screening. 
Eligible patients had an FEV1 >60% and ≤85% of the predicted normal value, with an FEV1 reversibility ≥12%, and an absolute increase in FEV1 of ≥200 mL over their baseline value following inhalation of salbutamol 400 μg at screening. Comparable FEV1 measurements were required at screening and pre-dose at each visit in order to ensure there were no carry-over effects from previous treatment periods (limit of variability ±20%). At enrollment, all patients were using only an ICS to control their asthma, and had been on a stable dose and regimen for at least 4 weeks prior to screening, up to the equivalent of 1600 μg/day of beclometasone dipropionate, and had not previously been exposed to abediterol. Exclusion criteria included: a smoking history within the past 6 months or >10 pack-years; presence or history of relevant pulmonary disease or any other clinically relevant disease or abnormality as judged by the investigator; difficult-to-treat asthma; any respiratory tract infection ≤6 weeks prior to screening; hospitalization or emergency room treatment for asthma ≤3 months prior to screening; treatment with β2-antagonists; and positive laboratory tests for illicit drugs. The use of anticholinergics, SABAs (other than salbutamol), LABAs, continuous oral or parenteral corticosteroids, methyl-xanthines, cromolyn sodium, nedocromil, leukotriene modifiers, anti-immunoglobulin E, β1-blocking agents, and other investigational drugs was prohibited during the study. Patients using prohibited medications underwent up to 6 weeks' washout before the screening visit. Salbutamol pressurized MDI (100 μg/puff) was the only reliever medication permitted and was used on an as-needed basis and avoided 6 h prior to and during a treatment visit unless absolutely necessary.\n\nBODY.METHODS.EFFICACY ASSESSMENTS:\nPulmonary function tests were conducted at screening, pre-dose, and at 0.25, 0.5, 1, 2, 3, 4, 6, 8, 12, 14, 23, 24, and 36 h post-dose at each treatment visit using standardized spirometers at each investigational site (eResearchTechnology, Philadelphia, PA, USA). Prior to the first spirometry measure, the technician demonstrated the procedure and allowed the patient two practice attempts. At each study time point, the maneuvers were repeated until three technically adequate measurements were made according to ERS/ATS guidelines for acceptability and repeatability [20], with a maximum of eight attempts allowed.\n\nBODY.METHODS.SAFETY AND TOLERABILITY ASSESSMENTS:\nTreatment-emergent adverse events (TEAEs), including serious adverse events, were recorded throughout the study and follow-up period. A physical examination was performed at screening and at 36 h post-dose following the last treatment period (Visit 6), and blood pressure and 12-lead electrocardiogram measures were made at screening, pre-dose, and at 0.75, 2, 6, 24 h post-dose for each treatment period, and 36 h post-dose at Visit 6. Clinical laboratory tests (blood chemistry, hematology, urinalysis, and a serum pregnancy test) were performed at screening and at 36 h post-dose at Visit 6. Blood glucose and serum potassium levels were assessed at screening, pre-dose, and at 4, 24 h post-dose for each treatment period, and 36 h post-dose at Visit 6.\n\nBODY.METHODS.STATISTICS:\nThe primary efficacy endpoint was change from baseline in peak FEV1 (defined as the highest FEV1 value observed within 4 h of the administration of the investigational medicinal product [IMP]) on Day 1. 
Secondary endpoints included change from baseline in peak forced vital capacity (FVC) on Day 1, trough FEV1 (defined as the highest average of the maneuvers performed at 23–24 h post IMP administration) and FVC on Day 2, change from baseline in FEV1 and FVC at each time point up to 36 h post-dose, and change from baseline in normalized FEV1 area under the curve (AUC; defined as the area between baseline FEV1 and the FEV1 curve, from 0 to × hours divided by × hours) at 0–6, 0–12, and 0–24 h post-dose. In accordance with the GINA guidelines, a change in FEV1 of 200 mL was considered clinically relevant [1]. A population of 48 patients was necessary to achieve 80% power to detect a 150 mL change from baseline in peak FEV1 for any dose of abediterol compared with placebo. Assuming a 15% withdrawal rate, the protocol required at least 54 patients to be randomized. All reported efficacy assessments were analyzed in the intention-to-treat population, defined as all randomized patients who took at least one dose of the IMP (placebo, abediterol, or salbutamol) and had at least one baseline and one post-dose FEV1 value for at least one treatment visit. Safety analyses are reported in the safety population, which consisted of all randomized patients who took at least one dose of the IMP. All statistical analyses were performed using Statistical Analysis System (SAS Institute Inc., Cary, NC, USA) version 9.1.3 software. Efficacy variables were analyzed using an analysis of covariance model for crossover designs, with fixed-effect factors of sequence, treatment, and period, patient within sequence as a random effect, and baseline FEV1 or FVC as a covariate, as appropriate. Between-treatments comparisons were performed by means of contrasts on the treatment factor. The difference between treatments was estimated by the difference between the least squares means. Statistical comparisons between active treatments and placebo were two-sided hypothesis tests with a significance level of 0.05.\n\nBODY.RESULTS.STUDY POPULATION:\nA total of 115 patients were screened, with 62 patients randomized (40/62 [64.5%] male, mean age 40.9 ± 11.2 years). Patient demographics are presented in Table 1. A diagram of patient flow is presented in Figure 1. All 62 patients were included in the intention-to-treat and safety populations. One protocol violation occurred when a patient was inadvertently given the treatment scheduled for Visit 6 at Visit 5, and vice versa. Data for this patient were analyzed according to the treatments received. Four patients withdrew or were discontinued from the study: two due to inability to comply with the study schedule, one who did not fulfill stability criteria, and one for unknown reasons (patient did not return for the premature discontinuation visit).Table 1\nPatient demographics at baseline: safety population\nPatients (N = 62)Age, mean no. years (SD)40.9 (11.2)Male gender, n (%)40 (64.5)Race, n (%) White58 (93.5) Other4 (6.5)Cigarette smoking status, n (%) Former smoker21 (33.9) Never smoked41 (66.1)Smoking history, mean no. pack-years (SD)4.6 (3.3)Asthma duration, mean no. 
years (SD)25.5 (13.2)Asthma severity, n (%) 60% ≤ FEV1 < 80% of predicted46 (74.2) FEV1 ≥80% of predicted16 (25.8)Bronchial reversibilitya, mean % (SD)21.3 (8.2)FEV1 absolute reversibilityb, mean L (SD)0.568 (0.261)\nacalculated as 100 x (FEV1 [post-bronchodilator] – FEV1 [pre-bronchodilator]/FEV1 [pre-bronchodilator]); bcalculated as FEV1 (post-bronchodilator) – FEV1 (pre-bronchodilator).FEV1, forced expiratory volume in 1 s; SD, standard deviation.Figure 1\nDiagram of patient flow.\n\n\nBODY.RESULTS.EFFICACY:\nMean baseline FEV1 values were similar across all treatment periods, ranging from 2.63 L to 2.69 L. The full profile of lung function after each treatment is shown in Figure 2. The onset of action of abediterol was rapid, with significant bronchodilation compared with placebo observed at the first study time point (15 min) post-dose (p < 0.0001 for all doses of abediterol versus placebo, Figure 2a). All doses of abediterol produced significantly greater improvements in peak FEV1 from baseline compared with placebo (p < 0.0001 for all doses, Table 2), and the magnitude of effect of abediterol 0.625, 1.25, and 2.5 μg was not significantly different from salbutamol 400 μg (Figure 3, Table 3). Least squares means differences in change from baseline in peak FEV1 versus placebo were 0.274, 0.322, 0.371, and 0.405 L for abediterol 0.313, 0.625, 1.25, and 2.5 μg, respectively, and 0.353 L for salbutamol 400 μg (Table 3). Median time to peak FEV1 was 3 h post-dose for abediterol (all doses), 2 h post-dose for placebo, and 1 h post-dose for salbutamol 400 μg.Figure 2\nChange from baseline in FEV\n1\nover time. Data are presented as least squares means with 95% confidence intervals: a) p < 0.0001 versus placebo for salbutamol and all doses of abediterol at all time points; b) p < 0.0001 for all doses of abediterol 0–24 h post-dose and abediterol 1.25 and 2.5 μg 36 h post-dose versus placebo, p = 0.002 and p = 0.01 for abediterol 0.313 and 0.625 μg, respectively, at 36 h post-dose versus placebo. FEV1, forced expiratory volume in 1 s. Table 2 \n Change from baseline in pulmonary function parameters (L): intention-to-treat population \n Placebo N = 59 Salbutamol 400 μg N = 58 Abediterol 0.313 μg N = 60 Abediterol 0.625 μg N = 60 Abediterol 1.25 μg N = 60 Abediterol 2.5 μg N = 61 Peak FEV 1 \n 0.202 (0.045) 0.555**** (0.045) 0.477**** †† (0.045) 0.524**** (0.045) 0.573**** (0.045) 0.608**** (0.045) Trough FEV 1 \n a \n -0.054 (0.039) -0.076 (0.039) 0.166**** †††† (0.039) 0.205**** †††† (0.039) 0.278**** †††† (0.039) 0.346**** †††† (0.039) Normalized FEV 1 AUC 0–6 \n 0.088 (0.039) 0.347**** (0.039) 0.354**** (0.038) 0.403**** † (0.038) 0.461**** †††† (0.038) 0.496**** †††† (0.038) Normalized FEV 1 AUC 0–12 \n 0.063 (0.040) 0.219**** (0.040) 0.348**** †††† (0.039) 0.393**** †††† (0.039) 0.455**** †††† (0.039) 0.496**** †††† (0.040) Normalized FEV 1 AUC 0–24 \n 0.007 (0.039) 0.100** (0.039) 0.282**** †††† (0.039) 0.321**** †††† (0.038) 0.390**** †††† (0.038) 0.446**** †††† (0.039) Peak FVC 0.222 (0.034) 0.361**** (0.034) 0.317** (0.034) 0.329*** (0.034) 0.361**** (0.034) 0.380**** (0.034) Trough FVC a \n -0.028 (0.035) -0.065 (0.035) 0.094** †††† (0.035) 0.050* †† (0.035) 0.111*** †††† (0.035) 0.120*** †††† (0.035) Data are least squares means (standard error). \n a trough values are the mean of the 23 and 24 h post-dose measurements. *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001 versus placebo; † p < 0.05, †† p < 0.01, †††† p < 0.0001 versus salbutamol 400 μg (analysis of covariance). 
AUC, area under the FEV 1 curve; FEV 1 , forced expiratory volume in 1 s; FVC, forced vital capacity. Figure 3\nChange from baseline in peak FEV\n1\nat Day 1. Data are presented as least squares means ± standard error: p < 0.0001 versus placebo for all doses of abediterol and salbutamol 400 μg; **p < 0.01, ***p < 0.001, ****p < 0.0001 (analysis of covariance). FEV1, forced expiratory volume in 1 s. Table 3 \n Comparison of peak FEV \n 1 \n for abediterol versus placebo and abediterol versus salbutamol \n Least–squares mean difference (standard error) 95% confidence interval Abediterol 2.5 μg vs. placebo 0.405 (0.027) 0.353, 0.458 Abediterol 1.25 μg vs. placebo 0.371 (0.027) 0.318, 0.424 Abediterol 0.625 μg vs. placebo 0.322 (0.027) 0.269, 0.375 Abediterol 0.313 μg vs. placebo 0.274 (0.027) 0.221, 0.327 Salbutamol 400 μg vs. placebo 0.353 (0.027) 0.299, 0.406 Abediterol 2.5 μg vs. salbutamol 400 μg 0.0529 (0.0270) -0.0002, 0.1061 Abediterol 1.25 μg vs. salbutamol 400 μg 0.0181 (0.0269) -0.0350, 0.0711 Abediterol 0.625 μg vs. salbutamol 400 μg -0.0308 (0.0269) -0.839, 0.0223 Abediterol 0.313 μg vs. salbutamol 400 μg -0.0783 (0.0269) -0.1313, -0.0253 Data included for comparisons of all doses of abediterol with placebo and with salbutamol. Statistically significant increases in FEV1 were sustained at 24 h post-dose compared with placebo (p < 0.0001 for all doses of abediterol 0–24 h post-dose, Figure 2b). Abediterol significantly improved trough FEV1 from baseline compared with placebo (p < 0.0001 for all doses; Figure 4). Least squares means differences in change from baseline in trough FEV1 versus placebo were 0.219 L (95% CI 0.136, 0.302), 0.259 L (95% CI 0.176, 0.342), 0.332 L (95% CI 0.249, 0.415), and 0.400 L (95% CI 0.317, 0.483) for abediterol 0.313, 0.625, 1.25, and 2.5 μg, respectively (Table 2). The effects of salbutamol on FEV1 were not sustained after 6 h post-dose, as expected for a SABA (Figure 2b).Figure 4\nChange from baseline in trough FEV\n1\nat Day 2. Data are presented as least squares means ± standard error: p < 0.0001 versus placebo for all doses of abediterol; **p < 0.01, ***p < 0.001, ****p < 0.0001 (analysis of covariance). FEV1, forced expiratory volume in 1 s. Change from baseline in normalized FEV1 AUC0–6, FEV1 AUC0–12, and FEV1 AUC0–24 was significantly higher for all doses of abediterol versus placebo (p < 0.0001 for all, Table 2), with dose-related effects observed. The change from baseline in normalized FEV1 AUC0–6 was significantly greater for abediterol 0.625–2.5 μg versus salbutamol 400 μg (p < 0.05, Table 2).\n\nBODY.RESULTS.EFFICACY.FVC:\nMean baseline FVC values were comparable across treatment periods, ranging from 4.23 L to 4.30 L. Abediterol produced significantly greater change from baseline in peak FVC compared with placebo (p < 0.01 for all doses, Table 2). The change from baseline in peak FVC was similar to that observed with salbutamol. Additionally, abediterol significantly improved trough FVC from baseline compared with placebo (p < 0.05 for all doses, Table 2). All doses of abediterol significantly (p < 0.05) increased FVC versus placebo up to 24 h post-dose except for the 0.625 μg dose at 12 and 23 h.\n\nBODY.RESULTS.SAFETY AND TOLERABILITY:\nThe percentage of patients experiencing TEAEs was similar across all treatment groups (Table 4). Most TEAEs were mild to moderate in intensity, no patient discontinued the study due to a TEAE, and there were no serious adverse events or deaths. 
The most common TEAEs were headache and nasopharyngitis, reported by 41.9 and 27.4% of patients, respectively. The percentage of patients that experienced at least one treatment-related TEAE was <10% for all doses of abediterol, and was similar to that of salbutamol and placebo. No clinically significant changes in serum glucose or potassium concentrations were observed over time in any of the treatments groups (Table 5). Minor variations in heart rate over time were observed for all treatment groups, including placebo; none were considered to be dose related or clinically relevant (Table 5). There were no clinically relevant changes in clinical laboratory tests, blood pressure, or electrocardiogram parameters.Table 4\nTEAEs occurring in ≥2% of patients in any treatment group: safety population\nNumber (%) of patients reporting TEAEsPlacebo N = 59Salbutamol 400 μg N = 58Abediterol 0.313 μg N = 60Abediterol 0.625 μg N = 60Abediterol 1.25 μg N = 60Abediterol 2.5 μg N = 61Total N = 62Any TEAE16 (27.1)12 (20.7)18 (30.0)16 (26.7)18 (30.0)23 (37.7)51 (82.3) Headache7 (11.9)6 (10.3)4 (6.7)7 (11.7)8 (13.3)7 (11.5)26 (41.9) Nasopharyngitis1 (1.7)08 (13.3)3 (5.0)4 (6.7)2 (3.3)17 (27.4) Chest discomfort2 (3.4)01 (1.7)002 (3.3)5 (8.1) Wheezing1 (1.7)2 (3.4)1 (1.7)01 (1.7)04 (6.5) Oropharyngeal pain01 (1.7)01 (1.7)03 (4.9)4 (6.5) Cough2 (3.4)001 (1.7)003 (4.8)TEAE, treatment-emergent adverse event.Table 5\nChange in clinical safety parameters from pre-dose values following treatment: safety population\nPlacebo N = 59Salbutamol 400 μg N = 58Abediterol 0.313 μg N = 60Abediterol 0.625 μg N = 60Abediterol 1.25 μg N = 60Abediterol 2.5 μg N = 61Serum glucose (mmol/L) 4 hours post-dose-0.37 (0.56)-0.24 (0.55)-0.19 (0.71)-0.27 (0.70)-0.16 (0.94)-0.23 (0.74) 24 hours post-dose0.01 (0.38)0.00 (0.33)0.13 (0.51)0.06 (0.62)0.11 (0.45)0.15 (0.59)Serum potassium (mmol/L) 4 hours post-dose-0.01 (0.36)-0.09 (0.38)-0.01 (0.35)0.10 (0.37)0.02 (0.36)-0.04 (0.35) 24 hours post-dose-0.02 (0.37)0.04 (0.34)0.05 (0.33)0.15 (0.32)0.04 (0.28)0.05 (0.38)Heart rate (bpm) 0.75 h post-dose-0.6 (9.1)2.0 (10.1)-0.7 (6.1)-0.8 (7.5)-1.5 (9.0)-0.8 (6.2) 2 h post-dose-3.4 (8.7)-1.0 (8.0)-2.0 (6.6)-3.4 (8.1)-4.3 (8.4)-3.2 (6.7) 6 h post-dose5.1 (10.2)7.2 (7.7)6.2 (7.1)6.2 (8.0)6.4 (9.8)5.3 (7.2) 24 h post-dose-2.1 (9.8)-1.9 (8.0)-1.6 (6.7)-0.8 (8.4)-0.9 (8.7)-0.6 (7.6)Data are mean (standard deviation).bpm, beats per minute.\n\nBODY.DISCUSSION:\nIn this study, abediterol 0.313, 0.625, 1.25, and 2.5 μg provided clinically and statistically significant improvements in peak bronchodilation versus placebo in patients with persistent, stable asthma. Abediterol doses ≥0.625 μg also caused bronchodilation compatible with once a day administration, as improvements in trough FEV1 >250 mL compared with placebo were observed. The peak bronchodilatory effect of abediterol was similar to that of salbutamol 400 μg for all doses ≥0.625 μg. A mean increase in peak FEV1 >200 mL is considered clinically relevant in asthma [1]. This magnitude of change was exceeded for all doses of abediterol, including the lowest dose, 0.313 μg. The primary objective of this study was to define abediterol doses that caused peak bronchodilation similar to salbutamol, the most widely prescribed SABA [21]. High doses of β2-agonists may increase the degree of bronchodilation but also can cause excessive systemic side effects through systemic absorption [22]. 
However, the current study demonstrates that the doses of abediterol from 0.625 to 2.5 μg induce acute bronchodilation similar to salbutamol, and therefore treatment with higher doses may not be warranted. Significant bronchodilation compared with placebo was achieved with a range of abediterol doses from 0.313 to 2.5 μg, consistent with the high potency of abediterol demonstrated in preclinical studies that suggested a potential for significant bronchodilation in vivo[7]. Abediterol has higher potency than formoterol, salmeterol, indacaterol, olodaterol, and vilanterol in anesthetized guinea pigs [7, 10] as well as higher potency than salmeterol and indacaterol with comparable potency to formoterol in isolated human bronchi [7]. Based on the results of this preliminary dose-ranging study and the guidelines published by Chowdhury et al. [16], the clinically effective dose in asthma is considered most likely to be in the range of 0.625–2.5 μg. Significant bronchodilation compared with placebo was observed for salbutamol and all doses of abediterol at the first-assessed time point (15 min) post-dose; this indicates a rapid onset of action for both salbutamol and abediterol. Salmeterol and formoterol, both twice-daily LABAs, are widely used as maintenance therapy in COPD and asthma due to their longer durations of action compared with salbutamol [6, 18]. In clinical studies in patients with asthma, the onset of action of formoterol 24 μg is consistently faster than that of salmeterol 50 μg, with a median onset of action of 3.6 min compared with 31.0 min in one study [23, 24]. In vitro experiments in isolated human bronchi suggest that the onset of action of abediterol is faster than salmeterol and is not significantly different from formoterol and indacaterol [7]. The prolonged duration of action of abediterol, with significant improvements in trough FEV1, normalized FEV1 AUC0–24, and trough FVC relative to placebo, and significant bronchodilation maintained up to 36 h post-dose is consistent with the results of previous clinical studies in healthy subjects and patients with COPD or asthma [11, 13, 25]. For example, a recent Phase II clinical study in patients with COPD demonstrated that abediterol 2.5, 5, and 10 μg achieved >200 mL increases in FEV1 at 24 h, and that this bronchodilatory effect was greater than that of indacaterol 150 μg, and the duration of action was similar [25]. Taken together, these results are compatible with a once-daily dosing regimen in asthma and COPD. The convenience of once-daily dosing is likely to improve compliance in patients with asthma, leading to better patient outcomes [26, 27], as shown in various studies, including that of Wells et al. whereby once-daily ICS led to improved compliance in patients with asthma compared with ICS administered multiple times daily [28]. LABA monotherapy treatment is contra-indicated in asthma; co-administration with an ICS is mandated, preferably as a fixed-dose combination to reduce the risk of monotherapy administration. Development of once-daily LABAs for use in fixed-dose combinations with once-daily ICS is important for simplifying treatment regimens and, therefore, helping improve patient compliance and general asthma management. The safety and tolerability profile of abediterol in this study is comparable to that reported in other studies in the abediterol development program [11–13, 25]. 
Single doses of abediterol 0.313–2.5 μg were safe and well tolerated, and the TEAEs were consistent with the known safety profile of β2-agonists.\n\nBODY.CONCLUSION:\nAbediterol 0.625–2.5 μg demonstrated a peak bronchodilatory response similar to that of a high dose of salbutamol (400 μg) at very low doses, and showed a clear dose–response relationship. Abediterol is a promising new, highly potent, LABA with a sustained duration of action and a favorable safety and tolerability profile. These results suggest that further studies are warranted to further evaluate the efficacy and safety of abediterol in combination with ICS in both asthma and COPD.\n\nBODY.ELECTRONIC SUPPLEMENTARY MATERIAL:\nAdditional file 1:\nIndependent Ethics Committees (IEC).\n(DOCX 17 KB)\n\n**Question:** Compared to placebo what was the result of Abediterol on Change from baseline in normalized FEV1 AUC0–6, FEV1 AUC0–12, and FEV1 AUC0–2?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased |
909 |
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Assessing Calcium Intake in Postmenopausal Women\n\n ABSTRACT.INTRODUCTION:\nBecause foods fortified with calcium are increasingly available, the calcium content of calcium-fortified foods may not be adequately captured in traditional assessments of dietary intake, such as dietary records analyzed with commercially available software. The primary objective of our study was to design and test a calcium-focused food frequency questionnaire (CFFFQ) including foods naturally rich in calcium and calcium-fortified foods. Secondary objectives were to review calcium sources and adequacy of intake in black and in white postmenopausal women.\n\nABSTRACT.METHODS:\nWe studied a convenience sample of 46 black and 139 white postmenopausal women (mean [SD] age 69.4 [5.8] years). Subjects completed a multiple-pass interview for 24-hour recall of foods eaten and the 46-item CFFFQ.\n\nABSTRACT.RESULTS:\nThe correlation between measures for total daily calcium intake was moderately strong (r = 0.53, P < .001). The CFFFQ estimated greater total daily calcium intake than did the 24-hour recall (mean [SD], 1,021 [624] mg/d vs 800 [433] mg/d, P < .001). As daily calcium intake increased, the 24-hour recall increasingly underreported calcium (r = 0.41, P < .001) compared with the CFFFQ. Cross-tabulation and Χ2 analyses found that the CFFFQ had greater specificity for lower calcium intakes. For calcium classified by food groups, there was moderate correlation for dairy (r = 0.56, P < .001) and fruits (r = 0.43, P < .001). The CFFFQ overestimated mean total calcium compared with the 24-hour recall by 221 mg/d (P < .001), including within racial groups (195 mg/d for black women, P = .04, and 229 mg/d for white women, P < .001). Dairy was the primary calcium source for both groups (55% of intake for black women and 57% of intake for white women).\n\nABSTRACT.CONCLUSION:\nThe CFFFQ can be used to identify postmenopausal women with inadequate calcium intakes (<800 mg/d) and to identify key sources of dietary calcium. Older black women consume less daily calcium than do older white women.\n\nBODY.INTRODUCTION:\nAs the prevalence of osteoporosis continues to increase, interventions targeting modifiable risk factors receive more emphasis from researchers, clinicians, and public health professionals. Although many nutrients are important to bone health, interventions largely focus on calcium intake (1). Clinicians often prescribe a calcium supplement but pay limited attention to dietary sources of calcium intake (2,3). Other interventions may target only dairy calcium (4,5). However, calcium fortification of foods broadens the possible sources of calcium so that adequate calcium intake may be achieved through a diverse diet. Sources of dietary calcium may differ across cultures and ethnicities according to food preferences and tolerances. 
For example, blacks and Asians have a higher prevalence of lactose intolerance, which may lead to reduced dairy intake, but also have cultural food preferences, including not drinking milk at meals, that affect calcium intake (6). White women eat more cheese and drink more milk than do black or American Indian women (7). However, research specifically targeting calcium intake and food source is scarce. Previous research from our group investigating the role of socioeconomic status on calcium intake determined that black women consumed fortified grain products more frequently than did white women (8). Because fortified foods are increasingly available (9), the calcium content of calcium-fortified foods may not be adequately captured in traditional assessments of dietary intake, such as dietary records analyzed with commercially available software. The primary aim of this study was to develop and evaluate the feasibility of a calcium-focused food frequency questionnaire (CFFFQ) that incorporates both natural and fortified sources of calcium. Because determining calcium intake has become more difficult as food fortification has become more common, this study compared 2 methods: an interview for 24-hour recall of calcium intake and the CFFFQ. Although supplements can be an important source of calcium, this project focused on food-derived calcium. Secondary aims were to compare food source of calcium between black and white women aged 60 years or older and evaluate the adequacy of calcium intake as assessed using the 24-hour recall and the CFFFQ. Postmenopausal women were studied because they are at high risk for osteoporotic fractures (1).\n\nBODY.METHODS.PARTICIPANTS:\nIn 2005, we recruited a convenience sample of postmenopausal women aged 60 to 80 years who were enrolled in a parent study in the Urbana-Champaign area of Illinois (10) (assessing body composition, bone health, physical activity, physical function, and self-efficacy) to complete additional nutritional assessments. Exclusion criteria for the parent study were neurologic illness, orthopedic limitations, or cognitive limitations that precluded completion of all study testing procedures. All those in the parent study were invited to participate in the current study of evaluation of calcium intake. The human subjects institutional review board of the University of Illinois approved the research protocol, and all participants completed an informed consent form before data collection began.\n\nBODY.METHODS.CALCIUM INTAKE:\nThe CFFFQ and a self-reported 24-hour recall were completed on the same day to assess calcium intake and food source. The CFFFQ is a 46-item food frequency survey focusing on calcium-rich foods. Foods on the questionnaire were chosen based on calcium content and those found in previous studies to be commonly consumed. Foods on the CFFFQ were categorized and presented according to representative food groups: dairy (6 foods), foods with dairy (5 foods), fruits (6 foods), vegetables (3 foods), grains (10 foods), meats (8 foods), and other foods (8 foods) (Appendix). Inclusion of fortified foods was based on availability for purchase in area grocery stores (both local and chain stores). Fortified foods were noted as such on the CFFFQ. A standard portion size was included for each specific food item. Participants entered quantity of food relative to daily, weekly, monthly, or yearly consumption. \"Not eaten\" was given as an option. 
We entered foods and servings into a database (Microsoft Access 2003, Redmond, Washington) that calculated calcium content on a daily basis. Double entry was used for quality assurance. Participants met with researchers to complete the 24-hour recall. The researchers used a multiple-pass interview style to elicit complete information (11). Foods from the 24-hour recall were categorized into the same food groups as on the CFFFQ and were then entered in Nutritionist Pro version 2.3 (First DataBank, San Bruno, California) to obtain calcium values per day.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nWe analyzed the data by using SPSS version 14.0 (SPSS, Inc, Chicago, Illinois). On the basis of the findings of the Kolmogorov-Smirnov test, the primary variables of interest were not normally distributed. Nonparametric tests were subsequently used to evaluate group or intake assessment differences (Spearman rank correlation and Mann-Whitney). We assessed the adequacy of calcium intake by comparing these 2 assessment methods to two-thirds of the Institute of Medicine's (12) adequate intake amount (used to determine inadequate intake) for women aged 51 years or older. Comparison of the adequacy classification as a measure of specificity of each calcium intake method was completed by using cross-tabulation to demonstrate the relationship between the 2 variables. Chi-square analysis was performed on the cross-tabulated variables. Significance was defined as α = .05.\n\nBODY.RESULTS:\nOf 245 eligible women from the parent study, 185 chose to participate in our study. Of the 185, 46 were black (mean [SD] age, 68.0 [4.85] years) and 139 were white (70.0 [6.0] years).\n\nBODY.RESULTS.COMPARISON OF CFFFQ TO 24-HOUR RECALL:\nCalcium from the CFFFQ was significantly correlated with 24-hour recall for all food groups (P < .001) except for vegetables (P = .08) (Table 1). Estimates obtained with the 24-hour recall method were significantly lower than CFFFQ estimates of total daily calcium, dairy, foods with dairy products, fruits, and vegetables (Table 2). The CFFFQ estimate of mean total calcium compared with the 24-hour recall was higher by 221 mg/d (P < .001), including within racial groups (195 mg/d for black women, P = .04, and 229 mg/d for white women, P < .001). There was no significant difference between the CFFFQ and 24-hour recall methods for calculated calcium intake for grains, meats, and \"other\" foods. This pattern was consistent for the total group and for black and white women separately (Table 2). The Bland-Altman plot illustrates the lack of agreement between methods of assessment for total daily intake (Figure). The positive correlation between the lack of agreement (error score on the Y-axis) and the daily intake indicates that for increasing levels of daily calcium intake, the 24-hour recall of calcium intake increasingly underestimated calcium intake compared with the CFFFQ (r = 0.41, P < .001). Figure.Scatterplot of agreement between the calcium-focused food frequency questionnaire (CFFFQ) and an interview for 24-hour recall of calcium intake. Estimated calcium difference is the difference in calcium intake found between the CFFFQ and the 24-hour recall. Average calcium intake is the average of calcium intake between the 2 methods. Open circles represent white women (n = 139) and closed circles represent black women (n = 46). 
The parallel solid and the broken lines represent the mean difference (2 SD) for all participants (221 [528] mg).The figure shows the lack of agreement between the 2 methods of assessing total daily intake. P < .001, r = 0.41.\n\nBODY.RESULTS.CALCIUM ADEQUACY ASSESSMENT:\nThe prevalence of inadequate intake (<800 mg) was 56% (n = 103) using the 24-hour recall and 45% (n = 83) using the CFFFQ method. Examining the cases where intake would be inadequate as measured by both tools, 64 (35%) women would be classified as having inadequate calcium intake. However, 39 (21%) women whose intake would be classified as adequate by the CFFFQ would be classified as inadequate by the 24-hour recall method. Only 19 (10%) who would be classified as having inadequate intake by the CFFFQ would have adequate intake as classified by the 24-hour recall. Compared with the self-reported 24-hour recall, the CFFFQ indicates more specificity for lower intakes.\n\nBODY.RESULTS.COMPARISON OF INTAKE AND SOURCE BETWEEN BLACK AND WHITE WOMEN:\nRegardless of dietary assessment method used, white women had higher calcium intakes than black women. When using the CFFFQ, white women reported consuming approximately 43% more calcium than did black women (mean [SD] 1,104 [632] mg for white women vs 768 [531] mg for black women, P < .001). When using the 24-hour recall method, mean calcium intake for white women was approximately 52% higher than intake for black women (875 [429] mg vs 573 [365] mg, P < .001). From the CFFFQ, the primary calcium source was dairy products (55% for black women and 57% for white women). White women obtained more calcium (630 [423] mg) from dairy than did black women (424 [373] mg, P < .004). Grains were the second highest calcium source, although grains provided a much lower percentage than dairy (13% of total calcium for each racial group). Calcium from grains primarily came from fortified foods. Dairy was also the primary calcium source when the 24-hour recall data were analyzed, and a significant difference in mean (SD) dairy calcium intake between racial groups was found (243 [273] mg for black women, 444 [360] mg for white women, P < .001).\n\nBODY.DISCUSSION:\nCalcium intake has received increased attention in the last decade because of its role in bone health and as a modifiable risk factor for osteoporosis. The number of calcium-fortified food products being developed and marketed has increased substantially (9). Although labeling for \"excellent\" (>200 mg/serving) and \"good\" (100-200 mg/serving) sources of calcium are regulated by the Food and Drug Administration, there is no federal regulation regarding which foods can be fortified with calcium or the degree of fortification (13). Together, these factors make discerning calcium intake difficult for researchers, clinicians, dietitians, and consumers. The primary findings of this study are that the CFFFQ identifies low calcium intakes and identifies key sources of calcium, including calcium-fortified foods. Other studies have included either no calcium-fortified foods (14-18), calcium-fortified mineral water (19), calcium-fortified juice alone (20), calcium-fortified juice and a grain product (21), or did not provide details regarding the inclusion of calcium-fortified foods in the assessment tool (5,7,22-26). In contrast, the CFFFQ is unique in providing 14 fortified food choices, including 1 in fruits, 8 in grains, and 5 in \"other\" food categories. Fortification of foods can contribute greatly to total calcium intake. 
From our data, inclusion of calcium-fortified foods in the diet has the potential to increase total daily calcium intake by 1,000 mg or more. Because calcium fortification of food products is not federally regulated, food items being fortified and levels of fortification vary at the discretion of the manufacturer. The lack of regulation may hinder the role that these foods can play in dietary calcium intake by affecting the consistency of the calcium content of the product. The second unique feature of this study is the categorization of calcium intake estimates into food groups. Although total calcium intake is the primary focus in nutritional assessment studies, the effectiveness of nutrition education and osteoporosis interventions can be enhanced by focusing on calcium intake from the food groups the participants usually get their calcium. This information can be used to design more realistic and targeted nutrition education messages. In our study, the largest percentage of total calcium for both white and black women came from dairy, followed by grains and fruits. Although the grain category contained several fortified food items, the fruit category contained only 1 fortified food item. Even though consumption of fortified foods was assessed, dairy was still the primary calcium source found in this study. An analysis (15) of food intake from the US Department of Agriculture's 1994-1996, 1998 Continuing Survey of Food Intakes by Individuals (CSFII) found that dairy contributed 42% of total calcium intake. A higher percentage of dairy contribution would have been expected from that study because it included no competing calcium-fortified foods. In addition, the CSFII reported 21% of total calcium from calcium-rich mixed foods (ie, foods with 2 or more items with calcium such as sandwiches or pizza). In our study, this category contributed only 4% to 5%. Ward et al (21) also reported that most calcium intake was derived from dairy, but a comparable calcium-rich mixed foods group is not identified. The food frequency assessment tool used by Ward et al (21) also ranked the fruits and vegetables group and grains as contributing 11% to 15% of total calcium, whereas the diet records used in the study estimated this contribution at 6% to 13%. Cook et al (15) included no calcium-fortified foods, and the Ward et al (21) study was limited to calcium-fortified juice and 2 grain products. There is a scarcity of information in the literature regarding calcium intake for black and white women. In our study, mean total calcium intake among black women did not meet the adequate intake value whereas it did among white women. Other studies conclude that calcium intake is greater in white women than black women (27,28). However, an article published from the parent study of our study using the same CFFFQ found no difference in total, daily, dietary, or supplemental calcium (8). In that analysis, black and white women were matched on age, socioeconomic status, and education level (n = 33 per group). The findings suggest that racial differences in calcium intake are somewhat influenced by socioeconomic status and education level. Similarly, a comparison between black, white, and American Indian women also reported no significant difference in dietary calcium between black and white women (7). In our study, grains were the second-highest calcium source for both black and white women. Grains are predominately calcium-fortified sources. 
This is consistent with findings from our previous work, which determined that dairy was the greatest source of calcium but more for white women, whereas black women consumed greater amounts of calcium-fortified grains (8). The prominent role of calcium-fortified foods in these studies reinforces the need for these sources to be carefully evaluated when measuring calcium intake.\n\nBODY.DISCUSSION.LIMITATIONS:\nThe values from the CFFFQ were typically higher than those from the 24-hour recall, with the exception of vegetables and \"other\" foods. This finding may be due in part to the limitations of using a 24-hour recall. Women may have identified foods in the CFFFQ but did not happen to consume those foods on the specific day of the recall, except for vegetables. However, Chee et al (24) found a similar trend in comparing a calcium-rich food frequency questionnaire with a 3-day diary for postmenopausal Malaysian women. Assessing dietary intake by any method has inherent limitations. Validating food frequency questionnaires can be complicated without biomarkers for comparison. Because most nutrients do not yet have reliable biomarkers, 24-hour recalls or food records are the usual standard instruments. Correlations between these methods are considered adequate within the range of 0.4-0.7 (29). Although most of the correlations between total calcium and calcium from food groups when comparing CFFFQ to 24-hour recalls are significant, only total calcium intake, dairy calcium intake, and fruits fall within this acceptable statistical range. Differences in reported calcium intake when comparing the CFFFQ and 24-hour recall could be attributed to a lack of consumer awareness of calcium fortification when responding to the 24-hour recall and a positive respondent bias on the CFFFQ. For example, participants could have responded more positively concerning calcium-fortified foods if consumption of that food was seen as a positive health behavior. Because calcium-fortified foods can essentially double calcium intake, consumer awareness of their own calcium-fortified food product consumption and interviewer probe for each of these food items are essential for an accurate estimate of calcium intake. To assist both the researcher and clinician, software for dietary assessment needs to be updated to include the calcium-fortified foods. Several food frequency questionnaires that assess calcium intake have shown strong correlation with food actually consumed (19,22-25,30,31). However, many studies report correlations but no difference between means or report findings regarding only total calcium intake and not calcium intake by individual food groups. In addition, many rapid assessment tools include either no or limited sources of calcium-fortified foods, and these foods may or may not be probed for on 24-hour recalls.\n\nBODY.DISCUSSION.CONCLUSIONS:\nOur results suggest that the CFFFQ could be used to determine inadequate intakes of calcium. Primary calcium sources for all women were dairy followed by grains, and calcium intake was higher in white women than in black women. The CFFFQ can be used to more accurately identify calcium intakes and usual calcium source, which is of interest because of the increasing availability of calcium-fortified foods. Calcium-fortified sources can go unreported in 24-hour recalls and diaries, leading to an underestimation of calcium intake. 
When 24-hour recalls are used to assess and quantify calcium intake, researchers and clinicians need to clarify with clients whether the foods they ate were fortified. The CFFFQ may be used to better quantify calcium intake, including both natural and fortified sources. Accurate assessment of calcium is critical in evaluating bone health risks. This information can aid in the development of effective interventions to increase calcium intake.\n\n**Question:** Compared to 24-hour recall what was the result of calcium-focused food frequency questionnaire (CFFFQ) on Estimation of total daily calcium intake?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
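The calcium record above describes Spearman rank correlations, Mann-Whitney tests, and a Bland-Altman agreement plot comparing the CFFFQ with the 24-hour recall. A minimal sketch of that kind of analysis in Python with NumPy/SciPy; the arrays are synthetic placeholders seeded from the reported group means and are not the study data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder daily calcium intakes (mg/d) standing in for the 185 participants.
cfffq = rng.normal(1021, 624, size=185).clip(min=0)
recall = (cfffq - rng.normal(221, 300, size=185)).clip(min=0)  # recall tends to run lower, as reported

rho, p_rho = stats.spearmanr(cfffq, recall)        # agreement between the two methods
u_stat, p_u = stats.mannwhitneyu(cfffq, recall)    # nonparametric difference test, as named in the methods

# Bland-Altman style quantities behind the agreement figure
diff = cfffq - recall
mean_pair = (cfffq + recall) / 2
low, high = diff.mean() - 2 * diff.std(), diff.mean() + 2 * diff.std()
print(f"Spearman rho={rho:.2f} (p={p_rho:.3g}); Mann-Whitney p={p_u:.3g}")
print(f"Mean difference={diff.mean():.0f} mg/d, limits of agreement {low:.0f} to {high:.0f}")
```

With real paired intakes per participant, the same three quantities (rho, the nonparametric p value, and the mean difference with its 2 SD limits) correspond to the comparisons the record reports.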
918
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Haptoglobin Phenotype, Preeclampsia Risk and the Efficacy of Vitamin C and E Supplementation to Prevent Preeclampsia in a Racially Diverse Population\n\n ABSTRACT:\nHaptoglobin's (Hp) antioxidant and pro-angiogenic properties differ between the 1-1, 2-1, and 2-2 phenotypes. Hp phenotype affects cardiovascular disease risk and treatment response to antioxidant vitamins in some non-pregnant populations. We previously demonstrated that preeclampsia risk was doubled in white Hp 2-1 women, compared to Hp 1-1 women. Our objectives were to determine whether we could reproduce this finding in a larger cohort, and to determine whether Hp phenotype influences lack of efficacy of antioxidant vitamins in preventing preeclampsia and serious complications of pregnancy-associated hypertension (PAH). This is a secondary analysis of a randomized controlled trial in which 10,154 low-risk women received daily vitamin C and E, or placebo, from 9-16 weeks gestation until delivery. Hp phenotype was determined in the study prediction cohort (n = 2,393) and a case-control cohort (703 cases, 1,406 controls). The primary outcome was severe PAH, or mild or severe PAH with elevated liver enzymes, elevated serum creatinine, thrombocytopenia, eclampsia, fetal growth restriction, medically indicated preterm birth or perinatal death. Preeclampsia was a secondary outcome. Odds ratios were estimated by logistic regression. Sampling weights were used to reduce bias from an overrepresentation of women with preeclampsia or the primary outcome. There was no relationship between Hp phenotype and the primary outcome or preeclampsia in Hispanic, white/other or black women. Vitamin supplementation did not reduce the risk of the primary outcome or preeclampsia in women of any phenotype. Supplementation increased preeclampsia risk (odds ratio 3.30; 95% confidence interval 1.61–6.82, p<0.01) in Hispanic Hp 2-2 women. Hp phenotype does not influence preeclampsia risk, or identify a subset of women who may benefit from vitamin C and E supplementation to prevent preeclampsia.\n\nBODY.INTRODUCTION:\nPreeclampsia affects 2–7% of pregnancies [1], [2]. Diagnosed as new onset hypertension and proteinuria after 20 weeks gestation [1], [2], preeclampsia is a leading cause of maternal [3] and fetal [4] morbidity and mortality. Oxidative stress (including low antioxidant levels) and angiogenic imbalance are believed to contribute to preeclampsia [5], [6], [7]. No intervention has been proven to prevent preeclampsia. Vitamin C and E lowered preeclampsia incidence by 60% in a pilot randomized controlled trial (RCT) in high-risk women [8]. Unfortunately, this effect was not confirmed in subsequent RCTs involving high [9], [10], [11], [12], [13] or low-risk women [13], [14], [15]. 
Although the discordant effects in large and small populations are likely due to reduced power in small studies, differences might also be explained by increased heterogeneity in large multicenter studies obscuring a responsive subset of women. The phenotype of the antioxidant [16] and pro-angiogenic [17] protein haptoglobin (Hp) predicts cardiovascular disease risk, and responsiveness to Vitamin E [18], [19], [20] or C and E [21] in some non-pregnant populations. We examined whether Hp phenotype might influence preeclampsia risk, or identify a subset of women who responded to vitamin supplementation. Hp stimulates angiogenesis [17] and acts as a powerful antioxidant by clearing free hemoglobin following hemolysis [16]. Hp's two different α alleles (α1, α2) give rise to three genetically determined phenotypes (1-1, 2-1 and 2-2) [16]. Size and structural differences (Figure 1) lead to functional differences between phenotypes [16]. Hp 1-1 is a stronger antioxidant [16], whereas Hp 2-2 is more angiogenic [17]. Less is known about the function of Hp 2-1, a highly variable heterodimer which is not commercially available. Hp 2-1 individuals have no Hp 2-2, very little Hp 1-1, and large amounts of the 2-1 forms which are structurally very different from Hp 1-1 or 2-2 [16]. 10.1371/journal.pone.0060479.g001Figure 1Hp phenotypes by Native and SDS PAGE.Panel A: Hp phenotyping of Hb-supplemented serum by Native PAGE on a 6% gel. Hp 2-2 (Lane 1) is a heterodimer with a series of slow moving bands. Hp 1-1 (Lane 4) is a homodimer with one fast moving band. Hp 2-1 (Lanes 2, 3, 5) has a band between the Hb/Hp 1-1 and Hb/Hp 2-2 bands, and several slow-moving bands in the same region as the Hb/Hp 2-2 bands. Panel B: In the rare Hp 2-1 M phenotype, a modification of the Hp α2 allele leads to overproduction of the fast migrating Hp 1-1 band, relative to the slower migrating Hp 2-1 bands. This results in a stronger Hb/Hp 1-1 band, and weaker Hb/Hp 2-1 bands, than in Hp 2-1. Panel C: Hp phenotyping of serum by SDS PAGE on a 12% gel. The Hp β band is common to all phenotyes. The presence of the Hp 1 and 2 alleles is indicated by the α1 and α2 bands, respectively. We chose to examine Hp phenotype and preeclampsia for three reasons. First, Hp's antioxidant and pro-angiogenic properties depend on phenotype [17], [22], [23]. Oxidative stress and reduced angiogenesis contribute to preeclampsia [5], [6]. In our recent case-control study, Hp 2-1 was associated with increased preeclampsia risk in non-hispanic white women, but not in black women [24]. We sought to confirm these findings in a larger cohort while extending the existing literature to include Hispanics. Second, any potential effect of Hp phenotype on preeclampsia risk or treatment response would affect a significant number of women. Hp 1-1, 2-1 and 2-2 are all common in whites, blacks and Hispanics (prevalences of 16%-50%) [25]. Third, Hp phenotype predicts cardiovascular disease risk, and responsiveness to Vitamin E [18], [19], [20] or C and E [21] in other non-pregnant populations. Hp 2-2 diabetics are twice as likely to have a cardiovascular event as Hp 1-1 and 2-1 diabetics [18], [21], [26]. Vitamin E supplementation eliminates this increased risk in Hp 2-2 diabetics, but has no effect in Hp 1-1 or 2-1 diabetics [18], [19], [27]. In contrast, Vitamin C may be harmful in Hp 2-2 diabetics as oxidation of Vitamin C may exacerbate pre-existing oxidative stress [28]. 
In post-menopausal women with coronary artery disease, Vitamin C and E decreased coronary artery diameter in Hp 2-2 diabetic women [21]. This harmful effect was not universal, as Vitamin C and E benefited Hp 1-1 women by significantly increasing coronary artery diameter [21]. We sought to determine whether Hp phenotype was associated with serious complications of pregnancy-associated hypertension (PAH) or preeclampsia in white, black or Hispanic women. We also examined whether Hp phenotype might influence responsiveness to antioxidant vitamins in preventing preeclampsia. Based on our previous findings, we hypothesized that Hp 2-1 would be associated with increased preeclampsia risk in white women, but not in black women. We also posited that phenotype would affect treatment response although directionality was difficult to predict based upon data from non-pregnant populations.\n\nBODY.MATERIALS AND METHODS.POPULATION:\nThis was a secondary analysis of a Eunice Kennedy Shriver National Institute of Child Health and Human Development Maternal-Fetal Medicine Units Network double-blind, multicenter RCT (NCT 00135707) in which 10,154 nulliparous, low risk women received daily doses of 400 IU of vitamin E and 1000 mg of vitamin C, or placebo, from 9-16 weeks gestation until delivery. The trial was conducted at 16 centers in the US between 2003 and 2008. Full details of the study were reported previously [14]. The trial was conducted in accordance with the Declaration of Helsinki of the World Medical Association. All subjects provided written, informed consent before participating. Hp phenotyping was performed at the University of Pittsburgh (Institutional Review Board approval PRO10010368).\n\nBODY.MATERIALS AND METHODS.OUTCOMES:\nThe primary outcome was severe PAH, or mild or severe PAH with elevated liver enzymes (aspartate aminotransferase ≥100 U per liter), elevated serum creatinine (≥1.5 mg/dl), thrombocytopenia (<100,000 platelets/mm3), eclampsia, fetal growth restriction (<3rd centile adjusted for sex and race/ethnicity), medically indicated preterm birth before 32 weeks due to PAH, or fetal death after 20 weeks gestation, or neonatal death. Hypertension was diagnosed using blood pressure measurements obtained during or after 20 weeks gestation, excluding intraoperative and intrapartum systolic blood pressures. Severe PAH was defined as a systolic blood pressure of 160 mm Hg or more, or a diastolic blood pressure of 110 mm Hg or more on two occasions 2 to 240 hours apart, or a single severely elevated blood pressure measurement that led to treatment with antihypertensive medication. Mild PAH was defined as systolic pressure between 140 and 159 mm Hg or a diastolic pressure between 90 mm Hg and 109 mm Hg on two occasions 2 to 240 hours apart. Mild preeclampsia was defined as mild PAH with proteinuria documented within the 3 days preceding or following the elevated blood pressure measurement. Proteinuria was defined as protein excretion ≥300 mg/24 hours, a dipstick urinary protein ≥2+, or protein:creatinine ≥0.35 in the absence of a 24-hour urine sample. Early onset preeclampsia was defined as preeclampsia requiring delivery before 37 weeks gestation. 
Severe preeclampsia was defined as preeclampsia with either severe PAH or protein excretion 35 g/24 hours, or as mild PAH with oliguria (<500 ml urine/24 hours), pulmonary edema (confirmed by radiography), or thrombocytopenia (<100,000 platelets/mm3).\n\nBODY.MATERIALS AND METHODS.DESIGN:\nThe RCT included 10,154 women, of whom 2,434 were enrolled in a nested prediction cohort. The RCT design specified that secondary analyses requiring blood be conducted in the prediction cohort, as these women consented to extra blood collection for prediction and pathophysiology studies. If additional subjects were required, investigators could obtain samples from the 7,720 women not in the prediction cohort, in whom very little blood was collected. We needed samples from all white, black, and Hispanic women with preeclampsia in the RCT to detect a 2-fold increase in preeclampsia risk in Hp 2-1, compared to Hp 1-1, in each racial group (α = 0.05; Power: 83% white, 96% black, 98% Hispanic). We determined Hp phenotype in the prediction cohort; then added a case-control cohort composed of women who were not in the prediction cohort. The case-control cohort allowed us to capture all cases without phenotyping 5,500 additional controls. Cases in the case-control cohort included all women with preeclampsia, the primary outcome, or both (n = 703 cases) who were not in the prediction cohort. We selected 2 controls per case (n = 1,406) [29], matched for center and race/ethnicity (Figure 2). 10.1371/journal.pone.0060479.g002Figure 2Study Flow Chart.Subjects lost to follow-up include 183 women who miscarried, one subject whose data were removed at the patient's request, and one subject whose data were removed at the institutional review board's request.\n\nBODY.MATERIALS AND METHODS.HP PHENOTYPING:\nHp phenotyping was performed as described previously [24], [30]. Serum samples (5 μl) supplemented with human hemoglobin (Sigma-Aldrich, St. Louis, MO) were phenotyped by hemoglobin-Hp complex peroxidase activity on native polyacrylamide gel electrophoresis (PAGE). Hemolyzed samples and samples with low Hp concentrations were phenotyped by Western blot following sodium dodecyl sulfate (SDS) PAGE of 1 to 6 μl of serum. Proteins from native and SDS gels were transferred to a polyvinylidene fluoride (PVDF) membrane (Millipore, Billerica, MA). SDS PAGE was followed by Western blotting with primary (1∶5,000, Polyclonal Rabbit Anti-Human Haptoglobin, DakoCytomation, Carpinteria, CA) and secondary (1∶25,000, Goat anti-Rabbit IgG horseradish peroxidase, Millipore) antibodies. Membranes were stained for peroxidase activity (SuperSignal West Pico Chemiluminescent Substrate, Fisher Scientific, Pittsburgh, PA) and imaged (FlouroChem Q System, Cell Biosciences, Santa Clara, CA). Hp phenotypes were identified by banding patterns of the Hp-hemoglobin complexes (native PAGE, Figure 1) or the Hp α1 and α2 alleles (SDS PAGE).\n\nBODY.MATERIALS AND METHODS.PLASMA ASCORBATE:\nBlood was collected in vials containing metaphosphoric acid, and frozen at −80 C [31]. Plasma ascorbic acid was measured by high performance liquid chromatography in the Molecular Epidemiology and Biomarker Laboratory (University of Minnesota) as described previously [31].\n\nBODY.MATERIALS AND METHODS.STATISTICAL ANALYSIS:\nThe prediction and case-control cohorts were analyzed separately (statistical methods in Data S1); then pooled to maximize power. 
In the pooled analysis, sampling weights were applied to reduce potential bias arising from an overrepresentation of women with the primary outcome (but not preeclampsia) in the control group for preeclampsia analyses, and an oversampling of women with preeclampsia (but not the primary outcome) in the control group for primary outcome analyses. Patients from the prediction cohort were assigned a weight of one. Cases and controls were assigned a sample weight defined as the inverse of their probability of selection [32]. Sample weights included a post-stratification adjustment for race/ethnicity. Baseline characteristics were compared using the t-test for continuous variables and the Rao-Scott chi-square test [33] for categorical variables. Logistic regression models were constructed to assess the multivariable association between Hp phenotype and the incidence of each outcome, adjusting for variables that were associated with Hp phenotype in univariate analyses. Odds ratios and 95% confidence intervals were estimated from unconditional survey logistic regression models. Taylor series replicate weights were created to estimate standard errors and p values in the weighted analyses. Model covariates included randomization group, and demographic variables that differed between women with the Hp 1-1, 2-1 and 2-2 phenotypes (age, self-reported vitamin use, education level, diastolic blood pressure at screening). The weighted analysis gave similar results to an unweighted analysis (i.e. samples were not weighted by the post-stratification sample weights; data not shown). Results were considered to be statistically significant at α = 0.05 with no adjustment for multiple comparisons [34]. All p values are 2-sided. SAS Version 9.2 (SAS Institute Inc.) and STATA (StataCorp, 2011 Stata Statistical Software: Release 12, College Station, TX, StataCorp LP), using procedures appropriate for sample-weighted data, were used for all analyses.\n\nBODY.RESULTS.COHORTS:\nWe phenotyped 2,393 of the 2,434 women in the prediction cohort (Figure 2), excluding 40 women who miscarried, and one who had no samples collected. The case-control cohort included all remaining subjects who developed the primary outcome only (severe PAH, or mild or severe PAH with elevated liver enzymes, elevated serum creatinine, thrombocytopenia, eclampsia, fetal growth restriction, medically indicated preterm birth or perinatal death) (n = 190), preeclampsia only (n = 280), or the primary outcome and preeclampsia (n = 233), and 1,406 controls matched for center and race/ethnicity. Women in the prediction cohort entered the study at an earlier gestational age (11.4±1.1) than women in the case-control cohort (14.0±2.0). Results of separate analyses for the prediction (Data S1; Tables S1, S2) and case-control cohorts (Data S1; Tables S3, S4, S5, S6, S7) are presented in Data S1. Results for the pooled cohort are described below.\n\nBODY.RESULTS.SUBJECTS:\nThe pooled dataset included 1,008 Hp 1-1, 2,129 Hp 2-1 and 1,261 Hp 2-2 women. Ninety-six women had the rare 2-1 M phenotype, which results from a modification of the α2 allele [16]. These women were excluded from subsequent analyses. There were 1,840 white women, 1,392 black women, 1,188 Hispanic women, and 74 women of \"Other\" race.\n\nBODY.RESULTS.WEIGHTING:\nThe weighted sample included 2,201 Hp 1-1 women, 4,840 Hp 2-1 women, 2,760 Hp 2-2 women, and 146 Hp 2-1 M women. The weighted cohort accurately reproduced the demographic characteristics of the original sample (Table S8). 
Demographic characteristics in the weighted cohort were not significantly different between the placebo and vitamin groups.\n\nBODY.RESULTS.SUBJECT CHARACTERISTICS:\nAs expected, race/ethnicity differed between women with the Hp 1-1, 2-1, 2-2 and 2-1 M phenotypes (Table 1). The phenotype distribution for each race/ethnicity was similar to previous reports (Table 1, S9) [25]. Hp phenotype groups did not differ with respect to pre-pregnancy BMI, blood pressure at randomization, smoking, history of a previous pregnancy, or family history of preeclampsia (Table 1). Subsequent analyses were adjusted for characteristics that differed between the Hp 1-1, 2-1 and 2-2 groups (race/ethnicity, age, education, vitamin use). Analyses were also adjusted for diastolic blood pressure at randomization, which differed between phenotype groups in the prediction cohort (Table S10) but not in the pooled cohort. The white and other groups were pooled for race/ethnicity adjustments, as phenotype prevalence in the other group was most similar to whites (Table S9). 10.1371/journal.pone.0060479.t001 Table 1 Subject Characteristics for the Weighted Pooled Cohort by Hp phenotype. Subject Characteristics Hp 1-1 (n = 2,201) Hp 2-1 (n = 4,840) Hp 2-2 (n = 2,760) Hp 2-1 M (n = 161) p Age – years 22.4±4.9 23.6±5.1* 24.0±5.1* 20.2±3.7* ,†,‡ \n <0.01 Gestational age at randomization – week 13.4±2.1 13.3±2.0 13.4±2.1 13.0±2.5 0.48 Race or ethnicity - % within phenotype NA White 618 (28%) 2,142 (44%) 1,408 (51%) 3 (2%) Black 779 (35%) 1,059 (22%) 542 (20%) 132 (82%) Hispanic 779 (35%) 1,555 (32%) 724 (26%) 25 (6%) Other 25 (1%) 84 (2%) 86 (3%) 1 (<1%) Pre-pregnancy body mass index - kg/m 2 \n 25.3±6.1 24.9±5.4 25.0±5.6 26.6±8.7 0.20 Smoked during pregnancy – n (%) 352 (16%) 721 (15%) 515 (19%) 23 (15%) 0.12 Education - years 12.2±2.9 12.7±2.7* 13.1±2.6* ,† \n 12.0±2.0 †,‡ \n <0.01 Prenatal/multivitamin use prior to randomization – n (%) 1,517 (69%) 3,759 (78%)* 2,194 (79%)* 132 (82%)* <0.01 Previous pregnancy – n (%) 520 (24%) 1,084 (22%) 731 (26%) 33 (21%) 0.16 Family history of preeclampsia – n (%) 319 (14%) 634 (13%) 341 (12%) 18 (11%) 0.64 Blood pressure at entry (9-12 weeks) Systolic – mm Hg 108±10 108±10 108±10 110±9 0.23 Diastolic – mm Hg 65±8 65±8 65±8 64±9 0.25 Values are mean ± SD or n (%). Abbreviations: NA; not applicable. Statistical comparisons for race/ethnicity were not performed, as cases and controls were matched for race/ethnicity. Significant difference (p<0.05) from: *Hp 1-1, † Hp 2-1, ‡ Hp 2-2. \n\nBODY.RESULTS.PRIMARY OUTCOME AND PREECLAMPSIA RISK:\nHp phenotype was not significantly associated with the risk of the primary outcome, preeclampsia, severe preeclampsia, or early onset preeclampsia in white/other, Hispanic or black women (Table 2). Women of \"other\" race comprised 4% of the white/other group, and results were not different when these women were excluded (data not shown). The relationship between Hp phenotype on early onset preeclampsia approached statistical significance (p = 0.06) in Hispanics. 10.1371/journal.pone.0060479.t002 Table 2 Outcomes according to race/ethnicity and Hp phenotype in the weighted pooled cohort. 
Outcome Weighted Cases (n) Hp 2-1 † OR (95% CI) Hp 2-2 † OR (95% CI) White/Other Primary Outcome * \n 216 0.87 (0.56, 1.35) 0.92 (0.58, 1.45) Preeclampsia 222 0.98 (0.62, 1.54) 1.18 (0.74, 1.88) Severe Preeclampsia 90 1.30 (0.65, 2.62) 1.31 (0.64, 2.69) Early Onset Preeclampsia 58 0.96 (0.41, 2.22) 1.40 (0.61, 3.22) Late Onset Preeclampsia 163 0.98 (0.59, 1.64) 1.10 (0.65, 1.87) Black Primary Outcome 243 1.26 (0.90, 1.76) 1.00 (0.66, 1.51) Preeclampsia 237 1.03 (0.74, 1.45) 0.92 (0.61, 1.38) Severe Preeclampsia 115 1.14 (0.73, 1.80) 0.88 (0.49, 1.55) Early Onset Preeclampsia 74 1.18 (0.67, 2.07) 0.95 (0.48, 1.88) Late Onset Preeclampsia 163 0.97 (0.66, 1.44) 0.90 (0.56, 1.46) Hispanic Primary Outcome 116 0.79 (0.49, 1.29) 0.85 (0.47, 1.54) Preeclampsia 217 0.77 (0.53, 1.12) 0.74 (0.46, 1.19) Severe Preeclampsia 76 0.71 (0.40, 1.24) 0.67 (0.32, 1.40) Early Onset Preeclampsia 42 0.41 (0.20, 0.86) 0.53 (0.21, 1.37) Late Onset Preeclampsia 175 0.90 (0.60, 1.37) 0.82 (0.49, 1.39) Hp 1-1 was the reference group for odds ratios. Hp phenotype was determined in 1,573 black women, 1,164 Hispanic women, and 1,432 women of white or other race. Each observation was weighted to reflect the pregnancy outcomes and racial distribution of the full cohort. * The primary outcome was severe PAH, or mild or severe PAH with elevated liver enzymes, elevated serum creatinine, thrombocytopenia, eclampsia, fetal growth restriction, medically indicated preterm birth or perinatal death. Abbreviations: OR, odds ratio; CI, confidence interval. † Adjusted for group (treatment vs. placebo), race/ethnicity, age, education, vitamin use and diastolic blood pressure at randomization. \n\nBODY.RESULTS.TREATMENT RESPONSE:\nVitamin C and E supplementation did not prevent the primary outcome or any form of preeclampsia among white/other, black or Hispanic women of any Hp phenotype after adjusting for vitamin use, age, education, and diastolic blood pressure at randomization (data not shown). In Hispanics, the effect of vitamin supplementation differed between Hp phenotype groups for preeclampsia (Hp phenotype*treatment group: p<0.01) and late onset preeclampsia (p = 0.02), but not for severe (p = 0.06) or early onset preeclampsia (p = 0.10). Vitamin supplementation significantly increased preeclampsia (odds ratio (OR) 3.30; 95% confidence interval (CI) 1.61-6.82, p<0.01, Table 3) and late onset preeclampsia risk (3.01; 1.38-6.57, p<0.01) in Hispanic Hp 2-2 women. This adverse effect was not seen in Hp 1-1 or 2-1 Hispanics (p>0.05), or in white/other or black women of any phenotype. 10.1371/journal.pone.0060479.t003 Table 3 Odds ratios for effect of treatment in Hispanic women in the weighted pooled cohort. Outcome Phenotype OR (95% CI) * \n p * \n Preeclampsia Hp 1-1 0.71 (0.39, 1.29) 0.27 Hp 2-1 0.95 (0.61, 1.50) 0.83 Hp 2-2 3.30 (1.61, 6.82) <0.01 Late Onset Preeclampsia Hp 1-1 0.70 (0.36, 1.35) 0.29 Hp 2-1 1.02 (0.63, 1.66) 0.92 Hp 2-2 3.01 (1.38, 6.57) <0.01 * Adjusted for vitamin use, age, education and diastolic blood pressure at randomization. Increased preeclampsia risk in Hispanic Hp 2-2 women was specific to the large quantities of vitamins C and E in study supplements. Prenatal/multivitamin use at randomization was not associated with an increased risk of preeclampsia (2.17; 0.53-8.71, p = 0.27) or late onset preeclampsia (2.85; 0.57–14.36, p = 0.20) in Hispanic Hp 2-2 women receiving placebo. Low rates of prenatal/multivitamin use at randomization (43% in Hispanics vs. 
91% in non-Hispanics, Table S10) are unlikely to explain why the adverse effect of supplementation in Hp 2-2 women was only seen in Hispanics. Hispanic 2-2 women who were not taking prenatal vitamins at randomization had a smaller increase in preeclampsia risk with supplementation than Hispanic 2-2 women who were taking prenatal vitamins at randomization (2.01; 0.80–5.01, p = 0.14 vs. 7.83; 2.01–30.53, p<0.01, Table 4). 10.1371/journal.pone.0060479.t004 Table 4 Odds ratios for treatment effect in Hispanic Hp 2-2 women according to prenatal/multivitamin use at randomization. Outcome Vitamin Use at Randomization OR (95% CI) * \n p * \n Preeclampsia Yes 7.83 (2.01, 30.53) <0.01 No 2.01 (0.80, 5.01) 0.14 Late Onset Preeclampsia Yes 8.23 (1.64, 41.28) 0.01 No 1.91 (0.73, 5.00) 0.18 * Adjusted for age, education and diastolic blood pressure at randomization. \n\nBODY.RESULTS.HP PHENOTYPE, PRENATAL/MULTIVITAMIN USE AT RANDOMIZATION, AND PLASMA ASCORBATE:\nHp phenotype influences plasma ascorbate concentration [35], providing a potential mechanism by which Hp phenotype might influence treatment response to vitamin C and E. We examined the relationship between Hp phenotype and plasma ascorbate in a pre-planned analysis. Plasma ascorbate was measured at the pre-randomization screening visit in 264 women with known Hp phenotypes. Plasma ascorbate was lower in women who were not taking a prenatal/multivitamin at randomization (n = 34) compared to those who were (n = 230) (p = 0.02 for main effect, Table 5). Statistical comparisons within phenotype groups were not conducted, as the number of women who were not taking prenatal/multivitamins at randomization within each phenotype group was very small. Hp phenotype was not significantly associated with plasma ascorbate (p = 0.73). The interaction between Hp phenotype and vitamin use at randomization was not significant (p = 0.32). Plasma ascorbate was only measured in six Hispanic Hp 2-2 women; therefore we could not determine whether supplementation had a different effect on plasma ascorbate in this group. 10.1371/journal.pone.0060479.t005 Table 5 Plasma ascorbate concentrations according to Hp phenotype and prenatal/multivitamin use at randomization. Hp Phenotype Not Taking Prenatal/multivitaminsat Randomization Taking Prenatal/multivitaminsat Randomization All 8.4 (7.2, 9.5) (n = 34) 9.9 (9.4, 10.3) (n = 230)* 1-1 9.3 (7.4, 11.3) (n = 12) 9.6 (8.6, 10.5) (n = 48) 2-1 8.0 (6.3, 9.6) (n = 16) 9.8 (9.2, 10.4) (n = 126) 2-2 7.4 (4.7, 10.1) (n = 6) 10.2 (9.3, 11.1) (n = 56) Plasma ascorbate at the pre-randomization study visit was measured in a subset of 264 patients with known Hp phenotype. The vitamin and placebo groups were pooled, as study treatment had not yet started. Women not taking a prenatal vitamin at randomization had lower plasma ascorbate than women taking prenatal/multivitamins at randomization (*p = 0.02). Statistical comparisons within phenotype group were not conducted due to the small number of women not taking prenatal/multivitamins at randomization. Values are mean (95% confidence interval) in mg/L. \n\nBODY.DISCUSSION:\nHp phenotype was not associated with the risk of the primary outcome (a composite of PAH and serious adverse outcomes in the mother, her fetus or neonate) or any type of preeclampsia in white, black or Hispanic women. Moreover, daily vitamin C and E supplementation did not prevent the primary outcome or preeclampsia in women of any Hp phenotype. 
Supplementation increased the risk of preeclampsia and late onset preeclampsia by 3-fold in Hispanic Hp 2-2 women.\n\nBODY.DISCUSSION.HP PHENOTYPE AND THE RISK OF PREECLAMPSIA AND SERIOUS COMPLICATIONS OF PAH:\nWe found no relationship between Hp phenotype and the primary outcome or preeclampsia risk in white, black or Hispanic women. This study has two key strengths. First, it extends the existing literature by including large numbers of black and Hispanic women. Previous studies focused on predominantly white, non-Hispanic women [24],[36],[37],[38]. Studies examining the relationship between Hp phenotype and preeclampsia risk in different racial groups were recently recommended [39]. Second, this is the largest study of non-Hispanic white women completed to date, and was adequately powered to detect the difference that we observed in our previous case-control study [24]. This is particularly important, as the heterogeneous results and small sample sizes of previous European studies suggests that spurious findings may be common. At the time we initiated this investigation, two small, underpowered European studies had reported that preeclampsia risk was not different [38] or increased [36] in Hp 1-1 white women. Adequately powered studies [39], and studies in North American women, were needed. An Israeli study subsequently reported a 50% reduction in preeclampsia risk among Hp 1-1 women, compared to Hp 2-1 and 2-2 [24], [30], [37]. Our larger case-control study suggested that Hp 1-1 was only protective when compared to Hp 2-1, and not Hp 2-2 [24]. Unfortunately, we were unable to confirm this finding in the present cohort. However, we did confirm our previous preliminary data [24] suggesting that there was no relationship between Hp phenotype and preeclampsia risk in black women in a large adequately powered cohort. The trend (p = 0.06) towards an increased risk of early onset preeclampsia in Hp 2-1 and 2-2 Hispanics was likely a spurious result due to the small number of cases (n = 42).\n\nBODY.DISCUSSION.HP PHENOTYPE AND VITAMIN C AND E:\nAccumulating evidence suggests that preeclampsia and later life cardiovascular disease are intimately linked. These conditions share several risk factors (i.e. black race, obesity, diabetes) and pathophysiological processes (inflammation, oxidative stress, endothelial dysfunction, and impaired angiogenesis) [40], [41]. The American Heart Association recently recognized preeclampsia as a cardiovascular disease risk factor [41]. Hp phenotype affects cardiovascular disease risk and treatment response to antioxidant vitamins in some non-pregnant populations, particularly in diabetics [18], [19], [21], [26]. Uncomplicated pregnancy is an insulin resistant state [42], and gestational diabetes [43] and Type 1 diabetes [44] increase preeclampsia risk. Therefore, we examined the potential for Hp phenotype to influence the effectiveness of vitamin C and E in preventing preeclampsia and serious complications of PAH. Daily vitamin C and E did not prevent preeclampsia or serious complications of PAH in low risk primiparous women of any Hp phenotype. Supplementation actually increased preeclampsia risk by 3-fold in Hispanic Hp 2-2 women, however the mechanism requires further investigation. This effect was not explained by the strikingly lower prevalence of vitamin supplementation at randomization in Hispanics. Hispanic women have an increased risk of the metabolic syndrome and diabetes, conditions associated with oxidative stress [45]. 
Platelet volume, a marker of accelerated platelet turn over, was increased in Hispanic women compared to white or black women in the prediction cohort [46]. Accelerated platelet turnover could indicate low-grade intravascular coagulation and subsequent mild chronic hemolysis. The resulting oxidative stress would be worse in Hp 2-2 women due to the low antioxidant capacity of this phenotype. We speculate that supplementation may have exacerbated pre-existing oxidative stress, and therefore preeclampsia risk, in Hispanic Hp 2-2 women by increasing ascorbate radical formation. A definitive explanation will be sought in future studies. Supplementation increased the risk of late onset preeclampsia, but not early onset preeclampsia, in Hp 2-2 Hispanics. This could reflect pathophysiological differences between these conditions or lower power due to fewer cases of early onset preeclampsia. Several factors could explain the difference between our results and previous research in other populations [18], [19], [21], [26]. First, cardiovascular disease studies suggest that vitamin E alone has different effects from vitamin C and E combined [18], [19], [27]. Vitamin C and E can be beneficial or harmful, depending on Hp phenotype [27]. Coronary artery diameter increased following vitamin C and E supplementation in Hp 1-1 post-menopausal women with coronary artery disease. However, the pro-oxidant effects of vitamin C [28] combined with the reduced antioxidant capacity of Hp 2-2 may have contributed to accelerated coronary artery narrowing in diabetic Hp 2-2 post-menopausal women taking vitamins C and E [21]. Oxidative stress increases in pregnancy [40], suggesting that a similar mechanism could be active in Hp 2-2 pregnant women. In partial support of this hypothesis, we observed a 3-fold increase in preeclampsia risk in Hp 2-2 Hispanic women receiving vitamin C and E. The original RCT combined vitamins C and E, an approach used in cardiovascular disease prevention trials [47], [48], [49] that was successful in small clinical trials. Therefore, we cannot determine whether vitamin E or C alone would have been efficacious. However, the absence of a beneficial treatment effect in women of any Hp phenotype makes this possibility unlikely. Second, previous studies focused primarily on Type 2 diabetics [18], [19], [21], [26], however women with pre-existing diabetes were excluded from the current study. Insulin resistance increases during pregnancy, and 3% of women in the trial developed gestational diabetes [42]. The most extensively studied mechanism by which vitamin E reduces cardiovascular disease risk in Type 2 diabetes is by preventing the HDL dysfunction caused by cross-linking with glycosylated hemoglobin [28]. Hypothesizing that a similar mechanism might be active in Type 1 diabetes, members of our study team assessed the potential for Hp phenotype to modify the effectiveness of vitamin C and E in preventing preeclampsia in women with Type 1 diabetes [30]. Unfortunately, supplementation was not effective in preventing preeclampsia in women with Type 1 diabetes of any Hp phenotype [30]. Trials in Type 2 diabetics have not been conducted. Third, despite the similarities between preeclampsia and cardiovascular disease, there are also critical differences. Preeclampsia only occurs in pregnancy, and is proposed to result from an interaction between materials released by the placenta, and maternal constitutional characteristics that are largely cardiovascular risk factors [40]. 
It may be that transient exposure to insults in young women cannot be equated to years of chronic exposure to cardiovascular risk factors in older women.\n\nBODY.DISCUSSION.CONCLUSIONS:\nIn contrast to our hypothesis, our secondary analysis of a large RCT indicates that Hp phenotype does not influence preeclampsia risk, or identify a subset of nulligravidas for whom vitamins C and E would reduce preeclampsia risk in white, black or Hispanic women. Vitamins C and E supplementation may increase preeclampsia risk in Hispanic Hp 2-2 women, but due to the limitations of secondary analysis, this finding requires verification.\n\nBODY.SUPPORTING INFORMATION:\nData S1\nSupplemental Data. Supplemental data file containing methods and results for separate analyses of prediction cohort and case-control cohort, and an appendix listing members of the Eunice Kennedy Shriver National Institute of Child Health and Human Development Maternal-Fetal Medicine Units Network.(DOC)Click here for additional data file. Table S1\nPrediction cohort subject characteristics. Values are mean ± SD or n (% within phenotype). Significant difference (p<0.05) from: *Hp 1-1, †Hp 2-1, ‡Hp 2-2.(DOC)Click here for additional data file. Table S2\nOutcomes according to Hp phenotype in the prediction cohort. *Adjusted for treatment group (vitamins vs. placebo), age, race/ethnicity, education, vitamin use prior to randomization, and diastolic blood pressure at randomization. †The primary outcome was severe PAH, or mild or severe PAH with elevated liver enzymes, elevated serum creatinine, thrombocytopenia, eclamptic seizure, fetal growth restriction, medically indicated preterm birth or perinatal death.(DOC)Click here for additional data file. Table S3\nSubject Characteristics for Subjects not in the Prediction Cohort, and in the Weighted Case-Control Cohort. Values are mean ± SD or n (%).(DOC)Click here for additional data file. Table S4\nCase-control cohort weighted subject characteristics. Values are mean ± SD or n (%). Abbreviations: NA; not applicable. Statistical comparisons for race/ethnicity were not performed, as cases and controls were matched for race/ethnicity. Significant difference (p<0.05) from: *Hp 1-1, †Hp 2-1, ‡Hp 2-2.(DOC)Click here for additional data file. Table S5\nOutcomes according to Hp phenotype and race/ethnicity in the case-control cohort. Hp 1-1 was the reference group for odds ratios. Hp phenotype was determined in 769 black women, 579 Hispanic women, and 716 women of white or other race. Each observation was weighted to reflect the pregnancy outcomes and racial distribution of the full cohort, excluding women in the prediction study (3,170 white/other women, 1,839 black women, 2,459 Hispanic women). Abbreviations: OR, odds ratio; CI, confidence interval. *Adjusted for group (treatment vs. placebo), race/ethnicity, age, education, vitamin use and diastolic blood pressure at randomization †Significantly different from 1, p<0.05.(DOC)Click here for additional data file. Table S6\nInteraction between Hp phenotype and treatment in case-control cohort. *Adjusted for vitamin use, age, education and diastolic blood pressure at randomization(DOC)Click here for additional data file. Table S7\nOdds ratios for the effect of treatment in Hispanic women in the case-control cohort. *Adjusted for vitamin use, age, education and diastolic blood pressure at randomization(DOC)Click here for additional data file. 
Table S8\nSubject Characteristics for the Original Study and the Weighted Pooled Cohort Values are mean ± SD or n (%). *Values for the 9,969 women with known pregnancy outcomes in the original study. Women who were lost to follow-up (n = 183) were excluded. Weights for this analysis were based on race/ethnicity and pregnancy outcome, therefore these women were not eligible to be selected for the pooled cohort analysis.(DOC)Click here for additional data file. Table S9\nHp phenotype prevalence by race in the weighted pooled cohort Values are n (% within race).\n(DOC)Click here for additional data file. Table S10\nSubject Characteristics for Hispanic vs. non-Hispanic Women in the Weighted Pooled Cohort. Values are mean ± SD unless otherwise indicated.(DOC)Click here for additional data file.\n\n**Question:** Compared to placebo from 9-16 weeks gestation until delivery. what was the result of daily doses of 400 IU of vitamin E and 1000 mg of vitamin C from 9-16 weeks gestation until delivery. on preeclampsia risk in Hispanic Hp 2-2 women.?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
919
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Haptoglobin Phenotype, Preeclampsia Risk and the Efficacy of Vitamin C and E Supplementation to Prevent Preeclampsia in a Racially Diverse Population\n\n ABSTRACT:\nHaptoglobin's (Hp) antioxidant and pro-angiogenic properties differ between the 1-1, 2-1, and 2-2 phenotypes. Hp phenotype affects cardiovascular disease risk and treatment response to antioxidant vitamins in some non-pregnant populations. We previously demonstrated that preeclampsia risk was doubled in white Hp 2-1 women, compared to Hp 1-1 women. Our objectives were to determine whether we could reproduce this finding in a larger cohort, and to determine whether Hp phenotype influences lack of efficacy of antioxidant vitamins in preventing preeclampsia and serious complications of pregnancy-associated hypertension (PAH). This is a secondary analysis of a randomized controlled trial in which 10,154 low-risk women received daily vitamin C and E, or placebo, from 9-16 weeks gestation until delivery. Hp phenotype was determined in the study prediction cohort (n = 2,393) and a case-control cohort (703 cases, 1,406 controls). The primary outcome was severe PAH, or mild or severe PAH with elevated liver enzymes, elevated serum creatinine, thrombocytopenia, eclampsia, fetal growth restriction, medically indicated preterm birth or perinatal death. Preeclampsia was a secondary outcome. Odds ratios were estimated by logistic regression. Sampling weights were used to reduce bias from an overrepresentation of women with preeclampsia or the primary outcome. There was no relationship between Hp phenotype and the primary outcome or preeclampsia in Hispanic, white/other or black women. Vitamin supplementation did not reduce the risk of the primary outcome or preeclampsia in women of any phenotype. Supplementation increased preeclampsia risk (odds ratio 3.30; 95% confidence interval 1.61–6.82, p<0.01) in Hispanic Hp 2-2 women. Hp phenotype does not influence preeclampsia risk, or identify a subset of women who may benefit from vitamin C and E supplementation to prevent preeclampsia.\n\nBODY.INTRODUCTION:\nPreeclampsia affects 2–7% of pregnancies [1], [2]. Diagnosed as new onset hypertension and proteinuria after 20 weeks gestation [1], [2], preeclampsia is a leading cause of maternal [3] and fetal [4] morbidity and mortality. Oxidative stress (including low antioxidant levels) and angiogenic imbalance are believed to contribute to preeclampsia [5], [6], [7]. No intervention has been proven to prevent preeclampsia. Vitamin C and E lowered preeclampsia incidence by 60% in a pilot randomized controlled trial (RCT) in high-risk women [8]. Unfortunately, this effect was not confirmed in subsequent RCTs involving high [9], [10], [11], [12], [13] or low-risk women [13], [14], [15]. 
Although the discordant effects in large and small populations are likely due to reduced power in small studies, differences might also be explained by increased heterogeneity in large multicenter studies obscuring a responsive subset of women. The phenotype of the antioxidant [16] and pro-angiogenic [17] protein haptoglobin (Hp) predicts cardiovascular disease risk, and responsiveness to Vitamin E [18], [19], [20] or C and E [21] in some non-pregnant populations. We examined whether Hp phenotype might influence preeclampsia risk, or identify a subset of women who responded to vitamin supplementation. Hp stimulates angiogenesis [17] and acts as a powerful antioxidant by clearing free hemoglobin following hemolysis [16]. Hp's two different α alleles (α1, α2) give rise to three genetically determined phenotypes (1-1, 2-1 and 2-2) [16]. Size and structural differences (Figure 1) lead to functional differences between phenotypes [16]. Hp 1-1 is a stronger antioxidant [16], whereas Hp 2-2 is more angiogenic [17]. Less is known about the function of Hp 2-1, a highly variable heterodimer which is not commercially available. Hp 2-1 individuals have no Hp 2-2, very little Hp 1-1, and large amounts of the 2-1 forms which are structurally very different from Hp 1-1 or 2-2 [16]. 10.1371/journal.pone.0060479.g001Figure 1Hp phenotypes by Native and SDS PAGE.Panel A: Hp phenotyping of Hb-supplemented serum by Native PAGE on a 6% gel. Hp 2-2 (Lane 1) is a heterodimer with a series of slow moving bands. Hp 1-1 (Lane 4) is a homodimer with one fast moving band. Hp 2-1 (Lanes 2, 3, 5) has a band between the Hb/Hp 1-1 and Hb/Hp 2-2 bands, and several slow-moving bands in the same region as the Hb/Hp 2-2 bands. Panel B: In the rare Hp 2-1 M phenotype, a modification of the Hp α2 allele leads to overproduction of the fast migrating Hp 1-1 band, relative to the slower migrating Hp 2-1 bands. This results in a stronger Hb/Hp 1-1 band, and weaker Hb/Hp 2-1 bands, than in Hp 2-1. Panel C: Hp phenotyping of serum by SDS PAGE on a 12% gel. The Hp β band is common to all phenotyes. The presence of the Hp 1 and 2 alleles is indicated by the α1 and α2 bands, respectively. We chose to examine Hp phenotype and preeclampsia for three reasons. First, Hp's antioxidant and pro-angiogenic properties depend on phenotype [17], [22], [23]. Oxidative stress and reduced angiogenesis contribute to preeclampsia [5], [6]. In our recent case-control study, Hp 2-1 was associated with increased preeclampsia risk in non-hispanic white women, but not in black women [24]. We sought to confirm these findings in a larger cohort while extending the existing literature to include Hispanics. Second, any potential effect of Hp phenotype on preeclampsia risk or treatment response would affect a significant number of women. Hp 1-1, 2-1 and 2-2 are all common in whites, blacks and Hispanics (prevalences of 16%-50%) [25]. Third, Hp phenotype predicts cardiovascular disease risk, and responsiveness to Vitamin E [18], [19], [20] or C and E [21] in other non-pregnant populations. Hp 2-2 diabetics are twice as likely to have a cardiovascular event as Hp 1-1 and 2-1 diabetics [18], [21], [26]. Vitamin E supplementation eliminates this increased risk in Hp 2-2 diabetics, but has no effect in Hp 1-1 or 2-1 diabetics [18], [19], [27]. In contrast, Vitamin C may be harmful in Hp 2-2 diabetics as oxidation of Vitamin C may exacerbate pre-existing oxidative stress [28]. 
In post-menopausal women with coronary artery disease, Vitamin C and E decreased coronary artery diameter in Hp 2-2 diabetic women [21]. This harmful effect was not universal, as Vitamin C and E benefited Hp 1-1 women by significantly increasing coronary artery diameter [21]. We sought to determine whether Hp phenotype was associated with serious complications of pregnancy-associated hypertension (PAH) or preeclampsia in white, black or Hispanic women. We also examined whether Hp phenotype might influence responsiveness to antioxidant vitamins in preventing preeclampsia. Based on our previous findings, we hypothesized that Hp 2-1 would be associated with increased preeclampsia risk in white women, but not in black women. We also posited that phenotype would affect treatment response although directionality was difficult to predict based upon data from non-pregnant populations.\n\nBODY.MATERIALS AND METHODS.POPULATION:\nThis was a secondary analysis of a Eunice Kennedy Shriver National Institute of Child Health and Human Development Maternal-Fetal Medicine Units Network double-blind, multicenter RCT (NCT 00135707) in which 10,154 nulliparous, low risk women received daily doses of 400 IU of vitamin E and 1000 mg of vitamin C, or placebo, from 9-16 weeks gestation until delivery. The trial was conducted at 16 centers in the US between 2003 and 2008. Full details of the study were reported previously [14]. The trial was conducted in accordance with the Declaration of Helsinki of the World Medical Association. All subjects provided written, informed consent before participating. Hp phenotyping was performed at the University of Pittsburgh (Institutional Review Board approval PRO10010368).\n\nBODY.MATERIALS AND METHODS.OUTCOMES:\nThe primary outcome was severe PAH, or mild or severe PAH with elevated liver enzymes (aspartate aminotransferase ≥100 U per liter), elevated serum creatinine (≥1.5 mg/dl), thrombocytopenia (<100,000 platelets/mm3), eclampsia, fetal growth restriction (<3rd centile adjusted for sex and race/ethnicity), medically indicated preterm birth before 32 weeks due to PAH, or fetal death after 20 weeks gestation, or neonatal death. Hypertension was diagnosed using blood pressure measurements obtained during or after 20 weeks gestation, excluding intraoperative and intrapartum systolic blood pressures. Severe PAH was defined as a systolic blood pressure of 160 mm Hg or more, or a diastolic blood pressure of 110 mm Hg or more on two occasions 2 to 240 hours apart, or a single severely elevated blood pressure measurement that led to treatment with antihypertensive medication. Mild PAH was defined as systolic pressure between 140 and 159 mm Hg or a diastolic pressure between 90 mm Hg and 109 mm Hg on two occasions 2 to 240 hours apart. Mild preeclampsia was defined as mild PAH with proteinuria documented within the 3 days preceding or following the elevated blood pressure measurement. Proteinuria was defined as protein excretion ≥300 mg/24 hours, a dipstick urinary protein ≥2+, or protein:creatinine ≥0.35 in the absence of a 24-hour urine sample. Early onset preeclampsia was defined as preeclampsia requiring delivery before 37 weeks gestation. 
Severe preeclampsia was defined as preeclampsia with either severe PAH or protein excretion 35 g/24 hours, or as mild PAH with oliguria (<500 ml urine/24 hours), pulmonary edema (confirmed by radiography), or thrombocytopenia (<100,000 platelets/mm3).\n\nBODY.MATERIALS AND METHODS.DESIGN:\nThe RCT included 10,154 women, of whom 2,434 were enrolled in a nested prediction cohort. The RCT design specified that secondary analyses requiring blood be conducted in the prediction cohort, as these women consented to extra blood collection for prediction and pathophysiology studies. If additional subjects were required, investigators could obtain samples from the 7,720 women not in the prediction cohort, in whom very little blood was collected. We needed samples from all white, black, and Hispanic women with preeclampsia in the RCT to detect a 2-fold increase in preeclampsia risk in Hp 2-1, compared to Hp 1-1, in each racial group (α = 0.05; Power: 83% white, 96% black, 98% Hispanic). We determined Hp phenotype in the prediction cohort; then added a case-control cohort composed of women who were not in the prediction cohort. The case-control cohort allowed us to capture all cases without phenotyping 5,500 additional controls. Cases in the case-control cohort included all women with preeclampsia, the primary outcome, or both (n = 703 cases) who were not in the prediction cohort. We selected 2 controls per case (n = 1,406) [29], matched for center and race/ethnicity (Figure 2). 10.1371/journal.pone.0060479.g002Figure 2Study Flow Chart.Subjects lost to follow-up include 183 women who miscarried, one subject whose data were removed at the patient's request, and one subject whose data were removed at the institutional review board's request.\n\nBODY.MATERIALS AND METHODS.HP PHENOTYPING:\nHp phenotyping was performed as described previously [24], [30]. Serum samples (5 μl) supplemented with human hemoglobin (Sigma-Aldrich, St. Louis, MO) were phenotyped by hemoglobin-Hp complex peroxidase activity on native polyacrylamide gel electrophoresis (PAGE). Hemolyzed samples and samples with low Hp concentrations were phenotyped by Western blot following sodium dodecyl sulfate (SDS) PAGE of 1 to 6 μl of serum. Proteins from native and SDS gels were transferred to a polyvinylidene fluoride (PVDF) membrane (Millipore, Billerica, MA). SDS PAGE was followed by Western blotting with primary (1∶5,000, Polyclonal Rabbit Anti-Human Haptoglobin, DakoCytomation, Carpinteria, CA) and secondary (1∶25,000, Goat anti-Rabbit IgG horseradish peroxidase, Millipore) antibodies. Membranes were stained for peroxidase activity (SuperSignal West Pico Chemiluminescent Substrate, Fisher Scientific, Pittsburgh, PA) and imaged (FlouroChem Q System, Cell Biosciences, Santa Clara, CA). Hp phenotypes were identified by banding patterns of the Hp-hemoglobin complexes (native PAGE, Figure 1) or the Hp α1 and α2 alleles (SDS PAGE).\n\nBODY.MATERIALS AND METHODS.PLASMA ASCORBATE:\nBlood was collected in vials containing metaphosphoric acid, and frozen at −80 C [31]. Plasma ascorbic acid was measured by high performance liquid chromatography in the Molecular Epidemiology and Biomarker Laboratory (University of Minnesota) as described previously [31].\n\nBODY.MATERIALS AND METHODS.STATISTICAL ANALYSIS:\nThe prediction and case-control cohorts were analyzed separately (statistical methods in Data S1); then pooled to maximize power. 
In the pooled analysis, sampling weights were applied to reduce potential bias arising from an overrepresentation of women with the primary outcome (but not preeclampsia) in the control group for preeclampsia analyses, and an oversampling of women with preeclampsia (but not the primary outcome) in the control group for primary outcome analyses. Patients from the prediction cohort were assigned a weight of one. Cases and controls were assigned a sample weight defined as the inverse of their probability of selection [32]. Sample weights included a post-stratification adjustment for race/ethnicity. Baseline characteristics were compared using the t-test for continuous variables and the Rao-Scott chi-square test [33] for categorical variables. Logistic regression models were constructed to assess the multivariable association between Hp phenotype and the incidence of each outcome, adjusting for variables that were associated with Hp phenotype in univariate analyses. Odds ratios and 95% confidence intervals were estimated from unconditional survey logistic regression models. Taylor series replicate weights were created to estimate standard errors and p values in the weighted analyses. Model covariates included randomization group, and demographic variables that differed between women with the Hp 1-1, 2-1 and 2-2 phenotypes (age, self-reported vitamin use, education level, diastolic blood pressure at screening). The weighted analysis gave similar results to an unweighted analysis (i.e. samples were not weighted by the post-stratification sample weights; data not shown). Results were considered to be statistically significant at α = 0.05 with no adjustment for multiple comparisons [34]. All p values are 2-sided. SAS Version 9.2 (SAS Institute Inc.) and STATA (StataCorp, 2011 Stata Statistical Software: Release 12, College Station, TX, StataCorp LP), using procedures appropriate for sample-weighted data, were used for all analyses.\n\nBODY.RESULTS.COHORTS:\nWe phenotyped 2,393 of the 2,434 women in the prediction cohort (Figure 2), excluding 40 women who miscarried, and one who had no samples collected. The case-control cohort included all remaining subjects who developed the primary outcome only (severe PAH, or mild or severe PAH with elevated liver enzymes, elevated serum creatinine, thrombocytopenia, eclampsia, fetal growth restriction, medically indicated preterm birth or perinatal death) (n = 190), preeclampsia only (n = 280), or the primary outcome and preeclampsia (n = 233), and 1,406 controls matched for center and race/ethnicity. Women in the prediction cohort entered the study at an earlier gestational age (11.4±1.1) than women in the case-control cohort (14.0±2.0). Results of separate analyses for the prediction (Data S1; Tables S1, S2) and case-control cohorts (Data S1; Tables S3, S4, S5, S6, S7) are presented in Data S1. Results for the pooled cohort are described below.\n\nBODY.RESULTS.SUBJECTS:\nThe pooled dataset included 1,008 Hp 1-1, 2,129 Hp 2-1 and 1,261 Hp 2-2 women. Ninety-six women had the rare 2-1 M phenotype, which results from a modification of the α2 allele [16]. These women were excluded from subsequent analyses. There were 1,840 white women, 1,392 black women, 1,188 Hispanic women, and 74 women of \"Other\" race.\n\nBODY.RESULTS.WEIGHTING:\nThe weighted sample included 2,201 Hp 1-1 women, 4,840 Hp 2-1 women, 2,760 Hp 2-2 women, and 146 Hp 2-1 M women. The weighted cohort accurately reproduced the demographic characteristics of the original sample (Table S8). 
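The sample-weighted logistic regression described in the statistical analysis above was run in SAS 9.2 and Stata 12 survey procedures, and none of that code appears in the article. As a loose illustration only, a minimal Python sketch of an inverse-probability-weighted logistic regression reporting odds ratios with 95% confidence intervals might look like the following; all variable names and data are fabricated, and simple frequency weighting is not a substitute for Taylor-series survey standard errors.

```python
# Minimal sketch with fabricated data; NOT the trial's SAS/Stata survey analysis.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
hp = rng.choice(["1-1", "2-1", "2-2"], size=n, p=[0.25, 0.45, 0.30])
df = pd.DataFrame({
    "preeclampsia": rng.integers(0, 2, n),        # 0/1 outcome (toy values)
    "hp_2_1": (hp == "2-1").astype(int),          # Hp 1-1 is the reference group
    "hp_2_2": (hp == "2-2").astype(int),
    "treatment": rng.integers(0, 2, n),           # vitamins vs. placebo
    "age": rng.normal(24, 5, n),
    "weight": rng.uniform(1, 5, n),               # inverse-probability sampling weight
})

X = sm.add_constant(df[["hp_2_1", "hp_2_2", "treatment", "age"]])
fit = sm.GLM(df["preeclampsia"], X,
             family=sm.families.Binomial(),
             freq_weights=df["weight"]).fit()     # crude weighting, for illustration only

summary = pd.concat([np.exp(fit.params).rename("OR"),
                     np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})],
                    axis=1)
print(summary)
```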
Demographic characteristics in the weighted cohort were not significantly different between the placebo and vitamin groups.\n\nBODY.RESULTS.SUBJECT CHARACTERISTICS:\nAs expected, race/ethnicity differed between women with the Hp 1-1, 2-1, 2-2 and 2-1 M phenotypes (Table 1). The phenotype distribution for each race/ethnicity was similar to previous reports (Table 1, S9) [25]. Hp phenotype groups did not differ with respect to pre-pregnancy BMI, blood pressure at randomization, smoking, history of a previous pregnancy, or family history of preeclampsia (Table 1). Subsequent analyses were adjusted for characteristics that differed between the Hp 1-1, 2-1 and 2-2 groups (race/ethnicity, age, education, vitamin use). Analyses were also adjusted for diastolic blood pressure at randomization, which differed between phenotype groups in the prediction cohort (Table S10) but not in the pooled cohort. The white and other groups were pooled for race/ethnicity adjustments, as phenotype prevalence in the other group was most similar to whites (Table S9). 10.1371/journal.pone.0060479.t001 Table 1 Subject Characteristics for the Weighted Pooled Cohort by Hp phenotype. Subject Characteristics Hp 1-1 (n = 2,201) Hp 2-1 (n = 4,840) Hp 2-2 (n = 2,760) Hp 2-1 M (n = 161) p Age – years 22.4±4.9 23.6±5.1* 24.0±5.1* 20.2±3.7* ,†,‡ \n <0.01 Gestational age at randomization – week 13.4±2.1 13.3±2.0 13.4±2.1 13.0±2.5 0.48 Race or ethnicity - % within phenotype NA White 618 (28%) 2,142 (44%) 1,408 (51%) 3 (2%) Black 779 (35%) 1,059 (22%) 542 (20%) 132 (82%) Hispanic 779 (35%) 1,555 (32%) 724 (26%) 25 (6%) Other 25 (1%) 84 (2%) 86 (3%) 1 (<1%) Pre-pregnancy body mass index - kg/m 2 \n 25.3±6.1 24.9±5.4 25.0±5.6 26.6±8.7 0.20 Smoked during pregnancy – n (%) 352 (16%) 721 (15%) 515 (19%) 23 (15%) 0.12 Education - years 12.2±2.9 12.7±2.7* 13.1±2.6* ,† \n 12.0±2.0 †,‡ \n <0.01 Prenatal/multivitamin use prior to randomization – n (%) 1,517 (69%) 3,759 (78%)* 2,194 (79%)* 132 (82%)* <0.01 Previous pregnancy – n (%) 520 (24%) 1,084 (22%) 731 (26%) 33 (21%) 0.16 Family history of preeclampsia – n (%) 319 (14%) 634 (13%) 341 (12%) 18 (11%) 0.64 Blood pressure at entry (9-12 weeks) Systolic – mm Hg 108±10 108±10 108±10 110±9 0.23 Diastolic – mm Hg 65±8 65±8 65±8 64±9 0.25 Values are mean ± SD or n (%). Abbreviations: NA; not applicable. Statistical comparisons for race/ethnicity were not performed, as cases and controls were matched for race/ethnicity. Significant difference (p<0.05) from: *Hp 1-1, † Hp 2-1, ‡ Hp 2-2. \n\nBODY.RESULTS.PRIMARY OUTCOME AND PREECLAMPSIA RISK:\nHp phenotype was not significantly associated with the risk of the primary outcome, preeclampsia, severe preeclampsia, or early onset preeclampsia in white/other, Hispanic or black women (Table 2). Women of \"other\" race comprised 4% of the white/other group, and results were not different when these women were excluded (data not shown). The relationship between Hp phenotype on early onset preeclampsia approached statistical significance (p = 0.06) in Hispanics. 10.1371/journal.pone.0060479.t002 Table 2 Outcomes according to race/ethnicity and Hp phenotype in the weighted pooled cohort. 
Outcome Weighted Cases (n) Hp 2-1 † OR (95% CI) Hp 2-2 † OR (95% CI) White/Other Primary Outcome * \n 216 0.87 (0.56, 1.35) 0.92 (0.58, 1.45) Preeclampsia 222 0.98 (0.62, 1.54) 1.18 (0.74, 1.88) Severe Preeclampsia 90 1.30 (0.65, 2.62) 1.31 (0.64, 2.69) Early Onset Preeclampsia 58 0.96 (0.41, 2.22) 1.40 (0.61, 3.22) Late Onset Preeclampsia 163 0.98 (0.59, 1.64) 1.10 (0.65, 1.87) Black Primary Outcome 243 1.26 (0.90, 1.76) 1.00 (0.66, 1.51) Preeclampsia 237 1.03 (0.74, 1.45) 0.92 (0.61, 1.38) Severe Preeclampsia 115 1.14 (0.73, 1.80) 0.88 (0.49, 1.55) Early Onset Preeclampsia 74 1.18 (0.67, 2.07) 0.95 (0.48, 1.88) Late Onset Preeclampsia 163 0.97 (0.66, 1.44) 0.90 (0.56, 1.46) Hispanic Primary Outcome 116 0.79 (0.49, 1.29) 0.85 (0.47, 1.54) Preeclampsia 217 0.77 (0.53, 1.12) 0.74 (0.46, 1.19) Severe Preeclampsia 76 0.71 (0.40, 1.24) 0.67 (0.32, 1.40) Early Onset Preeclampsia 42 0.41 (0.20, 0.86) 0.53 (0.21, 1.37) Late Onset Preeclampsia 175 0.90 (0.60, 1.37) 0.82 (0.49, 1.39) Hp 1-1 was the reference group for odds ratios. Hp phenotype was determined in 1,573 black women, 1,164 Hispanic women, and 1,432 women of white or other race. Each observation was weighted to reflect the pregnancy outcomes and racial distribution of the full cohort. * The primary outcome was severe PAH, or mild or severe PAH with elevated liver enzymes, elevated serum creatinine, thrombocytopenia, eclampsia, fetal growth restriction, medically indicated preterm birth or perinatal death. Abbreviations: OR, odds ratio; CI, confidence interval. † Adjusted for group (treatment vs. placebo), race/ethnicity, age, education, vitamin use and diastolic blood pressure at randomization. \n\nBODY.RESULTS.TREATMENT RESPONSE:\nVitamin C and E supplementation did not prevent the primary outcome or any form of preeclampsia among white/other, black or Hispanic women of any Hp phenotype after adjusting for vitamin use, age, education, and diastolic blood pressure at randomization (data not shown). In Hispanics, the effect of vitamin supplementation differed between Hp phenotype groups for preeclampsia (Hp phenotype*treatment group: p<0.01) and late onset preeclampsia (p = 0.02), but not for severe (p = 0.06) or early onset preeclampsia (p = 0.10). Vitamin supplementation significantly increased preeclampsia (odds ratio (OR) 3.30; 95% confidence interval (CI) 1.61-6.82, p<0.01, Table 3) and late onset preeclampsia risk (3.01; 1.38-6.57, p<0.01) in Hispanic Hp 2-2 women. This adverse effect was not seen in Hp 1-1 or 2-1 Hispanics (p>0.05), or in white/other or black women of any phenotype. 10.1371/journal.pone.0060479.t003 Table 3 Odds ratios for effect of treatment in Hispanic women in the weighted pooled cohort. Outcome Phenotype OR (95% CI) * \n p * \n Preeclampsia Hp 1-1 0.71 (0.39, 1.29) 0.27 Hp 2-1 0.95 (0.61, 1.50) 0.83 Hp 2-2 3.30 (1.61, 6.82) <0.01 Late Onset Preeclampsia Hp 1-1 0.70 (0.36, 1.35) 0.29 Hp 2-1 1.02 (0.63, 1.66) 0.92 Hp 2-2 3.01 (1.38, 6.57) <0.01 * Adjusted for vitamin use, age, education and diastolic blood pressure at randomization. Increased preeclampsia risk in Hispanic Hp 2-2 women was specific to the large quantities of vitamins C and E in study supplements. Prenatal/multivitamin use at randomization was not associated with an increased risk of preeclampsia (2.17; 0.53-8.71, p = 0.27) or late onset preeclampsia (2.85; 0.57–14.36, p = 0.20) in Hispanic Hp 2-2 women receiving placebo. Low rates of prenatal/multivitamin use at randomization (43% in Hispanics vs. 
91% in non-Hispanics, Table S10) are unlikely to explain why the adverse effect of supplementation in Hp 2-2 women was only seen in Hispanics. Hispanic 2-2 women who were not taking prenatal vitamins at randomization had a smaller increase in preeclampsia risk with supplementation than Hispanic 2-2 women who were taking prenatal vitamins at randomization (2.01; 0.80–5.01, p = 0.14 vs. 7.83; 2.01–30.53, p<0.01, Table 4). 10.1371/journal.pone.0060479.t004 Table 4 Odds ratios for treatment effect in Hispanic Hp 2-2 women according to prenatal/multivitamin use at randomization. Outcome Vitamin Use at Randomization OR (95% CI) * \n p * \n Preeclampsia Yes 7.83 (2.01, 30.53) <0.01 No 2.01 (0.80, 5.01) 0.14 Late Onset Preeclampsia Yes 8.23 (1.64, 41.28) 0.01 No 1.91 (0.73, 5.00) 0.18 * Adjusted for age, education and diastolic blood pressure at randomization. \n\nBODY.RESULTS.HP PHENOTYPE, PRENATAL/MULTIVITAMIN USE AT RANDOMIZATION, AND PLASMA ASCORBATE:\nHp phenotype influences plasma ascorbate concentration [35], providing a potential mechanism by which Hp phenotype might influence treatment response to vitamin C and E. We examined the relationship between Hp phenotype and plasma ascorbate in a pre-planned analysis. Plasma ascorbate was measured at the pre-randomization screening visit in 264 women with known Hp phenotypes. Plasma ascorbate was lower in women who were not taking a prenatal/multivitamin at randomization (n = 34) compared to those who were (n = 230) (p = 0.02 for main effect, Table 5). Statistical comparisons within phenotype groups were not conducted, as the number of women who were not taking prenatal/multivitamins at randomization within each phenotype group was very small. Hp phenotype was not significantly associated with plasma ascorbate (p = 0.73). The interaction between Hp phenotype and vitamin use at randomization was not significant (p = 0.32). Plasma ascorbate was only measured in six Hispanic Hp 2-2 women; therefore we could not determine whether supplementation had a different effect on plasma ascorbate in this group. 10.1371/journal.pone.0060479.t005 Table 5 Plasma ascorbate concentrations according to Hp phenotype and prenatal/multivitamin use at randomization. Hp Phenotype Not Taking Prenatal/multivitaminsat Randomization Taking Prenatal/multivitaminsat Randomization All 8.4 (7.2, 9.5) (n = 34) 9.9 (9.4, 10.3) (n = 230)* 1-1 9.3 (7.4, 11.3) (n = 12) 9.6 (8.6, 10.5) (n = 48) 2-1 8.0 (6.3, 9.6) (n = 16) 9.8 (9.2, 10.4) (n = 126) 2-2 7.4 (4.7, 10.1) (n = 6) 10.2 (9.3, 11.1) (n = 56) Plasma ascorbate at the pre-randomization study visit was measured in a subset of 264 patients with known Hp phenotype. The vitamin and placebo groups were pooled, as study treatment had not yet started. Women not taking a prenatal vitamin at randomization had lower plasma ascorbate than women taking prenatal/multivitamins at randomization (*p = 0.02). Statistical comparisons within phenotype group were not conducted due to the small number of women not taking prenatal/multivitamins at randomization. Values are mean (95% confidence interval) in mg/L. \n\nBODY.DISCUSSION:\nHp phenotype was not associated with the risk of the primary outcome (a composite of PAH and serious adverse outcomes in the mother, her fetus or neonate) or any type of preeclampsia in white, black or Hispanic women. Moreover, daily vitamin C and E supplementation did not prevent the primary outcome or preeclampsia in women of any Hp phenotype. 
Supplementation increased the risk of preeclampsia and late onset preeclampsia by 3-fold in Hispanic Hp 2-2 women.\n\nBODY.DISCUSSION.HP PHENOTYPE AND THE RISK OF PREECLAMPSIA AND SERIOUS COMPLICATIONS OF PAH:\nWe found no relationship between Hp phenotype and the primary outcome or preeclampsia risk in white, black or Hispanic women. This study has two key strengths. First, it extends the existing literature by including large numbers of black and Hispanic women. Previous studies focused on predominantly white, non-Hispanic women [24],[36],[37],[38]. Studies examining the relationship between Hp phenotype and preeclampsia risk in different racial groups were recently recommended [39]. Second, this is the largest study of non-Hispanic white women completed to date, and was adequately powered to detect the difference that we observed in our previous case-control study [24]. This is particularly important, as the heterogeneous results and small sample sizes of previous European studies suggests that spurious findings may be common. At the time we initiated this investigation, two small, underpowered European studies had reported that preeclampsia risk was not different [38] or increased [36] in Hp 1-1 white women. Adequately powered studies [39], and studies in North American women, were needed. An Israeli study subsequently reported a 50% reduction in preeclampsia risk among Hp 1-1 women, compared to Hp 2-1 and 2-2 [24], [30], [37]. Our larger case-control study suggested that Hp 1-1 was only protective when compared to Hp 2-1, and not Hp 2-2 [24]. Unfortunately, we were unable to confirm this finding in the present cohort. However, we did confirm our previous preliminary data [24] suggesting that there was no relationship between Hp phenotype and preeclampsia risk in black women in a large adequately powered cohort. The trend (p = 0.06) towards an increased risk of early onset preeclampsia in Hp 2-1 and 2-2 Hispanics was likely a spurious result due to the small number of cases (n = 42).\n\nBODY.DISCUSSION.HP PHENOTYPE AND VITAMIN C AND E:\nAccumulating evidence suggests that preeclampsia and later life cardiovascular disease are intimately linked. These conditions share several risk factors (i.e. black race, obesity, diabetes) and pathophysiological processes (inflammation, oxidative stress, endothelial dysfunction, and impaired angiogenesis) [40], [41]. The American Heart Association recently recognized preeclampsia as a cardiovascular disease risk factor [41]. Hp phenotype affects cardiovascular disease risk and treatment response to antioxidant vitamins in some non-pregnant populations, particularly in diabetics [18], [19], [21], [26]. Uncomplicated pregnancy is an insulin resistant state [42], and gestational diabetes [43] and Type 1 diabetes [44] increase preeclampsia risk. Therefore, we examined the potential for Hp phenotype to influence the effectiveness of vitamin C and E in preventing preeclampsia and serious complications of PAH. Daily vitamin C and E did not prevent preeclampsia or serious complications of PAH in low risk primiparous women of any Hp phenotype. Supplementation actually increased preeclampsia risk by 3-fold in Hispanic Hp 2-2 women, however the mechanism requires further investigation. This effect was not explained by the strikingly lower prevalence of vitamin supplementation at randomization in Hispanics. Hispanic women have an increased risk of the metabolic syndrome and diabetes, conditions associated with oxidative stress [45]. 
Platelet volume, a marker of accelerated platelet turn over, was increased in Hispanic women compared to white or black women in the prediction cohort [46]. Accelerated platelet turnover could indicate low-grade intravascular coagulation and subsequent mild chronic hemolysis. The resulting oxidative stress would be worse in Hp 2-2 women due to the low antioxidant capacity of this phenotype. We speculate that supplementation may have exacerbated pre-existing oxidative stress, and therefore preeclampsia risk, in Hispanic Hp 2-2 women by increasing ascorbate radical formation. A definitive explanation will be sought in future studies. Supplementation increased the risk of late onset preeclampsia, but not early onset preeclampsia, in Hp 2-2 Hispanics. This could reflect pathophysiological differences between these conditions or lower power due to fewer cases of early onset preeclampsia. Several factors could explain the difference between our results and previous research in other populations [18], [19], [21], [26]. First, cardiovascular disease studies suggest that vitamin E alone has different effects from vitamin C and E combined [18], [19], [27]. Vitamin C and E can be beneficial or harmful, depending on Hp phenotype [27]. Coronary artery diameter increased following vitamin C and E supplementation in Hp 1-1 post-menopausal women with coronary artery disease. However, the pro-oxidant effects of vitamin C [28] combined with the reduced antioxidant capacity of Hp 2-2 may have contributed to accelerated coronary artery narrowing in diabetic Hp 2-2 post-menopausal women taking vitamins C and E [21]. Oxidative stress increases in pregnancy [40], suggesting that a similar mechanism could be active in Hp 2-2 pregnant women. In partial support of this hypothesis, we observed a 3-fold increase in preeclampsia risk in Hp 2-2 Hispanic women receiving vitamin C and E. The original RCT combined vitamins C and E, an approach used in cardiovascular disease prevention trials [47], [48], [49] that was successful in small clinical trials. Therefore, we cannot determine whether vitamin E or C alone would have been efficacious. However, the absence of a beneficial treatment effect in women of any Hp phenotype makes this possibility unlikely. Second, previous studies focused primarily on Type 2 diabetics [18], [19], [21], [26], however women with pre-existing diabetes were excluded from the current study. Insulin resistance increases during pregnancy, and 3% of women in the trial developed gestational diabetes [42]. The most extensively studied mechanism by which vitamin E reduces cardiovascular disease risk in Type 2 diabetes is by preventing the HDL dysfunction caused by cross-linking with glycosylated hemoglobin [28]. Hypothesizing that a similar mechanism might be active in Type 1 diabetes, members of our study team assessed the potential for Hp phenotype to modify the effectiveness of vitamin C and E in preventing preeclampsia in women with Type 1 diabetes [30]. Unfortunately, supplementation was not effective in preventing preeclampsia in women with Type 1 diabetes of any Hp phenotype [30]. Trials in Type 2 diabetics have not been conducted. Third, despite the similarities between preeclampsia and cardiovascular disease, there are also critical differences. Preeclampsia only occurs in pregnancy, and is proposed to result from an interaction between materials released by the placenta, and maternal constitutional characteristics that are largely cardiovascular risk factors [40]. 
It may be that transient exposure to insults in young women cannot be equated to years of chronic exposure to cardiovascular risk factors in older women.\n\nBODY.DISCUSSION.CONCLUSIONS:\nIn contrast to our hypothesis, our secondary analysis of a large RCT indicates that Hp phenotype does not influence preeclampsia risk, or identify a subset of nulligravidas for whom vitamins C and E would reduce preeclampsia risk in white, black or Hispanic women. Vitamins C and E supplementation may increase preeclampsia risk in Hispanic Hp 2-2 women, but due to the limitations of secondary analysis, this finding requires verification.\n\nBODY.SUPPORTING INFORMATION:\nData S1\nSupplemental Data. Supplemental data file containing methods and results for separate analyses of prediction cohort and case-control cohort, and an appendix listing members of the Eunice Kennedy Shriver National Institute of Child Health and Human Development Maternal-Fetal Medicine Units Network.(DOC)Click here for additional data file. Table S1\nPrediction cohort subject characteristics. Values are mean ± SD or n (% within phenotype). Significant difference (p<0.05) from: *Hp 1-1, †Hp 2-1, ‡Hp 2-2.(DOC)Click here for additional data file. Table S2\nOutcomes according to Hp phenotype in the prediction cohort. *Adjusted for treatment group (vitamins vs. placebo), age, race/ethnicity, education, vitamin use prior to randomization, and diastolic blood pressure at randomization. †The primary outcome was severe PAH, or mild or severe PAH with elevated liver enzymes, elevated serum creatinine, thrombocytopenia, eclamptic seizure, fetal growth restriction, medically indicated preterm birth or perinatal death.(DOC)Click here for additional data file. Table S3\nSubject Characteristics for Subjects not in the Prediction Cohort, and in the Weighted Case-Control Cohort. Values are mean ± SD or n (%).(DOC)Click here for additional data file. Table S4\nCase-control cohort weighted subject characteristics. Values are mean ± SD or n (%). Abbreviations: NA; not applicable. Statistical comparisons for race/ethnicity were not performed, as cases and controls were matched for race/ethnicity. Significant difference (p<0.05) from: *Hp 1-1, †Hp 2-1, ‡Hp 2-2.(DOC)Click here for additional data file. Table S5\nOutcomes according to Hp phenotype and race/ethnicity in the case-control cohort. Hp 1-1 was the reference group for odds ratios. Hp phenotype was determined in 769 black women, 579 Hispanic women, and 716 women of white or other race. Each observation was weighted to reflect the pregnancy outcomes and racial distribution of the full cohort, excluding women in the prediction study (3,170 white/other women, 1,839 black women, 2,459 Hispanic women). Abbreviations: OR, odds ratio; CI, confidence interval. *Adjusted for group (treatment vs. placebo), race/ethnicity, age, education, vitamin use and diastolic blood pressure at randomization †Significantly different from 1, p<0.05.(DOC)Click here for additional data file. Table S6\nInteraction between Hp phenotype and treatment in case-control cohort. *Adjusted for vitamin use, age, education and diastolic blood pressure at randomization(DOC)Click here for additional data file. Table S7\nOdds ratios for the effect of treatment in Hispanic women in the case-control cohort. *Adjusted for vitamin use, age, education and diastolic blood pressure at randomization(DOC)Click here for additional data file. 
Table S8\nSubject Characteristics for the Original Study and the Weighted Pooled Cohort Values are mean ± SD or n (%). *Values for the 9,969 women with known pregnancy outcomes in the original study. Women who were lost to follow-up (n = 183) were excluded. Weights for this analysis were based on race/ethnicity and pregnancy outcome, therefore these women were not eligible to be selected for the pooled cohort analysis.(DOC)Click here for additional data file. Table S9\nHp phenotype prevalence by race in the weighted pooled cohort Values are n (% within race).\n(DOC)Click here for additional data file. Table S10\nSubject Characteristics for Hispanic vs. non-Hispanic Women in the Weighted Pooled Cohort. Values are mean ± SD unless otherwise indicated.(DOC)Click here for additional data file.\n\n**Question:** Compared to Haptoglobin's (Hp) 2-2 phenotypes what was the result of Haptoglobin's (Hp) 1-1, 2-1, on preeclampsia risk?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
941
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: The Effect of Yellow Tinted Intraocular Lenses on the Result of Frequency Doubling Perimetry after Cataract Surgery\n\n ABSTRACT.PURPOSE:\nTo investigate the effect of yellow tinted intraocular lenses (IOLs), implanted in the bag after phacoemulsification, on the result of frequency doubling technique (FDT) perimetry.\n\nABSTRACT.METHODS:\nFor 68 eyes of 52 patients, an IOL was implanted in the bag after phacoemulsification. The patients were categorized into three groups according to the type of implanted IOLs used. IOLs were selected randomly among two types of yellow IOLs (Acrysof SN60WF IOL, Hoya YA60BBR IOL) and a clear IOL was used as a control. A FDT Humphrey matrix was performed 2 months after the surgery. The mean deviation (MD) and pattern standard deviation (PSD) among these three groups was analyzed using Mann-Whitney U-test.\n\nABSTRACT.RESULTS:\nTwo months after the procedure, there was no significant difference between each of the three groups: the clear IOL and Hoya YA60BBR IOL (MD, p = 0.21; PSD, p = 0.27), the clear IOL and Alcon SN60WF IOL (MD, p = 0.11; PSD, p = 0.22), and the Hoya YA60BBR IOL and Alcon SN60WF IOL (MD, p = 0.33; PSD, p = 0.56).\n\nABSTRACT.CONCLUSIONS:\nWhen interpreting the result of the FDT after cataract surgery, the color and type of IOLs used should not be considered.\n\nBODY:\nThe cornea absorbs light with a wavelength below 295 nm and the crystalline lens absorbs light with a wavelength below 400 nm. Therefore, the human retina is protected from short-wavelength light. After cataract surgery with the implantation of a clear intraocular lens (IOL), the retina is now exposed to short-wavelength light. Consequently, yellow intraocular lenses have emerged as a blue blocker. From recent studies, it has been revealed that blue-light filtering IOLs may have protective effects against exudative age related macular degeneration [1]. Frequency doubling technique (FDT) perimetry is a well-known procedure used for glaucoma screening. Threshold tests potentially may identify glaucomatous defects sooner than other methods. The FDT stimulus predominantly stimulates the Magnocellular pathway, which is primarily involved in motion and flicker detection and represents about 10% of all retinal ganglion cells. The FDT perimeter (Welch Allyn, Skaneateles Falls, NY; Humphrey Instruments, San Leandro, CA, USA) uses a vertical sine wave grating of low spatial frequency (0.25-0.50 cycle/degree) that undergoes counterphase flickering at a high-temporal frequency (12-25 Hz). The contrast of the stimulus is modified in each location to calculate threshold sensitivity [2]. Some reports demonstrated the effect of cataracts on FDT perimetry, suggesting a significant improvement in mean deviation (MD) without a significant change in pattern standard deviation (PSD) after cataract surgery with clear IOLs [3,4]. 
In this study, we compared the effect of IOLs between each of the three groups (the clear IOL, Alcon Acrysof IQ SN60WF [Alcon, Fort Worth, TX, USA], and Hoya YA60BBR [Hoya Co., Tokyo, Japan]) with the result of FDT perimetry.\n\nBODY.MATERIALS AND METHODS:\nWe prospectively included 68 eyes of 52 patients who underwent cataract surgery and intraocular lens implantation with otherwise normal eye findings. Informed consent was obtained from each patient before enrollment in the study. All the procedures conformed to the tenets of the Declaration of Helsinki. Fifty-two patients had cataract surgery under similar preoperative conditions using the same phacoemulsification technique with the Infinity machine and Ozil system (Alcon). All the surgeries were performed without complications by the same surgeon (CKJ). The operated eye was randomized to receive an AcrySof IQ IOL (Alcon), a Hoya YA60BBR IOL, or a conventional clear IOL (OII Biovue3; BioVue, Ontario, CA, USA) in the capsular bag. Postoperatively, all patients were given gatifloxacin 0.3% (Gatiflo, Handok pharmaceuticals, Seoul, Korea) and prednisolone acetate 1% (Pred Forte; Allergan, Irvine, CA, USA), 4 times a day for 30 days. After 30 days, the application of the two eyedrops was decreased to twice a day for 30 days. Patients with diabetes mellitus, poor cooperation, glaucoma (diagnosed by intraocular pressure, visual field exam, optic nerve morphology, and retinal nerve fiber layer findings), complicated cataracts and any negative events resulting from cataract surgery were excluded from the study. Routine postoperative ocular evaluations (visual acuity, refraction, IOP measurements, etc.) were carried out at each visit to avoid the bias of low visual acuity on perimetry. FDT perimetry was performed with an FDT Humphrey matrix (Carl Zeiss Meditec, Dublin, CA, USA). The FDT 24-2 threshold test was performed following the same procedure for eyes with clear or yellow IOL implantations with the refractive error fully corrected, two months after the operation. The global indices obtained by FDT were MD and PSD. The data were analyzed with the nonparametric Mann-Whitney U-test or Kruskal-Wallis test using a 5% significance level by SPSS ver. 13.0 (SPSS Inc., Chicago, IL, USA).\n\nBODY.RESULTS:\nA total of 52 people were enrolled in the study (28 men and 24 women). The mean age was 63.2 ± 11.5 years old (range, 52 to 78 years). Demographics are shown in Table 1. There were no significant differences among the three IOL groups with respect to visual acuity and spherical equivalence in the preoperative and postoperative phase (Table 2). The results of FDT perimetry testing are shown in Figs. 1, 2, and 3. There were no statistically significant differences for MD and PSD between the clear IOL and Hoya YA60BBR IOL groups (MD: clear IOL, -5.09 ± 4.72 dB; Hoya YA60BBR IOL, -3.68 ± 3.83 dB; p = 0.21 / PSD: clear IOL, 4.26 ± 2.45 dB; Hoya YA60BBR IOL, 3.81 ± 1.99 dB; p = 0.27, Mann-Whitney U-test). There were no statistically significant differences for MD and PSD between the clear IOL and Alcon SN60WF IOL groups (MD: clear IOL, -5.09 ± 4.72 dB; Alcon SN60WF IOL, -2.47 ± 2.61 dB; p = 0.11 / PSD: clear IOL, 4.26 ± 2.45 dB; Alcon SN60WF IOL, 3.28 ± 1.23 dB; p = 0.22, Mann-Whitney U-test). 
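The group comparisons above were run in SPSS 13.0, and the study's code is not shown. As a minimal sketch only, with invented mean deviation (MD) values rather than the study data, a Mann-Whitney U-test of the kind reported could be run as follows.

```python
# Minimal sketch, assuming invented MD values in dB; the study itself used SPSS 13.0,
# so this only illustrates the type of two-group comparison reported.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
md_clear = rng.normal(-5.1, 4.7, size=25)    # toy MD values for the clear IOL group
md_yellow = rng.normal(-3.7, 3.8, size=22)   # toy MD values for a yellow tinted IOL group

u_stat, p_value = mannwhitneyu(md_clear, md_yellow, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")  # p >= 0.05 would mirror the reported non-significance
```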
No statistically significant differences were observed between the two types of yellow IOL groups (MD: Hoya YA60BBR, -3.68 ± 3.83 dB; Alcon SN60WF, -2.47 ± 2.61 dB; p = 0.33 / PSD: Hoya YA60BBR, 3.81 ± 1.99 dB; Alcon SN60WF, 3.28 ± 1.23 dB; p = 0.56, Mann-Whitney U-test).\n\nBODY.DISCUSSION:\nFDT perimetry is barely affected by refractive errors. Therefore, it is more useful for glaucoma screening than standard automated perimetry. In this study, FDT perimetry was used for comparing the effect of the colors of IOLs. The absence of significant differences in postoperative visual acuity and spherical equivalence among the three groups lends greater accuracy to the FDT perimetry results in this study. Another report by Ueda et al. [5] showed that there was no significant difference between clear and yellow IOLs for either MD or PSD. They compared the Hoya VA60BB (clear IOL) with Hoya YA60BB (yellow IOL). The results from FDT perimetry were also not influenced by the color of the IOL. In this study, the results from FDT perimetry of the conventional clear IOL and yellow tinted IOL were not significantly different. The two yellow tinted IOLs used in this study were Hoya YA60BBR and Alcon SN60WF. They are similar in many aspects. Both of the IOLs have hydrophobic acrylic optics with similar size (optic diameter 6 mm) and similar refractive index (YA60BBR, 1.51; SN60WF, 1.55). These blue-blockers absorb short wavelength light for retinal protection, but also absorb different ranges of wavelength. Hoya YA60BBR mainly absorbs light in the range of 400-450 nm, while Alcon SN60WF absorbs light in the 400-500 nm wavelength range [6,7]. However, in this study, the different spectral transmittance between the two types of yellow tinted IOLs did not affect the results of FDT perimetry. Therefore, FDT perimetry results after cataract surgery can be interpreted without regard to the type of yellow tinted IOL used. FDT perimetry predominantly stimulates the magnocellular pathway by using frequency doubling illusions. Rod cells may transmit signals to the lateral geniculate body through the magnocellular pathway of ganglion cells. This theory was suggested by three articles. As reported by Lee et al. [8], rod inputs were much more apparent in magnocellular pathway cells. Purpura et al. [9] stated that the magnocellular pathway is the predominant conveyor of information of spatial contrast to the visual cortex in mesopic and scotopic illumination. Sun et al. [10] reported that rod threshold areas are inferred to be mediated by the magnocellular pathway. As a result, we can assume that a frequency doubling illusion predominantly stimulates rod cells. Sensitivity of cones and rods varies in some conditions. First, the illuminance level determines the level of activity of photoreceptors. Rods are the more active photoreceptors in scotopic conditions. Secondly, rod cells are more sensitive than cone cells to short wavelength light. Stabell and Stabell [11] suggested that the \"rod color\" is blue. Therefore, if short wavelength blue light were blocked by the yellow tinted IOL, we could assume that the result of FDT perimetry would be altered. There was no significant difference in the result of FDT perimetry between conventional clear and yellow tinted IOLs. According to a previous report by Ueda et al. [5], this lack of a significant difference was explained through findings from a report by Antonio and associates that compared clear IOLs with yellow IOLs with respect to contrast sensitivity. 
At different spatial frequencies, contrast sensitivity exhibits similar values in patients implanted with clear and yellow tinted IOLs. The contrast sensitivity test predominantly activates the cone pathway, so this process may be mainly mediated by the P-cell pathway. This is owing to the test conditions, which are photopic and require high spatial frequency and fine visual discrimination. Although FDT perimetry determines contrast sensitivity for detecting the frequency doubling stimulus, this is not sufficient to completely explain the process. The frequency doubling illusion has a low spatial and high temporal frequency, so it selectively stimulates the magnocellular pathway. There are two considerations to explain why there are no significant differences in the result of FDT perimetry. First, the rod pathway has a convergent pattern. In the case of the rod system, the convergence of preceding neurons to the ganglion cell is enormous and consists of 75,000 rods going through 5,000 rod bipolar cells to 250 AII amacrine cells to a single alpha ganglion cell [12]. Though the sensitivity of the rod cells is decreased due to the blocking of short wavelength light, some population of rods can still activate this pathway and transmit signals to the magnocellular pathway. Second, the testing condition of FDT perimetry is not a scotopic condition. In FDT perimetry, ambient room illumination was dim, and background luminance of the monitor was 45 cd/m2. Therefore, it is a mesopic condition. In scotopic conditions, only rod cells are active photoreceptors. Cone cells are also active photoreceptors in mesopic conditions. This means that cone cells are able to mediate visual signals in FDT perimetry. This study has several limitations, including the absence of preoperative FDT perimetry data, the small number of cases, and the absence of concurrent standard automated perimetry. A larger number of cases, more than 100 cases in each group, could make a difference in the findings between each group, or confirm the result seen in this study. Therefore, further studies are needed to address these points in the future. In conclusion, there were no significant changes in the result of FDT perimetry after cataract surgery between clear and yellow IOLs. When interpreting the results of FDT perimetry, the color and type of IOLs do not need to be taken into account.\n\n**Question:** Compared to clear IOL was used as a control. what was the result of IOLs were selected randomly among two types of yellow IOLs (Acrysof SN60WF IOL, Hoya YA60BBR IOL) on the result of frequency doubling technique (FDT) perimetry.?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
942
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Antibiotic use during pregnancy: a retrospective study of prescription patterns and birth outcomes at an antenatal clinic in rural Ghana\n\n ABSTRACT.BACKGROUND:\nBabies are increasingly being exposed to antibiotics intrapartum in the bid to reduce neonatal and maternal deaths. Intrapartum antibiotic exposure, including even those considered safe in pregnancy, have been associated with childhood obesity and compromised immunity. Data on the extent of antibiotic use, safety and its impact on birth outcomes and neonatal health in Sub-Saharan Africa is very limited. This study sought to ascertain the extent of antibiotic use in pregnancy and its effects on birth outcomes in a rural hospital in Ghana.\n\nABSTRACT.METHODS:\nThe study was a retrospective randomized study of mothers who delivered babies in a rural hospital between 2011 and 2015 in Ghana. A total of 412 mother/baby records out of 2100 pre-selected met the inclusion criteria of the study. Indicators of neonatal health used were birthweight, Apgar score, incidence of birth defects.\n\nABSTRACT.RESULTS:\nSixty five percent of pregnant women were administered antibiotics at some stage during pregnancy. Beta Lactam antibiotics accounted for more than 67% of all antibiotics prescribed. There was a statistically significant association between antibiotic exposure and pregnancy factors such as stage of pregnancy, parity and mode of delivery but not with socio-economic status of the mother. Intrapartum antibiotic exposure did not significantly affect the birthweight, incidence of congenital birth defect and mean Apgar scores. After adjusting for method of delivery, however, perinatal antibiotic use (24 h to delivery) was associated with lower mean Apgar scores. Birth weight was affected significantly by maternal socio-economic factors such as age and marital status.\n\nABSTRACT.CONCLUSION:\nSixty five percent of women attending the antenatal clinic received antibiotics. Intrapartum antibiotics did not affect early markers of neonatal health such as birthweight, congenital birth defect and mean Apgar scores. However, antibiotic use less than 24 h to delivery was associated with a decrease in mean APGAR score.\n\nBODY.BACKGROUND:\nThere has been a sturdy increase in global antibiotic prescription and consumption [1]. Several concerns have been raised about irrational use of antibiotics and its subsequent impact on drug resistance and cost of healthcare. Furthermore, the role that human microbiota, believed to account for the greater part of human body weight, plays in programming the immunity and metabolism of an individual has re-emphasized the need to regulate use of antibiotics [2]. Pregnancy presents with significant risk of complications which may lead to morbidity and mortality for mother-baby pairs. 
Globally, more than 50% of neonatal deaths occur in Sub-Saharan Africa because of infections and limited health resources [3, 4]. To manage the complications associated with pregnancy and motherhood, many medicines are employed. Antibiotics remain important in pregnancy and may be second only to iron and food supplements [5]. Due to limited health resources, such as laboratories to carry out culture and sensitivity tests, the extent of exposure of pregnant women and babies to antibiotics and the effects on their health may be underestimated. There have been several attempts to promote rational antibiotic use, partly because of the risk of drug resistance, increased cost of healthcare, as well as the risk of teratogenicity in the developing foetus [6, 7]. However, recent studies have shown that there may be other insidious risks associated with antibiotic use in pregnancy, independent of the safety profile, mechanism of action or even class of the antibiotic [2]. For instance, it is well documented that Caesarean birth or intrapartum antibiotic exposure contributes adversely to childhood obesity and compromised immunity of the baby [8, 9]. Unlike classical teratogenesis, these effects occur indirectly through alterations of the maternal microbiota without the agent necessarily crossing the placenta to affect the tissues of the foetus. Neonatal health and birth outcome are highly dependent on maternal health, nutrition and socio-economic factors. The use and choice of antibiotic during pregnancy depends primarily on maternal factors such as health, nutrition, mode of delivery and socio-economic factors. These maternal factors influence the indicators used to estimate birth outcomes and neonatal health such as birth weight, APGAR scores, and incidences of birth defects. To be able to understand the full extent and effects of intrapartum antibiotics on mothers and babies, there is a need to elucidate the complex interplay between maternal health, mode of delivery, socio-economic status and neonatal health. A key point in the millennium development goals is reducing maternal and neonatal mortality. To date there is little clinical data on the trends of antibiotic use during pregnancy and its effects on neonatal health from sub-Saharan Africa. Using birth weight, APGAR scores, and incidences of birth defects as indicators of birth outcomes and early neonatal health and survival, this study examines the use of antibiotics during pregnancy, its effects on birth and neonatal health, as well as the effects of other maternal factors on antibiotic use in a rural health facility in Ghana.\n\nBODY.METHODS.STUDY AREA:\nSeventh-Day Adventist Hospital, Dominase is a 45-bed capacity hospital located in Dominase about 19 km off the Kumasi–Cape coast highway. The hospital has an obstetrics and gynaecology unit which runs an antenatal clinic with a resident gynaecologist, pharmacist and several midwives and nurses. To limit the impact of un-prescribed and undocumented antibiotic usage, the area chosen was predominantly rural with no existing pharmacy. Furthermore, the National Health Insurance Scheme of Ghana runs a \"capitation policy\" where patients choose a particular primary health care facility for their needs without paying out of pocket. Patients who visit any other facility aside from the allocated facility have to pay to access healthcare. This discouraged mothers from seeking health services from other facilities and reduced self-medication. 
This is of particular importance because antibiotics can be obtained without prescription in Ghana.\n\nBODY.METHODS.STUDY DESIGN:\nThe design was a retrospective descriptive study. The primary source of data was prescriptions generated and recorded in patient medical records filed in the hospital between January 2011 and December 2015.\n\nBODY.METHODS.SAMPLING METHOD:\nFolder numbers of all live babies born in the hospital between January, 2011 and September, 2015 were retrieved from the registration book at the maternity unit. Five hundred folder numbers were selected for each of the 5 years from the registration book using systematic sample, making a total of 2500 folders. The contents of 2100 folders were available for the study. Each folder of mother-baby pairs was subsequently screened using the inclusion and exclusion criteria. A total of 412 mother-baby pairs met the selection criteria. The Inclusion criteria included;The mother should have attended at least three ANC at the hospital after confirmation of pregnancy either by ultrasonography or Human Chorionic Gonadotrophic (HCG) detection method.Should have delivered a live singleton baby at the facility.\n The exclusion criteria included;Twin gestationsDeliveries of referral cases from other hospitals\n\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nData obtained from the medical records were transferred unto a data capture form, the secondary data was later collated, scrutinized and analysed using IBM Statistical package for Social Sciences (SPSS) version 21. Pearson Chi-square test was used to analyze categorical variables. Relative risk and Odds ratio were used as a measure of the degree of association between an event (outcome) and comparable groups. Students T- test and One way analysis of variance (one-way Anova) were used to analyze the difference in mean values between a set of comparable groups or values. Bonferroni was used as the post hoc test after one- way Anova. A p-value less than 0.05 was considered statistically significant.\n\nBODY.METHODS.POTENTIAL CONFOUNDERS:\nThe study assumed that antibiotic prescribed was synonymous with exposure which is not always the case. Because of the rural nature of the setting the hospital dispensary dispenses most of the medications prescribed at the same facility. This reduces the inconvenience of patients moving to city centers for their prescribed medicines. The dose prescribed, the duration of therapy and the frequency of administration were not taken into consideration. The design of the study could not totally account for possible out-of- hospital antibiotic use. The choice of a rural community with few chemical shop outlets and no pharmacy was expected to reduce this potential confounder. The National Health Insurance Scheme runs a \"capitation policy\" where a patient chooses one primary health care facility for their needs. The policy also reduces patients moving from one hospital to another because if a patient visits another hospital they have to pay hospital charges. This reduces attrition and tendency for self- medication.\n\nBODY.RESULTS.SOCIODEMOGRAPHIC DATA:\nThe biodata recorded were mothers' age at birth, gravida, marital status, occupational status and religious affiliation. Mothers' age at birth ranged from 13 to 44 years with a mean age of 26.3 ± 6.40 years. Most of the pregnant women were between 20 and 30 years, married and average number of gravida of 3.07 ± 2.03. 
(Table 1).Table 1Socio-demographic characteristics of participantsFrequencyPercentMarital statusMarried30482.2Single6617.8Mothers age at birth≥195613.720–3025161.4≤3110224.9OccupationFarmer12634.3Trader7821.3Student4311.7Teacher82.2Seamstress277.4Hairdresser5113.9Nurse3.8Unemployed277.4Othersa\n41.0Religious affiliationChristian33391.2Muslim246.6Traditional82.2GRAVIDA1–321268.8≤ 49631.2Total412100.0\naOthers include sprayer, baker, secretary and army officer\n\n\nBODY.RESULTS.PREVALENCE OF ANTIBIOTIC EXPOSURE:\nAbout two out of three women (65.8%, n = 271) attending ANC clinic in the hospital were treated with an antibiotic at a point during pregnancy. There was no association between the socio-economic status of a woman, i.e., marital status, occupation, age etc. with the odds of being treated with antibiotics. However antibiotic use was associated multiple gravida and Cesarean section. The odds ratio for antibiotic exposure during Caesarean were 13.8 times higher (95% CI, 5.9–32.5, K = 55.47) than that for women who delivered vaginally. There was an association (p < 0.001) between antibiotic exposure with advance in stage of pregnancy (Fig. 1). Most exposures (79.0%) were third trimester exposures. In fact 42.4% all antibiotic treatments occurred within 24 h to delivery and about 84% of these women went through Caesarean section. First trimester exposures accounted for just 16.6%. Furthermore 4.40% women received antibiotics in all three trimesters. Interestingly, 5.5% went through gestation and Caesarian without receiving any antibiotics at all (Table 2).Fig. 1Antibiotic use during pregnancy during the different stages of pregnancy. 12 respondents received antibiotics at all stages of pregnancy. 141 did not received antibiotics at all. 13 received during the 1st and 2nd trimesters only. 47 received antibiotics during the 2nd and 3rd trimesters only. 14 respondents received during the 1st and 3rd trimesters only. 141 respondents were treated during the 3rd trimester\nTable 2Antibiotic exposure in pregnant women of different socio economic class and mode of deliveryAntibiotic exposureYesNo TotalStatisticsMarital statusMarried197(64.8%)107(35.2%)304\nP = 0.77,DF = 1,Single44(66.7%)22(33.3%)66k = 0.83,OR = 0.97Age of mother at birth≥ 1932(57.1%)24(43.9%)43\nP = 0.357, DF = 220–30167(66.5%)8(33.5%)238K = 2.058≤ 3169(67.6%)33(32.4%)96OccupationEmployed182(64.6%)105(37.4%)126\nP = 0.61, K = 10.06Student27(62.8%)16(37.2%)43Unemployed19(70.4%)8(29.6%)27ReligionChristian219(65.8%)114(34.2%)333\nP = 0.368, K = 1.99Muslim15(62.5%)9(37.5%)24DF = 2Traditional5(62.5%)3(37.5%)8GRAVIDA1–3117(60.9%75(39.1%)192\nP = 0.066, K = 3.38≤467(72.0%)26(28.0%)93DF = 1 OR = 0.83Method of deliveryC. SectioN104(94.5%)6(5.5%)110\nP < 0.001,K = 55.47Vaginal167(55.3%)135(44.7%)302DF = 1 OR =13.8Total\n271(65.8%)\n\n141(34.2%)\n\n412\n\n Prevalence of antibiotic use in pregnancy showed a steady increase of 54.8% in 2013 to a peak of 77.8% in 2015.\n\nBODY.RESULTS.CLASSES OF ANTIBIOTICS PRESCRIBED:\nBeta lactams antibiotics i.e. cephalosporins and penicillins accounted for more than 67% of antibiotics used whilst metronidazole was used in 24% of pregnant women. Quinolones and sulphonamide/trimethoprim represented about of 2.1% antibiotics prescribed. Cephalosporins, penicillins and metronidazole showed a steep rise in use from first trimester to third trimester (Fig. 2). Macrolide use diminished as pregnancy progressed. Use of quinolones and co-trimoxazole however declined to zero in third trimester. 
60% of the patients were given antibiotic because of a urinary tract infection, 12% due to respiratory tract infections, 14.3% as premedication for caesarean section. Other minor reasons included gastroenteritis (5.5%), premature rapture of membranes (2.8%), pelvic inflammatory disease (1.7%) and unspecified indications (3.7%).Fig. 2Classes of antibiotic use and the trimester of exposure\n\n\nBODY.RESULTS.SAFETY AND APPROPRIATENESS IN PREGNANCY:\nThe FDA classifications for the drugs prescribed in the study fell into classes B, C and D with classes A and X antibiotic not prescribed. Most of the drugs fell into category B with 96.6% followed by C and D with 2.9% and 0.5% respectively. It was observed that some antibiotics were prescribed without due justification i.e. there was drug –diagnosis mismatch and this constituted 3.5% of all prescriptions issued.\n\nBODY.RESULTS.INTRAPARTUM ANTIBIOTIC EXPOSURE AND BIRTH OUTCOMES.ANTIBIOTIC EXPOSURE AND BIRTHWEIGHT:\nThe mean birthweight of babies who delivered at the facility was 2.96 ± 0.56 kg. Intrapartum antibiotic treatment did not affect mean birthweight. Similarly there was no statistically significant relationship between mode of delivery and birthweight. Generally, there was a direct relationship between mothers' socio-economic status and birthweight. Babies born of single mothers had lower birthweights compared to those born of married mothers (p < 0.001, F = 12.74) (Table 3).Table 3Effect intrapartum antibiotic exposure on weight at birthBirth WeightNMean ± SDMarital statusMarried298\n3.02 ± 0.56\n***\nSingle652.75 ± 0.53Age group≥1955\n2.75 ± 0.48\n***\n20–302422.95 ± 056≤311003.10 ± 0.57Occupational statusEmployed287\n3.00 ± 0.57\n*\nStudent432.75 ± 0.58Unemployed273.00 ± 0.47ReligionChristian3242.98 ± 0.57Muslim232.82 ± 0.48Traditional82.88 ± 0.52GRAVIDA1–3206\n2.87 ± 0.51\n***\n4 And Above953.08 ± 0.57Mode of deliveryCeasarian section1032.96 ± 0.59Vaginal delivery2972.96 ± 0.55Antibiotic exposureYes2612.96 ± 0.58No1392.97 ± 0.53Results represented as mean ± SD. * means p < 0.05, ***p < 0.001\n\n\nBODY.RESULTS.INTRAPARTUM ANTIBIOTIC EXPOSURE AND BIRTH OUTCOMES.ANTIBIOTIC EXPOSURE AND APGAR SCORES:\nThere was no statistically significant relationship between antibiotic prescription during pregnancy and mean Apgar scores (i.e. 8.242 ± 1.3 vs 8.32 ± 1.62 (p = 0.414, F = 1.59)). However when mothers were treated with antibiotics in less than 24 h to parturition, there was a statistically significant decrease in mean Apgar than those not treated with antibiotics i.e. (7.86 ± 1.72 vs 8.4 ± 1.30 p = 0.002, F = 13.65) after adjusting for the effects of the mode of delivery. Babies born naturally had mean Apgar scores statistically higher than those born by Caesarean section i.e. 8.42 ± 1.23 vs 7.86 ± 1.87(p = 0.003, F = 8.98). 
Younger mothers, unmarried women and mothers with less gravida had significantly higher mean Apgar scores compared with older (p = 0.046), married (p = 0.018) and higher gravida (p = 0.034) mothers respectively (Table 4).Table 4Effect intrapartum antibiotic exposure on mean APGAR scoreAPGAR Scores\nN\nMean ± SDMarital statusMarried282\n8.26 ± 1.37\n*\nSingle628.69 ± 0.90Age groups≤1953\n8.60 ± 1.10*\n20–302328.41 ± 1.28≥31938.10 ± 1.43Occupational statusEmployed2728.32 ± 1.32Student418.40 ± 1.25Unemployed258.48 ± 0.95ReligionChristian3118.33 ± 1.34Muslim218.43 ± 1.15Traditional19.00 ± 0.00GRAVIDA1–3200\n8.54 ± 1.22\n*\n≥4888.20 ± 1.27Mode of deliveryCeasarian section100\n8.03 ± 1.67\n**\nVaginal delivery2818.47 ± 1.11Antibiotic exposureYes2488.30 ± 1.30No1338.47 ± 1.26Perinatal antibiotic exposureYes104\n7.96 ± 1.70\n**\nNo2778.51 ± 1.07Results represented as mean ± SD. *means p < 0.05, **p < 0.01\n\n\nBODY.RESULTS.INTRAPARTUM ANTIBIOTIC EXPOSURE AND BIRTH OUTCOMES.ANTIBIOTIC EXPOSURE AND CONGENITAL BIRTH DEFECT:\nSix birth defects were seen in the study representing 1.5% of all deliveries i.e. Cleft lip, mega cephalous, cleft palate and webbed feet. Intrapartum antibiotic exposure irrespective of trimester of exposure was not associated with any significant risk of birth defect (RR = 1, P = 0.97). There was no statistically significant association between marital status (p = 0.94), mothers age at birth (p = 0.361), occupation (p = 0.630), parity (p = 0.94) and religion (p = 0.49) and birth defects. Females had relatively higher odds ratio of incidence of birth defects than males. i.e. 2.2% vs 1.0% (OR = 2.2, P = 0.343, 95% CI).\n\nBODY.DISCUSSION:\nThe study was design to ascertain the prevalence antibiotic prescription among pregnant women attending ANC. About 65.8% of pregnant women were exposed to an antibiotic at some point during pregnancy. This figure was higher than an earlier report of about by 59.9% in the eastern region of Ghana [10] as well as the WHO/International Network for the Rational Use of Drugs figures. Women who went through Caesarean section were more likely to receive antibiotics to control wound infection and endometritis [11, 12]. Risk of antibiotic exposure was highest in the last trimester but it corresponded with the increased incidence of respiratory tract infections, urinary tract infections and gastritis during third trimester. This is of a concern because acquisition of specific foetal immunity begins in the third trimester and it is highly dependent on microbiome which can be altered by antibiotics [9, 13]. Most of the antibiotic used are classified as safe in pregnancy by the FDA. Macrolides though safe, directly stimulate the motilin receptors inducing Gastro-intestinal discomfort which may be undesirable in pregnancy [14, 15]. Anatomically the uterus sits directly on the urinary bladder and so increase in weight of the uterus as pregnancy advances exerts a proportional pressure on the bladder and stomach leading to increased incidence of UTI, gastritis and heartburns [16, 17]. Babies exposed to intrapartum antibiotics had mean weights slightly lower than those unexposed. A study by Vidal et al., (2013) [8] reported a similar pattern and it was further observed that such babies quickly gained much more weight after delivery leading to obesity in childhood. This study also noted a strong association between mother's socio-economic factors and birthweights as has been reported elsewhere [18–20]. 
This may be due to economic stability, marital support etc [18, 21, 22]. Babies born to Mothers who received antibiotics less than 24 h to delivery or subsequently had caesarean birth had lower mean Apgar scores much even after adjusting for other confounding factors. This correlates with other reports in which babies born vaginally had much higher Apgar scores at 5 min compared to caesarean born [23]. Older mothers or women with higher gravida had babies with mean Apgar scores lower than younger mothers or women with less gravida. A total six birth defects (15 per 10,000 live births) were recorded in this study. This figure was similar to other prevalence birth defects reported other authors [24].\n\nBODY.CONCLUSION:\n65% of pregnant women received antibiotics at some stage during pregnancy. Antibiotic use did not affect mean birth weight, APGAR score or incidence of birth defects. However, antibiotic use less than 24 h to delivery was associated with a decrease in mean APGAR score.\n\n**Question:** Compared to no antibiotics what was the result of antibiotics on birthweight, incidence of congenital birth defect and mean Apgar scores?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
943
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Antibiotic use during pregnancy: a retrospective study of prescription patterns and birth outcomes at an antenatal clinic in rural Ghana\n\n ABSTRACT.BACKGROUND:\nBabies are increasingly being exposed to antibiotics intrapartum in the bid to reduce neonatal and maternal deaths. Intrapartum antibiotic exposure, including even those considered safe in pregnancy, have been associated with childhood obesity and compromised immunity. Data on the extent of antibiotic use, safety and its impact on birth outcomes and neonatal health in Sub-Saharan Africa is very limited. This study sought to ascertain the extent of antibiotic use in pregnancy and its effects on birth outcomes in a rural hospital in Ghana.\n\nABSTRACT.METHODS:\nThe study was a retrospective randomized study of mothers who delivered babies in a rural hospital between 2011 and 2015 in Ghana. A total of 412 mother/baby records out of 2100 pre-selected met the inclusion criteria of the study. Indicators of neonatal health used were birthweight, Apgar score, incidence of birth defects.\n\nABSTRACT.RESULTS:\nSixty five percent of pregnant women were administered antibiotics at some stage during pregnancy. Beta Lactam antibiotics accounted for more than 67% of all antibiotics prescribed. There was a statistically significant association between antibiotic exposure and pregnancy factors such as stage of pregnancy, parity and mode of delivery but not with socio-economic status of the mother. Intrapartum antibiotic exposure did not significantly affect the birthweight, incidence of congenital birth defect and mean Apgar scores. After adjusting for method of delivery, however, perinatal antibiotic use (24 h to delivery) was associated with lower mean Apgar scores. Birth weight was affected significantly by maternal socio-economic factors such as age and marital status.\n\nABSTRACT.CONCLUSION:\nSixty five percent of women attending the antenatal clinic received antibiotics. Intrapartum antibiotics did not affect early markers of neonatal health such as birthweight, congenital birth defect and mean Apgar scores. However, antibiotic use less than 24 h to delivery was associated with a decrease in mean APGAR score.\n\nBODY.BACKGROUND:\nThere has been a sturdy increase in global antibiotic prescription and consumption [1]. Several concerns have been raised about irrational use of antibiotics and its subsequent impact on drug resistance and cost of healthcare. Furthermore, the role that human microbiota, believed to account for the greater part of human body weight, plays in programming the immunity and metabolism of an individual has re-emphasized the need to regulate use of antibiotics [2]. Pregnancy presents with significant risk of complications which may lead to morbidity and mortality for mother-baby pairs. 
Globally, more than 50% of neonatal deaths occur in Sub-Saharan Africa because of infections and limited health resources [3, 4]. To manage the complications associated with pregnancy and motherhood, many medicines are employed. Antibiotics remain important in pregnancy and may be second to only iron and food supplement [5]. Due to limited health resources, such as laboratories to carry-out culture and sensitivity test, the extent of exposure of pregnant women and babies to antibiotics and the effects on their health may be underestimated. There have been several attempts to promote rational antibiotic use partly because of the risk of drug resistance, increased cost of healthcare, as well as the risk of teratogenicity in the developing foetus [6, 7]. However, recent studies have shown that there may be other insidious risk associated with antibiotic use in pregnancy independent on safety profile, mechanism of action or even class the of antibiotic [2]. For instance it is well documented that Caesarian birth or intrapartum antibiotic exposure contributes adversely to childhood obesity and compromised immunity of the baby [8, 9]. Unlike classical teratogenesis, these effects occur indirectly through alterations of the maternal microbiota without the agent necessarily crossing the placenta to affect the tissues of the foetus. Neonatal Health and Birth outcome is highly dependent on maternal health, nutrition and socio-economic factors. The use and choice of antibiotic during pregnancy depends primarily on maternal factors such as health, nutrition, mode of delivery and socio-economic factors. These maternal factors influence the indicators used to estimate birth outcomes and neonatal health such as birth weight, APGAR scores, and incidences of birth defects. To able to understand the full extent and effects of intrapartum antibiotics on mother and babies, there is a need to elucidate the complex interplay between maternal health, mode of delivery and socio-economic status and neonatal health. A key point in the millennium development goals is reducing maternal and neonatal mortality. To date there is little clinical data on the trends of antibiotic use during pregnancy and its effects on neonatal health from sub-Saharan Africa. Using Birth weight, APGAR scores, and incidences of birth defects as indicators of birth outcomes and early neonatal health and survival, this study examines the use of antibiotics during pregnancy, its effects on birth and neonatal health and as well as the effects of other maternal factors on antibiotic use in a rural health facility in Ghana.\n\nBODY.METHODS.STUDY AREA:\nSeventh-Day Adventist Hospital, Dominase is a 45 bed capacity hospital located in Dominase about 19 km off the Kumasi–Cape coast highway. The hospital has an obstetrics and gynaecology unit which runs an antenatal clinic with a resident gynaecologist, pharmacist and several midwives and nurses. To limit the impact of un-prescribed and undocumented antibiotic usage, the area chosen was predominately rural with no existent pharmacy. Furthermore, the National Health Insurance Scheme of Ghana runs a \"capitation policy\" where patients choose a particular primary health care facility for their needs without paying out of pocket. Patients who visit any other facility aside the allocated facility have to pay to access healthcare. This discouraged the tendency for mothers from seeking health services from other facilities as well as reduce self –medication. 
This is of particular importance because antibiotics can be obtained without prescription in Ghana.\n\nBODY.METHODS.STUDY DESIGN:\nThe design was a retrospective descriptive study. The primary source of data was prescriptions generated and recorded in patient medical records filed in the hospital between January 2011 and December 2015.\n\nBODY.METHODS.SAMPLING METHOD:\nFolder numbers of all live babies born in the hospital between January, 2011 and September, 2015 were retrieved from the registration book at the maternity unit. Five hundred folder numbers were selected for each of the 5 years from the registration book using systematic sample, making a total of 2500 folders. The contents of 2100 folders were available for the study. Each folder of mother-baby pairs was subsequently screened using the inclusion and exclusion criteria. A total of 412 mother-baby pairs met the selection criteria. The Inclusion criteria included;The mother should have attended at least three ANC at the hospital after confirmation of pregnancy either by ultrasonography or Human Chorionic Gonadotrophic (HCG) detection method.Should have delivered a live singleton baby at the facility.\n The exclusion criteria included;Twin gestationsDeliveries of referral cases from other hospitals\n\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nData obtained from the medical records were transferred unto a data capture form, the secondary data was later collated, scrutinized and analysed using IBM Statistical package for Social Sciences (SPSS) version 21. Pearson Chi-square test was used to analyze categorical variables. Relative risk and Odds ratio were used as a measure of the degree of association between an event (outcome) and comparable groups. Students T- test and One way analysis of variance (one-way Anova) were used to analyze the difference in mean values between a set of comparable groups or values. Bonferroni was used as the post hoc test after one- way Anova. A p-value less than 0.05 was considered statistically significant.\n\nBODY.METHODS.POTENTIAL CONFOUNDERS:\nThe study assumed that antibiotic prescribed was synonymous with exposure which is not always the case. Because of the rural nature of the setting the hospital dispensary dispenses most of the medications prescribed at the same facility. This reduces the inconvenience of patients moving to city centers for their prescribed medicines. The dose prescribed, the duration of therapy and the frequency of administration were not taken into consideration. The design of the study could not totally account for possible out-of- hospital antibiotic use. The choice of a rural community with few chemical shop outlets and no pharmacy was expected to reduce this potential confounder. The National Health Insurance Scheme runs a \"capitation policy\" where a patient chooses one primary health care facility for their needs. The policy also reduces patients moving from one hospital to another because if a patient visits another hospital they have to pay hospital charges. This reduces attrition and tendency for self- medication.\n\nBODY.RESULTS.SOCIODEMOGRAPHIC DATA:\nThe biodata recorded were mothers' age at birth, gravida, marital status, occupational status and religious affiliation. Mothers' age at birth ranged from 13 to 44 years with a mean age of 26.3 ± 6.40 years. Most of the pregnant women were between 20 and 30 years, married and average number of gravida of 3.07 ± 2.03. 
(Table 1).Table 1Socio-demographic characteristics of participantsFrequencyPercentMarital statusMarried30482.2Single6617.8Mothers age at birth≥195613.720–3025161.4≤3110224.9OccupationFarmer12634.3Trader7821.3Student4311.7Teacher82.2Seamstress277.4Hairdresser5113.9Nurse3.8Unemployed277.4Othersa\n41.0Religious affiliationChristian33391.2Muslim246.6Traditional82.2GRAVIDA1–321268.8≤ 49631.2Total412100.0\naOthers include sprayer, baker, secretary and army officer\n\n\nBODY.RESULTS.PREVALENCE OF ANTIBIOTIC EXPOSURE:\nAbout two out of three women (65.8%, n = 271) attending ANC clinic in the hospital were treated with an antibiotic at a point during pregnancy. There was no association between the socio-economic status of a woman, i.e., marital status, occupation, age etc. with the odds of being treated with antibiotics. However antibiotic use was associated multiple gravida and Cesarean section. The odds ratio for antibiotic exposure during Caesarean were 13.8 times higher (95% CI, 5.9–32.5, K = 55.47) than that for women who delivered vaginally. There was an association (p < 0.001) between antibiotic exposure with advance in stage of pregnancy (Fig. 1). Most exposures (79.0%) were third trimester exposures. In fact 42.4% all antibiotic treatments occurred within 24 h to delivery and about 84% of these women went through Caesarean section. First trimester exposures accounted for just 16.6%. Furthermore 4.40% women received antibiotics in all three trimesters. Interestingly, 5.5% went through gestation and Caesarian without receiving any antibiotics at all (Table 2).Fig. 1Antibiotic use during pregnancy during the different stages of pregnancy. 12 respondents received antibiotics at all stages of pregnancy. 141 did not received antibiotics at all. 13 received during the 1st and 2nd trimesters only. 47 received antibiotics during the 2nd and 3rd trimesters only. 14 respondents received during the 1st and 3rd trimesters only. 141 respondents were treated during the 3rd trimester\nTable 2Antibiotic exposure in pregnant women of different socio economic class and mode of deliveryAntibiotic exposureYesNo TotalStatisticsMarital statusMarried197(64.8%)107(35.2%)304\nP = 0.77,DF = 1,Single44(66.7%)22(33.3%)66k = 0.83,OR = 0.97Age of mother at birth≥ 1932(57.1%)24(43.9%)43\nP = 0.357, DF = 220–30167(66.5%)8(33.5%)238K = 2.058≤ 3169(67.6%)33(32.4%)96OccupationEmployed182(64.6%)105(37.4%)126\nP = 0.61, K = 10.06Student27(62.8%)16(37.2%)43Unemployed19(70.4%)8(29.6%)27ReligionChristian219(65.8%)114(34.2%)333\nP = 0.368, K = 1.99Muslim15(62.5%)9(37.5%)24DF = 2Traditional5(62.5%)3(37.5%)8GRAVIDA1–3117(60.9%75(39.1%)192\nP = 0.066, K = 3.38≤467(72.0%)26(28.0%)93DF = 1 OR = 0.83Method of deliveryC. SectioN104(94.5%)6(5.5%)110\nP < 0.001,K = 55.47Vaginal167(55.3%)135(44.7%)302DF = 1 OR =13.8Total\n271(65.8%)\n\n141(34.2%)\n\n412\n\n Prevalence of antibiotic use in pregnancy showed a steady increase of 54.8% in 2013 to a peak of 77.8% in 2015.\n\nBODY.RESULTS.CLASSES OF ANTIBIOTICS PRESCRIBED:\nBeta lactams antibiotics i.e. cephalosporins and penicillins accounted for more than 67% of antibiotics used whilst metronidazole was used in 24% of pregnant women. Quinolones and sulphonamide/trimethoprim represented about of 2.1% antibiotics prescribed. Cephalosporins, penicillins and metronidazole showed a steep rise in use from first trimester to third trimester (Fig. 2). Macrolide use diminished as pregnancy progressed. Use of quinolones and co-trimoxazole however declined to zero in third trimester. 
60% of the patients were given antibiotic because of a urinary tract infection, 12% due to respiratory tract infections, 14.3% as premedication for caesarean section. Other minor reasons included gastroenteritis (5.5%), premature rapture of membranes (2.8%), pelvic inflammatory disease (1.7%) and unspecified indications (3.7%).Fig. 2Classes of antibiotic use and the trimester of exposure\n\n\nBODY.RESULTS.SAFETY AND APPROPRIATENESS IN PREGNANCY:\nThe FDA classifications for the drugs prescribed in the study fell into classes B, C and D with classes A and X antibiotic not prescribed. Most of the drugs fell into category B with 96.6% followed by C and D with 2.9% and 0.5% respectively. It was observed that some antibiotics were prescribed without due justification i.e. there was drug –diagnosis mismatch and this constituted 3.5% of all prescriptions issued.\n\nBODY.RESULTS.INTRAPARTUM ANTIBIOTIC EXPOSURE AND BIRTH OUTCOMES.ANTIBIOTIC EXPOSURE AND BIRTHWEIGHT:\nThe mean birthweight of babies who delivered at the facility was 2.96 ± 0.56 kg. Intrapartum antibiotic treatment did not affect mean birthweight. Similarly there was no statistically significant relationship between mode of delivery and birthweight. Generally, there was a direct relationship between mothers' socio-economic status and birthweight. Babies born of single mothers had lower birthweights compared to those born of married mothers (p < 0.001, F = 12.74) (Table 3).Table 3Effect intrapartum antibiotic exposure on weight at birthBirth WeightNMean ± SDMarital statusMarried298\n3.02 ± 0.56\n***\nSingle652.75 ± 0.53Age group≥1955\n2.75 ± 0.48\n***\n20–302422.95 ± 056≤311003.10 ± 0.57Occupational statusEmployed287\n3.00 ± 0.57\n*\nStudent432.75 ± 0.58Unemployed273.00 ± 0.47ReligionChristian3242.98 ± 0.57Muslim232.82 ± 0.48Traditional82.88 ± 0.52GRAVIDA1–3206\n2.87 ± 0.51\n***\n4 And Above953.08 ± 0.57Mode of deliveryCeasarian section1032.96 ± 0.59Vaginal delivery2972.96 ± 0.55Antibiotic exposureYes2612.96 ± 0.58No1392.97 ± 0.53Results represented as mean ± SD. * means p < 0.05, ***p < 0.001\n\n\nBODY.RESULTS.INTRAPARTUM ANTIBIOTIC EXPOSURE AND BIRTH OUTCOMES.ANTIBIOTIC EXPOSURE AND APGAR SCORES:\nThere was no statistically significant relationship between antibiotic prescription during pregnancy and mean Apgar scores (i.e. 8.242 ± 1.3 vs 8.32 ± 1.62 (p = 0.414, F = 1.59)). However when mothers were treated with antibiotics in less than 24 h to parturition, there was a statistically significant decrease in mean Apgar than those not treated with antibiotics i.e. (7.86 ± 1.72 vs 8.4 ± 1.30 p = 0.002, F = 13.65) after adjusting for the effects of the mode of delivery. Babies born naturally had mean Apgar scores statistically higher than those born by Caesarean section i.e. 8.42 ± 1.23 vs 7.86 ± 1.87(p = 0.003, F = 8.98). 
Younger mothers, unmarried women and mothers with less gravida had significantly higher mean Apgar scores compared with older (p = 0.046), married (p = 0.018) and higher gravida (p = 0.034) mothers respectively (Table 4).Table 4Effect intrapartum antibiotic exposure on mean APGAR scoreAPGAR Scores\nN\nMean ± SDMarital statusMarried282\n8.26 ± 1.37\n*\nSingle628.69 ± 0.90Age groups≤1953\n8.60 ± 1.10*\n20–302328.41 ± 1.28≥31938.10 ± 1.43Occupational statusEmployed2728.32 ± 1.32Student418.40 ± 1.25Unemployed258.48 ± 0.95ReligionChristian3118.33 ± 1.34Muslim218.43 ± 1.15Traditional19.00 ± 0.00GRAVIDA1–3200\n8.54 ± 1.22\n*\n≥4888.20 ± 1.27Mode of deliveryCeasarian section100\n8.03 ± 1.67\n**\nVaginal delivery2818.47 ± 1.11Antibiotic exposureYes2488.30 ± 1.30No1338.47 ± 1.26Perinatal antibiotic exposureYes104\n7.96 ± 1.70\n**\nNo2778.51 ± 1.07Results represented as mean ± SD. *means p < 0.05, **p < 0.01\n\n\nBODY.RESULTS.INTRAPARTUM ANTIBIOTIC EXPOSURE AND BIRTH OUTCOMES.ANTIBIOTIC EXPOSURE AND CONGENITAL BIRTH DEFECT:\nSix birth defects were seen in the study representing 1.5% of all deliveries i.e. Cleft lip, mega cephalous, cleft palate and webbed feet. Intrapartum antibiotic exposure irrespective of trimester of exposure was not associated with any significant risk of birth defect (RR = 1, P = 0.97). There was no statistically significant association between marital status (p = 0.94), mothers age at birth (p = 0.361), occupation (p = 0.630), parity (p = 0.94) and religion (p = 0.49) and birth defects. Females had relatively higher odds ratio of incidence of birth defects than males. i.e. 2.2% vs 1.0% (OR = 2.2, P = 0.343, 95% CI).\n\nBODY.DISCUSSION:\nThe study was design to ascertain the prevalence antibiotic prescription among pregnant women attending ANC. About 65.8% of pregnant women were exposed to an antibiotic at some point during pregnancy. This figure was higher than an earlier report of about by 59.9% in the eastern region of Ghana [10] as well as the WHO/International Network for the Rational Use of Drugs figures. Women who went through Caesarean section were more likely to receive antibiotics to control wound infection and endometritis [11, 12]. Risk of antibiotic exposure was highest in the last trimester but it corresponded with the increased incidence of respiratory tract infections, urinary tract infections and gastritis during third trimester. This is of a concern because acquisition of specific foetal immunity begins in the third trimester and it is highly dependent on microbiome which can be altered by antibiotics [9, 13]. Most of the antibiotic used are classified as safe in pregnancy by the FDA. Macrolides though safe, directly stimulate the motilin receptors inducing Gastro-intestinal discomfort which may be undesirable in pregnancy [14, 15]. Anatomically the uterus sits directly on the urinary bladder and so increase in weight of the uterus as pregnancy advances exerts a proportional pressure on the bladder and stomach leading to increased incidence of UTI, gastritis and heartburns [16, 17]. Babies exposed to intrapartum antibiotics had mean weights slightly lower than those unexposed. A study by Vidal et al., (2013) [8] reported a similar pattern and it was further observed that such babies quickly gained much more weight after delivery leading to obesity in childhood. This study also noted a strong association between mother's socio-economic factors and birthweights as has been reported elsewhere [18–20]. 
This may be due to economic stability, marital support etc [18, 21, 22]. Babies born to Mothers who received antibiotics less than 24 h to delivery or subsequently had caesarean birth had lower mean Apgar scores much even after adjusting for other confounding factors. This correlates with other reports in which babies born vaginally had much higher Apgar scores at 5 min compared to caesarean born [23]. Older mothers or women with higher gravida had babies with mean Apgar scores lower than younger mothers or women with less gravida. A total six birth defects (15 per 10,000 live births) were recorded in this study. This figure was similar to other prevalence birth defects reported other authors [24].\n\nBODY.CONCLUSION:\n65% of pregnant women received antibiotics at some stage during pregnancy. Antibiotic use did not affect mean birth weight, APGAR score or incidence of birth defects. However, antibiotic use less than 24 h to delivery was associated with a decrease in mean APGAR score.\n\n**Question:** Compared to no antibiotics what was the result of antibiotics in less than 24 h to parturition on mean Apgar?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decreased
|
948
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Custodiol - N versus Custodiol: a prospective randomized double blind multicenter phase III Trial\n\n ABSTRACT:\n\n\nBODY.BACKGROUND/INTRODUCTION:\nHTK-Solution (Custodiol) is a well-established cardioplegic and organ preservation solution. We currently developed a novel HTK based solution Custodiol-N which includes iron chelators to reduce oxidative injury as well as L-arginine, to improve endothelial function.\n\nBODY.AIMS/OBJECTIVES:\nIn the present first-in-human study, Custodiol-N was compared with Custodiol in patients undergoing elective coronary bypass surgery.\n\nBODY.METHOD:\nThe study was designed as prospective randomized double blind non-inferiority trial. Primary end-point was area under the curve (AUC) of creatine kinase MB (CKMB) within the first 24 hours after surgery. Secondary endpoints included, peak CKMB and troponin-T and AUC of troponin-T release, cardiac index, cumulative catecholamine dose, ICU-stay and mortality. All values are given as mean ± SD, p < 0.05 was considered as statistically significant.\n\nBODY.RESULTS:\nEarly termination of the trial was performed per protocol as the primary non-inferiority end point was reached after inclusion of 101 patients. Patient characteristics, medical history, operation and cross-clamp times did not differ between the groups. CKMB AUC (878 ± 549 vs. 778 ± 439 h*U/l, non-inferiority p < 0.001) and Troponin-T AUC (12990 ± 8347 vs. 13498 ± 6513 h*pg/ml, non-inferiority p < 0.001) was similar in both groups. Although the trial was designed for non-inferiority, peak CKMB (52 ± 40 vs. 41 ± 30 U/l, superiority p < 0.002) was significantly lower in the Custodiol-N group. Cardiac index, catecholamines ICU-stay and mortality (1 death in the control group) was similar in both groups.\n\nBODY.DISCUSSION/CONCLUSION:\nThis study shows that Custodiol-N is safe and provides similar cardiac protection as the established HTK-Custodiol solution. The significantly reduced peak CKMB levels in the Custodiol-N group may implicate a beneficial effect on ischemia/reperfusion injury in the setting of coronary bypass surgery.\n\n**Question:** Compared to Custodiol what was the result of Custodiol-N on CKMB AUC and Troponin-T AUC?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
824
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Behavioural recovery after treatment for varicose veins\n\n ABSTRACT.BACKGROUND:\nThe aim of this study was to assess behavioural recovery from the patient's perspective as a prespecified secondary outcome in a multicentre parallel‐group randomized clinical trial comparing ultrasound‐guided foam sclerotherapy (UGFS), endovenous laser ablation (EVLA) and surgery for the treatment of primary varicose veins.\n\nABSTRACT.METHODS:\nParticipants were recruited from 11 UK sites as part of the CLASS trial, a randomized trial of UGFS, EVLA or surgery for varicose veins. Patients were followed up 6 weeks after treatment and asked to complete the Behavioural Recovery After treatment for Varicose Veins (BRAVVO) questionnaire. This is a 15‐item instrument that covers eight activity behaviours (tasks or actions an individual is capable of doing in an idealized situation) and seven participation behaviours (what the individual does in an everyday, real‐world situation) that were identified to be important from the patient's perspective.\n\nABSTRACT.RESULTS:\nA total of 798 participants were recruited. Both UGFS and EVLA resulted in a significantly quicker recovery compared with surgery for 13 of the 15 behaviours assessed. UGFS was superior to EVLA in terms of return to full‐time work (hazard ratio 1·43, 95 per cent c.i. 1·11 to 1·85), looking after children (1·45, 1·04 to 2·02) and walks of short (1·48, 1·19 to 1·84) and longer (1·32, 1·05 to 1·66) duration.\n\nABSTRACT.CONCLUSION:\nBoth UGFS and EVLA resulted in more rapid recovery than surgery, and UGFS was superior to EVLA for one‐quarter of the behaviours assessed. The BRAVVO questionnaire has the potential to provide important meaningful information to patients about their early recovery and what they may expect to be able to achieve after treatment.\n\nBODY.INTRODUCTION:\nMinimally invasive treatments for varicose veins such as ultrasound‐guided foam sclerotherapy (UGFS) and thermal ablation techniques have become widely used alternatives to surgery for the treatment of varicose veins. One of the advantages of these techniques is the reported quicker return to normal activities, particularly following UGFS1, 2, 3. However, it is unclear whether thermal ablation, in particular endovenous laser ablation (EVLA), is also associated with a clinically significant quicker return to normal activities compared with surgery; some studies4, 5 have shown an earlier return and others2, 6, 7, 8 no difference. Until recently, there was no standard means of assessing recovery from the patient's perspective. This led to the use of varying definitions such as return to 'normal activities', 'full activity', 'daily activity' or 'basic physical activities' and/or 'return to work' in previous studies. 
This lack of standardization led the authors to develop a 15‐item questionnaire to assess distinct aspects of normal activities that were identified as important by patients9 – the Behavioural Recovery After treatment for Varicose Veins (BRAVVO) questionnaire. This paper reports behavioural recovery results from a multicentre parallel‐group randomized clinical trial (CLASS, Comparison of LAser, Surgery and foam Sclerotherapy) that compared the clinical efficacy and cost‐effectiveness of three treatment modalities: UGFS, EVLA with delayed foam sclerotherapy to residual varicosities if required, and surgery. Behavioural recovery was one of the prespecified secondary outcomes of the CLASS trial. The clinical and cost‐effectiveness results have been reported elsewhere10, 11.\n\nBODY.METHODS:\nPatients were recruited from 11 centres in the UK between November 2008 and October 2012. This study (ISRCTN51995477) had research ethics committee and Medicines and Healthcare products Regulatory Authority approval. Eight centres randomized participants to one of three treatment options, and three centres offered only UGFS and surgery. Participants were randomized between the treatments with even allocation, using a minimization algorithm that included centre, age (less than 50 years, 50 years or more), sex, great saphenous vein (GSV) or small saphenous vein (SSV) reflux, and unilateral or bilateral disease. Inclusion criteria were: age over 18 years; primary unilateral or bilateral symptomatic varicose veins (Clinical Etiologic Anatomic Pathophysiological (CEAP) grade C2 or above); GSV and/or SSV involvement; and reflux exceeding 1 s on duplex ultrasonography. Exclusion criteria were: current deep vein thrombosis; acute superficial vein thrombosis; a GSV or SSV diameter smaller than 3 mm or larger than 15 mm; tortuous veins considered unsuitable for EVLA or stripping; and contraindications to UGFS or to general/regional anaesthesia that would be required for surgery.\n\nBODY.METHODS.TREATMENTS:\nThe treatments have been described in detail elsewhere9, 10. For UGFS, foam was produced using the Tessari technique12 using a ratio of 0·5 ml sodium tetradecyl sulphate to 1·5 ml air (3 per cent for GSV/SSV truncal veins, 1 per cent for varicosities; maximum 12 ml foam per session). EVLA of GSVs/SSVs was performed under local anaesthetic, and patients were offered UGFS to any residual varicosities at 6‐week follow‐up if required, with the exception of one centre that performed concurrent phlebectomies. Surgery in the form of proximal GSV/SSV ligation and stripping (all GSV) and concurrent phlebectomies was performed under general or regional anaesthetic as a day‐case procedure. Compression stockings were applied after all three treatments.\n\nBODY.METHODS.POST‐TREATMENT ACTIVITY:\nAll participants were given a study patient information leaflet (PIL), which recommended a return to all normal activities as soon as they were able, but that strenuous activity/contact sport should be avoided for 1–2 weeks. The PIL specifically stated that following EVLA or UGFS 'most people are able to return to work within 2–3 days of treatment, but some people go back the following day or even the same day', and that following surgery 'people can return to office or sedentary work after 2–3 days; and that most people will be back at work within a week after surgery to one leg and 2 weeks after surgery to both legs; but there is no reason to remain off work as long if it can be managed with reasonable comfort'. 
Participants undergoing UGFS or EVLA were advised to wear compression stocking for 10 days constantly (day and night). Those in the surgery group were advised that bandages would be removed the day after operation, following which they should wear a stocking for 10 days, but that it was reasonable to remove the stocking after 4 or 5 days, providing that they were active.\n\nBODY.METHODS.DATA COLLECTION:\nThe participants were asked to complete the BRAVVO questionnaire along with other study questionnaires (Aberdeen Varicose Vein Questionnaire, EQ‐5DTM (EuroQoL, Rotterdam, The Netherlands) and Short Form 36 (QualityMetric, Lincoln, Rhode Island, USA)) at the 6‐week follow‐up appointment. Participants who failed to attend the 6‐week appointment were sent the questionnaire to complete at home. The BRAVVO questionnaire was developed as an instrument to assess the activity and participation components of the World Health Organization International Classification of Disability and Function model13. Variation in activity and participation is not fully explained by impairment and so these constructs are important additional indicators of health outcome. An interview study involving 17 patients who had recently undergone varicose vein treatment was carried out to identify normal activities and 'milestone' behaviours to incorporate into the questionnaire. In addition to sampling from the three treatment options, diversity sampling was used in an attempt to gain a mix of sex, age and rural–urban location. Seventeen interview transcripts were content‐analysed in four stages to identify appropriate items to include in a questionnaire. Full details of this process have been published previously9. The BRAVVO questionnaire assesses the time taken for patients to return to performing 15 behaviours: eight 'activity' behaviours (tasks or actions an individual is capable of doing in an idealized situation) and seven 'participation' behaviours (what the individual does in an everyday, real‐world situation) that were identified to be important from the patient's perspective9. Fig. 1 shows the question layout. Figure 1Question layoutBJS-10081-FIG-0001-c\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nData from the BRAVVO questionnaire were analysed within an interval‐censored time‐to‐event framework using flexible parametric survival models14. For each behaviour item, each participant's response was converted into the number of days to return to that behaviour. If a participant indicated that return to the behaviour was on the day of the procedure, this was assumed to be interval‐censored between day 0 and day 1. If a participant indicated return to the behaviour was after a number of weeks, not days, this was assumed to be interval‐censored between the previous week and the week indicated. For example, if a participant reported 5 weeks, it was assumed that the return to the behaviour took place between 28 and 35 days. A participant who indicated that they had not returned to a behaviour that they usually performed was right‐censored at 42 days. Participants who indicated that they did not normally perform a specific behaviour were not included in analysis of that behaviour. No missing data were imputed. Data are reported as the number of days for 50 and 90 per cent of participants to return to each behaviour, estimated from the parametric survival models (the 50 per cent value represents the median time to return to this behaviour). 
Extrapolation beyond the 42‐day cut‐off was performed for behaviours where 90 per cent of participants had not returned to the behaviour by 42 days. Treatment effects are presented as hazard ratios with associated 95 per cent c.i. All analyses were carried out in Stata® 1215. Flexible parametric survival models were fitted using the stpm package16.\n\nBODY.RESULTS:\nSeven hundred and ninety‐eight participants were recruited, of whom 13 were ineligible (for example because they had recurrent veins or veins larger than 15 mm in diameter) after randomization and were considered postrandomization exclusions (Fig. \n2). The groups were well balanced in terms of demographic characteristics at baseline, but there was an increased incidence of deep venous reflux in the foam group compared with the surgery group (P = 0·005) (Table \n1). Of the 670 participants who completed the 6‐week questionnaire, 655 completed at least one of the BRAVVO questions. Completion rates were slightly lower for the questions about going out socially (74·8 per cent) and sporting activity (66·0 per cent), which may not have been relevant to all participants. For all behaviours, except wearing clothes that show the leg, going out socially and sporting activities, over 95 per cent of participants had returned to normal behaviour within 6 weeks of intervention. Figure 2CONSORT diagram for the trial. Reasons for postrandomization exclusion included: recurrent varicose veins and veins larger than 15 mm. Reasons for withdrawal from follow‐up included: patient decided not to proceed with treatment (and also declined follow‐up), declined follow‐up after treatment or did not wish to complete questionnaires. UGFS, ultrasound‐guided foam sclerotherapy; EVLA, endovenous laser ablationBJS-10081-FIG-0002-c Table 1 Demographic details at recruitment EVLA UGFS Surgery ( n = 210) ( n = 286) ( n = 289) Age (years) * \n 49·7 (18–80) 49·0 (19–78) 49·2 (22–85) Sex ratio (F : M) 120 : 90 162 : 124 163 : 126 Body mass index (kg/m 2 ) * \n 27·0 (17–42) 27·1 (17–44) 27·7 (17–44) Unilateral disease 153 (72·9) 215 (75·2) 196 (67·8) Employment status Self‐employed 21 (10·2) 37 (13·0) 29 (10·3) Employed 120 (58·3) 169 (59·3) 179 (63·5) Other 65 (31·6) 79 (27·7) 74 (26·2) Unknown 4 1 7 Saphenous vein involvement Great saphenous 182 (86·7) 232 (81·1) 239 (82·7) Small saphenous 14 (6·7) 21 (7·3) 21 (7·3) Great and small saphenous 14 (6·7) 33 (11·5) 29 (10·0) Deep vein reflux 28 of 205 (13·7) 47 of 280 (16·8) 25 of 282 (8·9) CEAP classification C2, varicose veins over 3 mm 113 (54·1) 169 (59·1) 147 (51·2) C3, oedema 28 (13·4) 35 (12·2) 39 (13·6) C4, skin/subcutaneous changes 56 (26·8) 74 (25·9) 90 (31·4) C5/C6, healed/active venous ulcer 12 (5·7) 8 (2·8) 11 (3·8) Unknown 1 0 2 Values in parentheses are percentages unless indicated otherwise; * values are mean (range). EVLA, endovenous laser ablation; UGFS, ultrasound‐guided foam sclerotherapy; CEAP, Clinical Etiologic Anatomic Pathophysiologic. \n\nBODY.RESULTS.ULTRASOUND‐GUIDED FOAM SCLEROTHERAPY :\nParticipants randomized to UGFS recalled being able to carry out 13 of the 15 behaviours significantly more quickly than those randomized to surgery (Table \n2\n). The two behaviours for which there was no evidence of a difference in the time to recover between the trial arms were 'having a bath or shower' and 'wearing clothes that show the legs'. In general, the median time to return to the activity behaviours was 5 days or less for those randomized to UGFS and up to 9 days for those randomized to surgery. 
In both groups, there was greater variation in the median time to return to the participation behaviours than the activity behaviours. Table 2 Behavioural recovery: ultrasound‐guided foam sclerotherapy versus surgery Proportion carrying out behaviour (%) Time until specified proportion of participants could carry out behaviour (days) * \n Hazard ratio † \n UGFS Surgery Activity items Bending the legs without discomfort 50 3·0 4·6 1·38 (1·14, 1·67) 90 14·1 21·3 Lifting heavy objects without discomfort 50 4·8 9·8 1·97 (1·59, 2·44) 90 16·9 34·5 Moving from standing to sitting without discomfort 50 1·9 3·7 1·63 (1·35, 1·97) 90 9·3 17·5 Standing still for a long time (> 15 min ) without discomfort 50 3·9 7·1 1·67 (1·36, 2·05) 90 15·8 28·7 Walking short distances (< 20 min ) without discomfort 50 1·9 4·4 2·00 (1·65, 2·42) 90 8·2 19·1 Walking long distances (> 20 min) 50 4·5 8·0 1·76 (1·45, 2·14) 90 15·2 27·1 Having a bath or shower 50 5·4 4·9 0·85 (0·70, 1·03) 90 11·4 10·3 Driving a car 50 4·1 7·0 1·78 (1·45, 2·19) 90 12·4 21·1 Participation items Doing housework 50 2·1 4·5 2·10 (1·72, 2·56) 90 7·3 15·7 Looking after children 50 1·2 3·5 2·20 (1·61, 3·00) 90 6·2 17·9 Wearing clothes that show the legs 50 12·4 12·8 1·03 (0·78, 1·35) 90 56·6 58·7 Partial return to normal work/employment 50 4·4 9·9 2·16 (1·72, 2·72) 90 15·4 34·2 Full return to normal work/employment 50 4·8 11·7 2·56 (2·05, 3·21) 90 14·9 36·2 Going out socially 50 7·1 9·3 1·29 (1·06, 1·57) 90 25·8 34·0 Sporting activity or exercise 50 15·7 21·8 1·33 (1·05, 1·68) 90 62·6 86·7 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the surgery arm. UGFS, ultrasound‐guided foam sclerotherapy. \n\nBODY.RESULTS.ENDOVENOUS LASER ABLATION :\nParticipants randomized to EVLA recalled being able to carry out 13 of the 15 behaviours significantly more quickly than those randomized to surgery (Table \n3). Return to 'having a bath or shower' was quicker after surgery than after EVLA. There was no difference in time to return to the participation behaviour of 'wearing clothes that show the legs'. 
Table 3 Behavioural recovery: endovenous laser ablation versus surgery Proportion carrying out behaviour (%) Time until specified proportion of participants could carry out behaviour (days) * \n Hazard ratio † \n EVLA Surgery Activity items Bending the legs without discomfort 50 2·7 4·6 1·49 (1·19, 1·75) 90 12·6 21·3 Lifting heavy objects without discomfort 50 5·9 9·8 1·79 (1·39, 2·27) 90 20·5 34·5 Moving from standing to sitting without discomfort 50 2·2 3·7 1·56 (1·27, 1·96) 90 10·4 17·5 Standing still for a long time (> 15 min) without discomfort 50 4·8 7·1 1·41 (1·11, 1·79) 90 20·0 28·7 Walking short distances (< 20 min) without discomfort 50 3·0 4·4 1·30 (1·04, 1·61) 90 13·2 19·1 Walking long distances (> 20 min) 50 5·6 8·0 1·53 (1·06, 1·67) 90 19·8 27·1 Having a bath or shower 50 5·5 4·9 0·74 (0·59, 0·93) 90 12·8 10·3 Driving a car 50 4·4 7·0 1·82 (1·43, 2·33) 90 12·7 21·1 Participation items Doing housework 50 2·5 4·5 1·89 (1·49, 2·38) 90 8·4 15·7 Looking after children 50 1·9 3·5 1·61 (1·15, 2·27) 90 8·8 17·9 Wearing clothes that show the legs 50 14·6 12·8 0·97 (0·69, 1·35) 90 75·1 58·7 Partial return to normal work/employment 50 6·3 9·9 1·75 (1·33, 2·27) 90 21·1 34·2 Full return to normal work/employment 50 7·7 11·7 1·79 (1·37, 2·27) 90 23·5 36·2 Going out socially 50 6·9 9·3 1·41 (1·12, 1·75) 90 23·9 34·0 Sporting activity or exercise 50 14·2 21·8 1·47 (1·12, 1·92) 90 55·5 86·7 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the surgery arm. EVLA, endovenous laser ablation. There were no differences in the time taken to return to 11 of the 15 behaviours between participants randomized to EVLA and those randomized to UGFS (Table \n4). Return to 'walking short distances without discomfort', 'walking long distances', 'looking after children' and 'full return to normal work/employment' took longer for the EVLA group than the UGFS group. Following UGFS or EVLA only one‐third of the specific behaviours could be carried out by 50 per cent of participants by 3 days after treatment. 
Table 4 Behavioural recovery: endovenous laser ablation versus ultrasound‐guided foam sclerotherapy Proportion carrying out behaviour (%) Time until specified proportion of participants can carry out behaviour (days) * \n Hazard ratio † \n EVLA UGFS Activity items Bending the legs without discomfort 50 2·7 3·0 0·94 (0·75, 1·17) 90 12·6 14·1 Lifting heavy objects without discomfort 50 5·9 4·8 1·11 (0·87, 1·42) 90 20·5 16·9 Moving from standing to sitting without discomfort 50 2·2 1·9 1·12 (0·90, 1·40) 90 10·4 9·3 Standing still for a long time (> 15 min) without discomfort 50 4·8 3·9 1·14 (0·90, 1·44) 90 20·0 15·8 Walking short distances (< 20 min) without discomfort 50 3·0 1·9 1·48 (1·19, 1·84) 90 13·2 8·2 Walking long distances (> 20 min) 50 5·6 4·5 1·32 (1·05, 1·66) 90 19·8 15·2 Having a bath or shower 50 5·5 5·4 1·19 (0·96, 1·48) 90 12·8 11·4 Driving a car 50 4·4 4·1 0·95 (0·74, 1·21) 90 12·7 12·4 Participation items Doing housework 50 2·5 2·1 1·03 (0·82, 1·29) 90 8·4 7·3 Looking after children 50 1·9 1·2 1·45 (1·04, 2·02) 90 8·8 6·2 Wearing clothes that show the legs 50 14·6 12·4 1·17 (0·83, 1·64) 90 75·1 56·6 Partial return to normal work/employment 50 6·3 4·4 1·17 (0·89, 1·52) 90 21·1 15·4 Full return to normal work/employment 50 7·7 4·8 1·43 (1·11, 1·85) 90 23·5 14·9 Going out socially 50 6·9 7·1 0·88 (0·70, 1·10) 90 23·9 25·8 Sporting activity or exercise 50 14·2 15·7 0·80 (0·61, 1·04) 90 55·5 62·6 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the endovenous laser ablation (EVLA) arm. UGFS, ultrasound‐guided foam sclerotherapy. \n\nBODY.DISCUSSION:\nThis study showed that both UGFS and EVLA resulted in a more rapid recovery compared with surgery for 13 of the 15 behaviours. UGFS was superior to EVLA in terms of return to full time work, looking after children and walking (both short and long distances). Importantly, the specific behaviours assessed were shown to have a range of different recovery trajectories. Previous randomized clinical trials showed behavioural recovery to be more rapid following UGFS compared with surgery1, 2, but the benefit of EVLA over surgery was less clear2, 4, 8 . In this study, for all but two behaviours (wearing clothes that showed the legs and showering/bathing) the recovery was quicker following UGFS or EVLA compared with surgery. These findings may have arisen as a result of information contained in the study PIL, which recommended that compression hosiery was worn continuously for 10 days following UGFS or EVLA but for 4–5 days routinely after surgery. In the comparison between UGFS and EVLA, behavioural recovery was faster following UGFS for four of the 15 behaviours; there was no difference between the groups for the other behaviours. Two previous randomized trials2, 3 showed earlier return to 'normal activities' in patients undergoing UGFS compared with EVLA. Specifically, the present study showed a quicker return to full‐time work following UGFS, similar to the findings of Rasmussen and colleagues2. The median time taken to return to work following EVLA (7·7 days) was within the ranges reported2, 4, 5, 6, 7, 8. However, Rasmussen and colleagues2 reported earlier return to work after UGFS compared with the present study (median 2·9 versus 4·8 days respectively). 
A partial explanation of the difference between the two studies may be that, unlike the previous study, the present analysis did not correct for weekends. For other behaviours, the recalled recovery times following both UGFS and EVLA were longer than might be expected from the literature2, 3, 4, 5, 8. This may be explained by the timing of the questionnaire at 6 weeks, and thus it is the nature of the differences between treatment groups rather than the absolute timings taken to return to these activities that the authors wish to highlight in this paper. The extent of this overall delay in recovery is hard to justify, particularly in light of the standard information and advice given in the study PIL. There may have been a number of external influences affecting participants' recollection of their recovery, including misinformation and fear. Although attitudes to recovery and return to normal behaviours have changed in secondary care, this may not have filtered into primary care or 'public knowledge'. Fear of activity or fear of pain caused by activity has been documented following surgery for other conditions17, 18. It is possible that some people undergoing treatment for varicose veins experience similar fears, and this may limit or restrict their activity following treatment. With regard to return to work, there are clearly a number of additional factors that might play a role, such as a person's employment status (employed or self‐employed), the sickness benefits they are entitled to, the type of work they are employed to do, how long they are 'signed off' by the doctor, and the views of their employer on return to work after an operation. It should be noted that this study distinguished between partial and full return to work, and that no difference was noted in partial return to work following UGFS and EVLA. This finding may be of substantial importance to patients, their employers and the economy as a whole. The main strength of this study is that the behaviours investigated were based on systematic investigation of the recovery milestones that are important to patients following treatment for varicose veins. Hence, the findings are of personal importance from a patient perspective. Distinguishing between the behaviours that contribute to 'normal activity' helps build a profile of recovery that may be particularly useful for patients preparing for, or recovering from, treatment. Furthermore, the methodology used to develop the BRAVVO questionnaire could be used in other conditions to provide normative information about behavioural recovery that is relevant to patients. The BRAVVO questionnaire was pilot tested and found to be acceptable to patients, comprehensible and appropriate for self‐completion. Despite this, a potential weakness of the study is that the level of missing data in the BRAVVO questionnaire was higher for two of the questions. Further work to reformat or rephrase the questions or response options may help minimize levels of missing data. A further potential weakness is the choice of assessment time point (6 weeks after treatment). This may have compromised recall, particularly for behaviours that participants were able to return to a short time after treatment; however, any compromise in recall is likely to have affected the three treatment groups equally. Other study outcomes were assessed at 6 weeks, and behavioural recovery was assessed at the same time point to minimize participant burden. 
Further work is required to determine the optimal timing(s) of this questionnaire. Given that the median time to return to the behaviour was less than 14 days for 13 of the behaviours, and up to 22 days for the other two (wearing clothes that show the legs, sporting activity or exercise), the use of the questionnaire at approximately 2–3 weeks would seem appropriate.\n\n**Question:** Compared to surgery what was the result of Ultrasound‐guided foam sclerotherapy on Bending the legs without discomfort?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
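The row above pairs the question about 'bending the legs without discomfort' with the label shown in its last cell; the article supports that comparison with a hazard ratio of 1·38 (95 per cent c.i. 1·14 to 1·67). As an illustration only, and not part of the dataset or its authors' code, the minimal Python sketch below shows one way the three allowed answer strings could be derived from a hazard ratio and its confidence interval, assuming that a 95 per cent c.i. lying entirely above 1·00 (faster recovery with the intervention) corresponds to the "significantly increase" option, as it does in the rows shown here. The function name and decision rule are assumptions for illustration.

```python
def conclude(hr, ci_low, ci_high):
    """Map a hazard ratio and its 95% CI to one of the three answer options.

    Illustrative decision rule (an assumption, not the dataset's own logic):
    a CI entirely above 1.00 is read as "significantly increase", a CI entirely
    below 1.00 as "significantly decrease", and a CI spanning 1.00 as
    "no significant difference".
    """
    if ci_low > 1.0:
        return "significantly increase"
    if ci_high < 1.0:
        return "significantly decrease"
    return "no significant difference"


# Values taken from Table 2 (UGFS versus surgery) in the article above:
print(conclude(1.38, 1.14, 1.67))  # bending the legs        -> significantly increase
print(conclude(1.97, 1.59, 2.44))  # lifting heavy objects   -> significantly increase
print(conclude(0.85, 0.70, 1.03))  # having a bath or shower -> no significant difference
```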
825
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Behavioural recovery after treatment for varicose veins\n\n ABSTRACT.BACKGROUND:\nThe aim of this study was to assess behavioural recovery from the patient's perspective as a prespecified secondary outcome in a multicentre parallel‐group randomized clinical trial comparing ultrasound‐guided foam sclerotherapy (UGFS), endovenous laser ablation (EVLA) and surgery for the treatment of primary varicose veins.\n\nABSTRACT.METHODS:\nParticipants were recruited from 11 UK sites as part of the CLASS trial, a randomized trial of UGFS, EVLA or surgery for varicose veins. Patients were followed up 6 weeks after treatment and asked to complete the Behavioural Recovery After treatment for Varicose Veins (BRAVVO) questionnaire. This is a 15‐item instrument that covers eight activity behaviours (tasks or actions an individual is capable of doing in an idealized situation) and seven participation behaviours (what the individual does in an everyday, real‐world situation) that were identified to be important from the patient's perspective.\n\nABSTRACT.RESULTS:\nA total of 798 participants were recruited. Both UGFS and EVLA resulted in a significantly quicker recovery compared with surgery for 13 of the 15 behaviours assessed. UGFS was superior to EVLA in terms of return to full‐time work (hazard ratio 1·43, 95 per cent c.i. 1·11 to 1·85), looking after children (1·45, 1·04 to 2·02) and walks of short (1·48, 1·19 to 1·84) and longer (1·32, 1·05 to 1·66) duration.\n\nABSTRACT.CONCLUSION:\nBoth UGFS and EVLA resulted in more rapid recovery than surgery, and UGFS was superior to EVLA for one‐quarter of the behaviours assessed. The BRAVVO questionnaire has the potential to provide important meaningful information to patients about their early recovery and what they may expect to be able to achieve after treatment.\n\nBODY.INTRODUCTION:\nMinimally invasive treatments for varicose veins such as ultrasound‐guided foam sclerotherapy (UGFS) and thermal ablation techniques have become widely used alternatives to surgery for the treatment of varicose veins. One of the advantages of these techniques is the reported quicker return to normal activities, particularly following UGFS1, 2, 3. However, it is unclear whether thermal ablation, in particular endovenous laser ablation (EVLA), is also associated with a clinically significant quicker return to normal activities compared with surgery; some studies4, 5 have shown an earlier return and others2, 6, 7, 8 no difference. Until recently, there was no standard means of assessing recovery from the patient's perspective. This led to the use of varying definitions such as return to 'normal activities', 'full activity', 'daily activity' or 'basic physical activities' and/or 'return to work' in previous studies. 
This lack of standardization led the authors to develop a 15‐item questionnaire to assess distinct aspects of normal activities that were identified as important by patients9 – the Behavioural Recovery After treatment for Varicose Veins (BRAVVO) questionnaire. This paper reports behavioural recovery results from a multicentre parallel‐group randomized clinical trial (CLASS, Comparison of LAser, Surgery and foam Sclerotherapy) that compared the clinical efficacy and cost‐effectiveness of three treatment modalities: UGFS, EVLA with delayed foam sclerotherapy to residual varicosities if required, and surgery. Behavioural recovery was one of the prespecified secondary outcomes of the CLASS trial. The clinical and cost‐effectiveness results have been reported elsewhere10, 11.\n\nBODY.METHODS:\nPatients were recruited from 11 centres in the UK between November 2008 and October 2012. This study (ISRCTN51995477) had research ethics committee and Medicines and Healthcare products Regulatory Authority approval. Eight centres randomized participants to one of three treatment options, and three centres offered only UGFS and surgery. Participants were randomized between the treatments with even allocation, using a minimization algorithm that included centre, age (less than 50 years, 50 years or more), sex, great saphenous vein (GSV) or small saphenous vein (SSV) reflux, and unilateral or bilateral disease. Inclusion criteria were: age over 18 years; primary unilateral or bilateral symptomatic varicose veins (Clinical Etiologic Anatomic Pathophysiological (CEAP) grade C2 or above); GSV and/or SSV involvement; and reflux exceeding 1 s on duplex ultrasonography. Exclusion criteria were: current deep vein thrombosis; acute superficial vein thrombosis; a GSV or SSV diameter smaller than 3 mm or larger than 15 mm; tortuous veins considered unsuitable for EVLA or stripping; and contraindications to UGFS or to general/regional anaesthesia that would be required for surgery.\n\nBODY.METHODS.TREATMENTS:\nThe treatments have been described in detail elsewhere9, 10. For UGFS, foam was produced using the Tessari technique12 using a ratio of 0·5 ml sodium tetradecyl sulphate to 1·5 ml air (3 per cent for GSV/SSV truncal veins, 1 per cent for varicosities; maximum 12 ml foam per session). EVLA of GSVs/SSVs was performed under local anaesthetic, and patients were offered UGFS to any residual varicosities at 6‐week follow‐up if required, with the exception of one centre that performed concurrent phlebectomies. Surgery in the form of proximal GSV/SSV ligation and stripping (all GSV) and concurrent phlebectomies was performed under general or regional anaesthetic as a day‐case procedure. Compression stockings were applied after all three treatments.\n\nBODY.METHODS.POST‐TREATMENT ACTIVITY:\nAll participants were given a study patient information leaflet (PIL), which recommended a return to all normal activities as soon as they were able, but that strenuous activity/contact sport should be avoided for 1–2 weeks. The PIL specifically stated that following EVLA or UGFS 'most people are able to return to work within 2–3 days of treatment, but some people go back the following day or even the same day', and that following surgery 'people can return to office or sedentary work after 2–3 days; and that most people will be back at work within a week after surgery to one leg and 2 weeks after surgery to both legs; but there is no reason to remain off work as long if it can be managed with reasonable comfort'. 
Participants undergoing UGFS or EVLA were advised to wear compression stocking for 10 days constantly (day and night). Those in the surgery group were advised that bandages would be removed the day after operation, following which they should wear a stocking for 10 days, but that it was reasonable to remove the stocking after 4 or 5 days, providing that they were active.\n\nBODY.METHODS.DATA COLLECTION:\nThe participants were asked to complete the BRAVVO questionnaire along with other study questionnaires (Aberdeen Varicose Vein Questionnaire, EQ‐5DTM (EuroQoL, Rotterdam, The Netherlands) and Short Form 36 (QualityMetric, Lincoln, Rhode Island, USA)) at the 6‐week follow‐up appointment. Participants who failed to attend the 6‐week appointment were sent the questionnaire to complete at home. The BRAVVO questionnaire was developed as an instrument to assess the activity and participation components of the World Health Organization International Classification of Disability and Function model13. Variation in activity and participation is not fully explained by impairment and so these constructs are important additional indicators of health outcome. An interview study involving 17 patients who had recently undergone varicose vein treatment was carried out to identify normal activities and 'milestone' behaviours to incorporate into the questionnaire. In addition to sampling from the three treatment options, diversity sampling was used in an attempt to gain a mix of sex, age and rural–urban location. Seventeen interview transcripts were content‐analysed in four stages to identify appropriate items to include in a questionnaire. Full details of this process have been published previously9. The BRAVVO questionnaire assesses the time taken for patients to return to performing 15 behaviours: eight 'activity' behaviours (tasks or actions an individual is capable of doing in an idealized situation) and seven 'participation' behaviours (what the individual does in an everyday, real‐world situation) that were identified to be important from the patient's perspective9. Fig. 1 shows the question layout. Figure 1Question layoutBJS-10081-FIG-0001-c\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nData from the BRAVVO questionnaire were analysed within an interval‐censored time‐to‐event framework using flexible parametric survival models14. For each behaviour item, each participant's response was converted into the number of days to return to that behaviour. If a participant indicated that return to the behaviour was on the day of the procedure, this was assumed to be interval‐censored between day 0 and day 1. If a participant indicated return to the behaviour was after a number of weeks, not days, this was assumed to be interval‐censored between the previous week and the week indicated. For example, if a participant reported 5 weeks, it was assumed that the return to the behaviour took place between 28 and 35 days. A participant who indicated that they had not returned to a behaviour that they usually performed was right‐censored at 42 days. Participants who indicated that they did not normally perform a specific behaviour were not included in analysis of that behaviour. No missing data were imputed. Data are reported as the number of days for 50 and 90 per cent of participants to return to each behaviour, estimated from the parametric survival models (the 50 per cent value represents the median time to return to this behaviour). 
Extrapolation beyond the 42‐day cut‐off was performed for behaviours where 90 per cent of participants had not returned to the behaviour by 42 days. Treatment effects are presented as hazard ratios with associated 95 per cent c.i. All analyses were carried out in Stata® 1215. Flexible parametric survival models were fitted using the stpm package16.\n\nBODY.RESULTS:\nSeven hundred and ninety‐eight participants were recruited, of whom 13 were ineligible (for example because they had recurrent veins or veins larger than 15 mm in diameter) after randomization and were considered postrandomization exclusions (Fig. \n2). The groups were well balanced in terms of demographic characteristics at baseline, but there was an increased incidence of deep venous reflux in the foam group compared with the surgery group (P = 0·005) (Table \n1). Of the 670 participants who completed the 6‐week questionnaire, 655 completed at least one of the BRAVVO questions. Completion rates were slightly lower for the questions about going out socially (74·8 per cent) and sporting activity (66·0 per cent), which may not have been relevant to all participants. For all behaviours, except wearing clothes that show the leg, going out socially and sporting activities, over 95 per cent of participants had returned to normal behaviour within 6 weeks of intervention. Figure 2CONSORT diagram for the trial. Reasons for postrandomization exclusion included: recurrent varicose veins and veins larger than 15 mm. Reasons for withdrawal from follow‐up included: patient decided not to proceed with treatment (and also declined follow‐up), declined follow‐up after treatment or did not wish to complete questionnaires. UGFS, ultrasound‐guided foam sclerotherapy; EVLA, endovenous laser ablationBJS-10081-FIG-0002-c Table 1 Demographic details at recruitment EVLA UGFS Surgery ( n = 210) ( n = 286) ( n = 289) Age (years) * \n 49·7 (18–80) 49·0 (19–78) 49·2 (22–85) Sex ratio (F : M) 120 : 90 162 : 124 163 : 126 Body mass index (kg/m 2 ) * \n 27·0 (17–42) 27·1 (17–44) 27·7 (17–44) Unilateral disease 153 (72·9) 215 (75·2) 196 (67·8) Employment status Self‐employed 21 (10·2) 37 (13·0) 29 (10·3) Employed 120 (58·3) 169 (59·3) 179 (63·5) Other 65 (31·6) 79 (27·7) 74 (26·2) Unknown 4 1 7 Saphenous vein involvement Great saphenous 182 (86·7) 232 (81·1) 239 (82·7) Small saphenous 14 (6·7) 21 (7·3) 21 (7·3) Great and small saphenous 14 (6·7) 33 (11·5) 29 (10·0) Deep vein reflux 28 of 205 (13·7) 47 of 280 (16·8) 25 of 282 (8·9) CEAP classification C2, varicose veins over 3 mm 113 (54·1) 169 (59·1) 147 (51·2) C3, oedema 28 (13·4) 35 (12·2) 39 (13·6) C4, skin/subcutaneous changes 56 (26·8) 74 (25·9) 90 (31·4) C5/C6, healed/active venous ulcer 12 (5·7) 8 (2·8) 11 (3·8) Unknown 1 0 2 Values in parentheses are percentages unless indicated otherwise; * values are mean (range). EVLA, endovenous laser ablation; UGFS, ultrasound‐guided foam sclerotherapy; CEAP, Clinical Etiologic Anatomic Pathophysiologic. \n\nBODY.RESULTS.ULTRASOUND‐GUIDED FOAM SCLEROTHERAPY :\nParticipants randomized to UGFS recalled being able to carry out 13 of the 15 behaviours significantly more quickly than those randomized to surgery (Table \n2\n). The two behaviours for which there was no evidence of a difference in the time to recover between the trial arms were 'having a bath or shower' and 'wearing clothes that show the legs'. In general, the median time to return to the activity behaviours was 5 days or less for those randomized to UGFS and up to 9 days for those randomized to surgery. 
In both groups, there was greater variation in the median time to return to the participation behaviours than the activity behaviours. Table 2 Behavioural recovery: ultrasound‐guided foam sclerotherapy versus surgery Proportion carrying out behaviour (%) Time until specified proportion of participants could carry out behaviour (days) * \n Hazard ratio † \n UGFS Surgery Activity items Bending the legs without discomfort 50 3·0 4·6 1·38 (1·14, 1·67) 90 14·1 21·3 Lifting heavy objects without discomfort 50 4·8 9·8 1·97 (1·59, 2·44) 90 16·9 34·5 Moving from standing to sitting without discomfort 50 1·9 3·7 1·63 (1·35, 1·97) 90 9·3 17·5 Standing still for a long time (> 15 min ) without discomfort 50 3·9 7·1 1·67 (1·36, 2·05) 90 15·8 28·7 Walking short distances (< 20 min ) without discomfort 50 1·9 4·4 2·00 (1·65, 2·42) 90 8·2 19·1 Walking long distances (> 20 min) 50 4·5 8·0 1·76 (1·45, 2·14) 90 15·2 27·1 Having a bath or shower 50 5·4 4·9 0·85 (0·70, 1·03) 90 11·4 10·3 Driving a car 50 4·1 7·0 1·78 (1·45, 2·19) 90 12·4 21·1 Participation items Doing housework 50 2·1 4·5 2·10 (1·72, 2·56) 90 7·3 15·7 Looking after children 50 1·2 3·5 2·20 (1·61, 3·00) 90 6·2 17·9 Wearing clothes that show the legs 50 12·4 12·8 1·03 (0·78, 1·35) 90 56·6 58·7 Partial return to normal work/employment 50 4·4 9·9 2·16 (1·72, 2·72) 90 15·4 34·2 Full return to normal work/employment 50 4·8 11·7 2·56 (2·05, 3·21) 90 14·9 36·2 Going out socially 50 7·1 9·3 1·29 (1·06, 1·57) 90 25·8 34·0 Sporting activity or exercise 50 15·7 21·8 1·33 (1·05, 1·68) 90 62·6 86·7 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the surgery arm. UGFS, ultrasound‐guided foam sclerotherapy. \n\nBODY.RESULTS.ENDOVENOUS LASER ABLATION :\nParticipants randomized to EVLA recalled being able to carry out 13 of the 15 behaviours significantly more quickly than those randomized to surgery (Table \n3). Return to 'having a bath or shower' was quicker after surgery than after EVLA. There was no difference in time to return to the participation behaviour of 'wearing clothes that show the legs'. 
Table 3 Behavioural recovery: endovenous laser ablation versus surgery Proportion carrying out behaviour (%) Time until specified proportion of participants could carry out behaviour (days) * \n Hazard ratio † \n EVLA Surgery Activity items Bending the legs without discomfort 50 2·7 4·6 1·49 (1·19, 1·75) 90 12·6 21·3 Lifting heavy objects without discomfort 50 5·9 9·8 1·79 (1·39, 2·27) 90 20·5 34·5 Moving from standing to sitting without discomfort 50 2·2 3·7 1·56 (1·27, 1·96) 90 10·4 17·5 Standing still for a long time (> 15 min) without discomfort 50 4·8 7·1 1·41 (1·11, 1·79) 90 20·0 28·7 Walking short distances (< 20 min) without discomfort 50 3·0 4·4 1·30 (1·04, 1·61) 90 13·2 19·1 Walking long distances (> 20 min) 50 5·6 8·0 1·53 (1·06, 1·67) 90 19·8 27·1 Having a bath or shower 50 5·5 4·9 0·74 (0·59, 0·93) 90 12·8 10·3 Driving a car 50 4·4 7·0 1·82 (1·43, 2·33) 90 12·7 21·1 Participation items Doing housework 50 2·5 4·5 1·89 (1·49, 2·38) 90 8·4 15·7 Looking after children 50 1·9 3·5 1·61 (1·15, 2·27) 90 8·8 17·9 Wearing clothes that show the legs 50 14·6 12·8 0·97 (0·69, 1·35) 90 75·1 58·7 Partial return to normal work/employment 50 6·3 9·9 1·75 (1·33, 2·27) 90 21·1 34·2 Full return to normal work/employment 50 7·7 11·7 1·79 (1·37, 2·27) 90 23·5 36·2 Going out socially 50 6·9 9·3 1·41 (1·12, 1·75) 90 23·9 34·0 Sporting activity or exercise 50 14·2 21·8 1·47 (1·12, 1·92) 90 55·5 86·7 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the surgery arm. EVLA, endovenous laser ablation. There were no differences in the time taken to return to 11 of the 15 behaviours between participants randomized to EVLA and those randomized to UGFS (Table \n4). Return to 'walking short distances without discomfort', 'walking long distances', 'looking after children' and 'full return to normal work/employment' took longer for the EVLA group than the UGFS group. Following UGFS or EVLA only one‐third of the specific behaviours could be carried out by 50 per cent of participants by 3 days after treatment. 
Table 4 Behavioural recovery: endovenous laser ablation versus ultrasound‐guided foam sclerotherapy Proportion carrying out behaviour (%) Time until specified proportion of participants can carry out behaviour (days) * \n Hazard ratio † \n EVLA UGFS Activity items Bending the legs without discomfort 50 2·7 3·0 0·94 (0·75, 1·17) 90 12·6 14·1 Lifting heavy objects without discomfort 50 5·9 4·8 1·11 (0·87, 1·42) 90 20·5 16·9 Moving from standing to sitting without discomfort 50 2·2 1·9 1·12 (0·90, 1·40) 90 10·4 9·3 Standing still for a long time (> 15 min) without discomfort 50 4·8 3·9 1·14 (0·90, 1·44) 90 20·0 15·8 Walking short distances (< 20 min) without discomfort 50 3·0 1·9 1·48 (1·19, 1·84) 90 13·2 8·2 Walking long distances (> 20 min) 50 5·6 4·5 1·32 (1·05, 1·66) 90 19·8 15·2 Having a bath or shower 50 5·5 5·4 1·19 (0·96, 1·48) 90 12·8 11·4 Driving a car 50 4·4 4·1 0·95 (0·74, 1·21) 90 12·7 12·4 Participation items Doing housework 50 2·5 2·1 1·03 (0·82, 1·29) 90 8·4 7·3 Looking after children 50 1·9 1·2 1·45 (1·04, 2·02) 90 8·8 6·2 Wearing clothes that show the legs 50 14·6 12·4 1·17 (0·83, 1·64) 90 75·1 56·6 Partial return to normal work/employment 50 6·3 4·4 1·17 (0·89, 1·52) 90 21·1 15·4 Full return to normal work/employment 50 7·7 4·8 1·43 (1·11, 1·85) 90 23·5 14·9 Going out socially 50 6·9 7·1 0·88 (0·70, 1·10) 90 23·9 25·8 Sporting activity or exercise 50 14·2 15·7 0·80 (0·61, 1·04) 90 55·5 62·6 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the endovenous laser ablation (EVLA) arm. UGFS, ultrasound‐guided foam sclerotherapy. \n\nBODY.DISCUSSION:\nThis study showed that both UGFS and EVLA resulted in a more rapid recovery compared with surgery for 13 of the 15 behaviours. UGFS was superior to EVLA in terms of return to full time work, looking after children and walking (both short and long distances). Importantly, the specific behaviours assessed were shown to have a range of different recovery trajectories. Previous randomized clinical trials showed behavioural recovery to be more rapid following UGFS compared with surgery1, 2, but the benefit of EVLA over surgery was less clear2, 4, 8 . In this study, for all but two behaviours (wearing clothes that showed the legs and showering/bathing) the recovery was quicker following UGFS or EVLA compared with surgery. These findings may have arisen as a result of information contained in the study PIL, which recommended that compression hosiery was worn continuously for 10 days following UGFS or EVLA but for 4–5 days routinely after surgery. In the comparison between UGFS and EVLA, behavioural recovery was faster following UGFS for four of the 15 behaviours; there was no difference between the groups for the other behaviours. Two previous randomized trials2, 3 showed earlier return to 'normal activities' in patients undergoing UGFS compared with EVLA. Specifically, the present study showed a quicker return to full‐time work following UGFS, similar to the findings of Rasmussen and colleagues2. The median time taken to return to work following EVLA (7·7 days) was within the ranges reported2, 4, 5, 6, 7, 8. However, Rasmussen and colleagues2 reported earlier return to work after UGFS compared with the present study (median 2·9 versus 4·8 days respectively). 
A partial explanation of the difference between the two studies may be that, unlike the previous study, the present analysis did not correct for weekends. For other behaviours, the recalled recovery times following both UGFS and EVLA were longer than might be expected from the literature2, 3, 4, 5, 8. This may be explained by the timing of the questionnaire at 6 weeks, and thus it is the nature of the differences between treatment groups rather than the absolute timings taken to return to these activities that the authors wish to highlight in this paper. The extent of this overall delay in recovery is hard to justify, particularly in light of the standard information and advice given in the study PIL. There may have been a number of external influences affecting participants' recollection of their recovery, including misinformation and fear. Although attitudes to recovery and return to normal behaviours have changed in secondary care, this may not have filtered into primary care or 'public knowledge'. Fear of activity or fear of pain caused by activity has been documented following surgery for other conditions17, 18. It is possible that some people undergoing treatment for varicose veins experience similar fears, and this may limit or restrict their activity following treatment. With regard to return to work, there are clearly a number of additional factors that might play a role, such as a person's employment status (employed or self‐employed), the sickness benefits they are entitled to, the type of work they are employed to do, how long they are 'signed off' by the doctor, and the views of their employer on return to work after an operation. It should be noted that this study distinguished between partial and full return to work, and that no difference was noted in partial return to work following UGFS and EVLA. This finding may be of substantial importance to patients, their employers and the economy as a whole. The main strength of this study is that the behaviours investigated were based on systematic investigation of the recovery milestones that are important to patients following treatment for varicose veins. Hence, the findings are of personal importance from a patient perspective. Distinguishing between the behaviours that contribute to 'normal activity' helps build a profile of recovery that may be particularly useful for patients preparing for, or recovering from, treatment. Furthermore, the methodology used to develop the BRAVVO questionnaire could be used in other conditions to provide normative information about behavioural recovery that is relevant to patients. The BRAVVO questionnaire was pilot tested and found to be acceptable to patients, comprehensible and appropriate for self‐completion. Despite this, a potential weakness of the study is that the level of missing data in the BRAVVO questionnaire was higher for two of the questions. Further work to reformat or rephrase the questions or response options may help minimize levels of missing data. A further potential weakness is the choice of assessment time point (6 weeks after treatment). This may have compromised recall, particularly for behaviours that participants were able to return to a short time after treatment; however, any compromise in recall is likely to have affected the three treatment groups equally. Other study outcomes were assessed at 6 weeks, and behavioural recovery was assessed at the same time point to minimize participant burden. 
Further work is required to determine the optimal timing(s) of this questionnaire. Given that the median time to return to the behaviour was less than 14 days for 13 of the behaviours, and up to 22 days for the other two (wearing clothes that show the legs, sporting activity or exercise), the use of the questionnaire at approximately 2–3 weeks would seem appropriate.\n\n**Question:** Compared to surgery what was the result of Ultrasound‐guided foam sclerotherapy on Lifting heavy objects without discomfort?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
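The statistical analysis passage in the prompt of the row above describes how recalled BRAVVO responses were converted to interval-censored times: same-day responses censored between day 0 and day 1, week-based responses censored between the previous week and the reported week (a report of 5 weeks becomes 28–35 days), and participants who had not returned to a behaviour right-censored at 42 days. Purely as a hedged sketch of those stated rules, and not the trial's actual Stata analysis code, the helper below converts such a response into a (lower, upper) interval in days; the function name and unit labels are invented for illustration.

```python
def to_interval_days(value, unit):
    """Convert a recalled BRAVVO response into an interval-censored (lower, upper) pair of days.

    Encodes only the rules stated in the methods text; None as the upper bound
    marks right-censoring at 42 days. Illustrative only.
    """
    if unit == "same_day":        # returned on the day of the procedure
        return (0, 1)
    if unit == "weeks":           # e.g. a report of 5 weeks -> between 28 and 35 days
        return ((value - 1) * 7, value * 7)
    if unit == "not_returned":    # had not returned to the behaviour by follow-up
        return (42, None)
    raise ValueError(f"unhandled unit: {unit}")


# The worked example from the text: 5 weeks is treated as the interval (28, 35) days.
assert to_interval_days(5, "weeks") == (28, 35)
```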
829
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Behavioural recovery after treatment for varicose veins\n\n ABSTRACT.BACKGROUND:\nThe aim of this study was to assess behavioural recovery from the patient's perspective as a prespecified secondary outcome in a multicentre parallel‐group randomized clinical trial comparing ultrasound‐guided foam sclerotherapy (UGFS), endovenous laser ablation (EVLA) and surgery for the treatment of primary varicose veins.\n\nABSTRACT.METHODS:\nParticipants were recruited from 11 UK sites as part of the CLASS trial, a randomized trial of UGFS, EVLA or surgery for varicose veins. Patients were followed up 6 weeks after treatment and asked to complete the Behavioural Recovery After treatment for Varicose Veins (BRAVVO) questionnaire. This is a 15‐item instrument that covers eight activity behaviours (tasks or actions an individual is capable of doing in an idealized situation) and seven participation behaviours (what the individual does in an everyday, real‐world situation) that were identified to be important from the patient's perspective.\n\nABSTRACT.RESULTS:\nA total of 798 participants were recruited. Both UGFS and EVLA resulted in a significantly quicker recovery compared with surgery for 13 of the 15 behaviours assessed. UGFS was superior to EVLA in terms of return to full‐time work (hazard ratio 1·43, 95 per cent c.i. 1·11 to 1·85), looking after children (1·45, 1·04 to 2·02) and walks of short (1·48, 1·19 to 1·84) and longer (1·32, 1·05 to 1·66) duration.\n\nABSTRACT.CONCLUSION:\nBoth UGFS and EVLA resulted in more rapid recovery than surgery, and UGFS was superior to EVLA for one‐quarter of the behaviours assessed. The BRAVVO questionnaire has the potential to provide important meaningful information to patients about their early recovery and what they may expect to be able to achieve after treatment.\n\nBODY.INTRODUCTION:\nMinimally invasive treatments for varicose veins such as ultrasound‐guided foam sclerotherapy (UGFS) and thermal ablation techniques have become widely used alternatives to surgery for the treatment of varicose veins. One of the advantages of these techniques is the reported quicker return to normal activities, particularly following UGFS1, 2, 3. However, it is unclear whether thermal ablation, in particular endovenous laser ablation (EVLA), is also associated with a clinically significant quicker return to normal activities compared with surgery; some studies4, 5 have shown an earlier return and others2, 6, 7, 8 no difference. Until recently, there was no standard means of assessing recovery from the patient's perspective. This led to the use of varying definitions such as return to 'normal activities', 'full activity', 'daily activity' or 'basic physical activities' and/or 'return to work' in previous studies. 
This lack of standardization led the authors to develop a 15‐item questionnaire to assess distinct aspects of normal activities that were identified as important by patients9 – the Behavioural Recovery After treatment for Varicose Veins (BRAVVO) questionnaire. This paper reports behavioural recovery results from a multicentre parallel‐group randomized clinical trial (CLASS, Comparison of LAser, Surgery and foam Sclerotherapy) that compared the clinical efficacy and cost‐effectiveness of three treatment modalities: UGFS, EVLA with delayed foam sclerotherapy to residual varicosities if required, and surgery. Behavioural recovery was one of the prespecified secondary outcomes of the CLASS trial. The clinical and cost‐effectiveness results have been reported elsewhere10, 11.\n\nBODY.METHODS:\nPatients were recruited from 11 centres in the UK between November 2008 and October 2012. This study (ISRCTN51995477) had research ethics committee and Medicines and Healthcare products Regulatory Authority approval. Eight centres randomized participants to one of three treatment options, and three centres offered only UGFS and surgery. Participants were randomized between the treatments with even allocation, using a minimization algorithm that included centre, age (less than 50 years, 50 years or more), sex, great saphenous vein (GSV) or small saphenous vein (SSV) reflux, and unilateral or bilateral disease. Inclusion criteria were: age over 18 years; primary unilateral or bilateral symptomatic varicose veins (Clinical Etiologic Anatomic Pathophysiological (CEAP) grade C2 or above); GSV and/or SSV involvement; and reflux exceeding 1 s on duplex ultrasonography. Exclusion criteria were: current deep vein thrombosis; acute superficial vein thrombosis; a GSV or SSV diameter smaller than 3 mm or larger than 15 mm; tortuous veins considered unsuitable for EVLA or stripping; and contraindications to UGFS or to general/regional anaesthesia that would be required for surgery.\n\nBODY.METHODS.TREATMENTS:\nThe treatments have been described in detail elsewhere9, 10. For UGFS, foam was produced using the Tessari technique12 using a ratio of 0·5 ml sodium tetradecyl sulphate to 1·5 ml air (3 per cent for GSV/SSV truncal veins, 1 per cent for varicosities; maximum 12 ml foam per session). EVLA of GSVs/SSVs was performed under local anaesthetic, and patients were offered UGFS to any residual varicosities at 6‐week follow‐up if required, with the exception of one centre that performed concurrent phlebectomies. Surgery in the form of proximal GSV/SSV ligation and stripping (all GSV) and concurrent phlebectomies was performed under general or regional anaesthetic as a day‐case procedure. Compression stockings were applied after all three treatments.\n\nBODY.METHODS.POST‐TREATMENT ACTIVITY:\nAll participants were given a study patient information leaflet (PIL), which recommended a return to all normal activities as soon as they were able, but that strenuous activity/contact sport should be avoided for 1–2 weeks. The PIL specifically stated that following EVLA or UGFS 'most people are able to return to work within 2–3 days of treatment, but some people go back the following day or even the same day', and that following surgery 'people can return to office or sedentary work after 2–3 days; and that most people will be back at work within a week after surgery to one leg and 2 weeks after surgery to both legs; but there is no reason to remain off work as long if it can be managed with reasonable comfort'. 
Participants undergoing UGFS or EVLA were advised to wear compression stocking for 10 days constantly (day and night). Those in the surgery group were advised that bandages would be removed the day after operation, following which they should wear a stocking for 10 days, but that it was reasonable to remove the stocking after 4 or 5 days, providing that they were active.\n\nBODY.METHODS.DATA COLLECTION:\nThe participants were asked to complete the BRAVVO questionnaire along with other study questionnaires (Aberdeen Varicose Vein Questionnaire, EQ‐5DTM (EuroQoL, Rotterdam, The Netherlands) and Short Form 36 (QualityMetric, Lincoln, Rhode Island, USA)) at the 6‐week follow‐up appointment. Participants who failed to attend the 6‐week appointment were sent the questionnaire to complete at home. The BRAVVO questionnaire was developed as an instrument to assess the activity and participation components of the World Health Organization International Classification of Disability and Function model13. Variation in activity and participation is not fully explained by impairment and so these constructs are important additional indicators of health outcome. An interview study involving 17 patients who had recently undergone varicose vein treatment was carried out to identify normal activities and 'milestone' behaviours to incorporate into the questionnaire. In addition to sampling from the three treatment options, diversity sampling was used in an attempt to gain a mix of sex, age and rural–urban location. Seventeen interview transcripts were content‐analysed in four stages to identify appropriate items to include in a questionnaire. Full details of this process have been published previously9. The BRAVVO questionnaire assesses the time taken for patients to return to performing 15 behaviours: eight 'activity' behaviours (tasks or actions an individual is capable of doing in an idealized situation) and seven 'participation' behaviours (what the individual does in an everyday, real‐world situation) that were identified to be important from the patient's perspective9. Fig. 1 shows the question layout. Figure 1Question layoutBJS-10081-FIG-0001-c\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nData from the BRAVVO questionnaire were analysed within an interval‐censored time‐to‐event framework using flexible parametric survival models14. For each behaviour item, each participant's response was converted into the number of days to return to that behaviour. If a participant indicated that return to the behaviour was on the day of the procedure, this was assumed to be interval‐censored between day 0 and day 1. If a participant indicated return to the behaviour was after a number of weeks, not days, this was assumed to be interval‐censored between the previous week and the week indicated. For example, if a participant reported 5 weeks, it was assumed that the return to the behaviour took place between 28 and 35 days. A participant who indicated that they had not returned to a behaviour that they usually performed was right‐censored at 42 days. Participants who indicated that they did not normally perform a specific behaviour were not included in analysis of that behaviour. No missing data were imputed. Data are reported as the number of days for 50 and 90 per cent of participants to return to each behaviour, estimated from the parametric survival models (the 50 per cent value represents the median time to return to this behaviour). 
Extrapolation beyond the 42‐day cut‐off was performed for behaviours where 90 per cent of participants had not returned to the behaviour by 42 days. Treatment effects are presented as hazard ratios with associated 95 per cent c.i. All analyses were carried out in Stata® 1215. Flexible parametric survival models were fitted using the stpm package16.\n\nBODY.RESULTS:\nSeven hundred and ninety‐eight participants were recruited, of whom 13 were ineligible (for example because they had recurrent veins or veins larger than 15 mm in diameter) after randomization and were considered postrandomization exclusions (Fig. \n2). The groups were well balanced in terms of demographic characteristics at baseline, but there was an increased incidence of deep venous reflux in the foam group compared with the surgery group (P = 0·005) (Table \n1). Of the 670 participants who completed the 6‐week questionnaire, 655 completed at least one of the BRAVVO questions. Completion rates were slightly lower for the questions about going out socially (74·8 per cent) and sporting activity (66·0 per cent), which may not have been relevant to all participants. For all behaviours, except wearing clothes that show the leg, going out socially and sporting activities, over 95 per cent of participants had returned to normal behaviour within 6 weeks of intervention. Figure 2CONSORT diagram for the trial. Reasons for postrandomization exclusion included: recurrent varicose veins and veins larger than 15 mm. Reasons for withdrawal from follow‐up included: patient decided not to proceed with treatment (and also declined follow‐up), declined follow‐up after treatment or did not wish to complete questionnaires. UGFS, ultrasound‐guided foam sclerotherapy; EVLA, endovenous laser ablationBJS-10081-FIG-0002-c Table 1 Demographic details at recruitment EVLA UGFS Surgery ( n = 210) ( n = 286) ( n = 289) Age (years) * \n 49·7 (18–80) 49·0 (19–78) 49·2 (22–85) Sex ratio (F : M) 120 : 90 162 : 124 163 : 126 Body mass index (kg/m 2 ) * \n 27·0 (17–42) 27·1 (17–44) 27·7 (17–44) Unilateral disease 153 (72·9) 215 (75·2) 196 (67·8) Employment status Self‐employed 21 (10·2) 37 (13·0) 29 (10·3) Employed 120 (58·3) 169 (59·3) 179 (63·5) Other 65 (31·6) 79 (27·7) 74 (26·2) Unknown 4 1 7 Saphenous vein involvement Great saphenous 182 (86·7) 232 (81·1) 239 (82·7) Small saphenous 14 (6·7) 21 (7·3) 21 (7·3) Great and small saphenous 14 (6·7) 33 (11·5) 29 (10·0) Deep vein reflux 28 of 205 (13·7) 47 of 280 (16·8) 25 of 282 (8·9) CEAP classification C2, varicose veins over 3 mm 113 (54·1) 169 (59·1) 147 (51·2) C3, oedema 28 (13·4) 35 (12·2) 39 (13·6) C4, skin/subcutaneous changes 56 (26·8) 74 (25·9) 90 (31·4) C5/C6, healed/active venous ulcer 12 (5·7) 8 (2·8) 11 (3·8) Unknown 1 0 2 Values in parentheses are percentages unless indicated otherwise; * values are mean (range). EVLA, endovenous laser ablation; UGFS, ultrasound‐guided foam sclerotherapy; CEAP, Clinical Etiologic Anatomic Pathophysiologic. \n\nBODY.RESULTS.ULTRASOUND‐GUIDED FOAM SCLEROTHERAPY :\nParticipants randomized to UGFS recalled being able to carry out 13 of the 15 behaviours significantly more quickly than those randomized to surgery (Table \n2\n). The two behaviours for which there was no evidence of a difference in the time to recover between the trial arms were 'having a bath or shower' and 'wearing clothes that show the legs'. In general, the median time to return to the activity behaviours was 5 days or less for those randomized to UGFS and up to 9 days for those randomized to surgery. 
In both groups, there was greater variation in the median time to return to the participation behaviours than the activity behaviours. Table 2 Behavioural recovery: ultrasound‐guided foam sclerotherapy versus surgery Proportion carrying out behaviour (%) Time until specified proportion of participants could carry out behaviour (days) * \n Hazard ratio † \n UGFS Surgery Activity items Bending the legs without discomfort 50 3·0 4·6 1·38 (1·14, 1·67) 90 14·1 21·3 Lifting heavy objects without discomfort 50 4·8 9·8 1·97 (1·59, 2·44) 90 16·9 34·5 Moving from standing to sitting without discomfort 50 1·9 3·7 1·63 (1·35, 1·97) 90 9·3 17·5 Standing still for a long time (> 15 min ) without discomfort 50 3·9 7·1 1·67 (1·36, 2·05) 90 15·8 28·7 Walking short distances (< 20 min ) without discomfort 50 1·9 4·4 2·00 (1·65, 2·42) 90 8·2 19·1 Walking long distances (> 20 min) 50 4·5 8·0 1·76 (1·45, 2·14) 90 15·2 27·1 Having a bath or shower 50 5·4 4·9 0·85 (0·70, 1·03) 90 11·4 10·3 Driving a car 50 4·1 7·0 1·78 (1·45, 2·19) 90 12·4 21·1 Participation items Doing housework 50 2·1 4·5 2·10 (1·72, 2·56) 90 7·3 15·7 Looking after children 50 1·2 3·5 2·20 (1·61, 3·00) 90 6·2 17·9 Wearing clothes that show the legs 50 12·4 12·8 1·03 (0·78, 1·35) 90 56·6 58·7 Partial return to normal work/employment 50 4·4 9·9 2·16 (1·72, 2·72) 90 15·4 34·2 Full return to normal work/employment 50 4·8 11·7 2·56 (2·05, 3·21) 90 14·9 36·2 Going out socially 50 7·1 9·3 1·29 (1·06, 1·57) 90 25·8 34·0 Sporting activity or exercise 50 15·7 21·8 1·33 (1·05, 1·68) 90 62·6 86·7 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the surgery arm. UGFS, ultrasound‐guided foam sclerotherapy. \n\nBODY.RESULTS.ENDOVENOUS LASER ABLATION :\nParticipants randomized to EVLA recalled being able to carry out 13 of the 15 behaviours significantly more quickly than those randomized to surgery (Table \n3). Return to 'having a bath or shower' was quicker after surgery than after EVLA. There was no difference in time to return to the participation behaviour of 'wearing clothes that show the legs'. 
Table 3 Behavioural recovery: endovenous laser ablation versus surgery Proportion carrying out behaviour (%) Time until specified proportion of participants could carry out behaviour (days) * \n Hazard ratio † \n EVLA Surgery Activity items Bending the legs without discomfort 50 2·7 4·6 1·49 (1·19, 1·75) 90 12·6 21·3 Lifting heavy objects without discomfort 50 5·9 9·8 1·79 (1·39, 2·27) 90 20·5 34·5 Moving from standing to sitting without discomfort 50 2·2 3·7 1·56 (1·27, 1·96) 90 10·4 17·5 Standing still for a long time (> 15 min) without discomfort 50 4·8 7·1 1·41 (1·11, 1·79) 90 20·0 28·7 Walking short distances (< 20 min) without discomfort 50 3·0 4·4 1·30 (1·04, 1·61) 90 13·2 19·1 Walking long distances (> 20 min) 50 5·6 8·0 1·53 (1·06, 1·67) 90 19·8 27·1 Having a bath or shower 50 5·5 4·9 0·74 (0·59, 0·93) 90 12·8 10·3 Driving a car 50 4·4 7·0 1·82 (1·43, 2·33) 90 12·7 21·1 Participation items Doing housework 50 2·5 4·5 1·89 (1·49, 2·38) 90 8·4 15·7 Looking after children 50 1·9 3·5 1·61 (1·15, 2·27) 90 8·8 17·9 Wearing clothes that show the legs 50 14·6 12·8 0·97 (0·69, 1·35) 90 75·1 58·7 Partial return to normal work/employment 50 6·3 9·9 1·75 (1·33, 2·27) 90 21·1 34·2 Full return to normal work/employment 50 7·7 11·7 1·79 (1·37, 2·27) 90 23·5 36·2 Going out socially 50 6·9 9·3 1·41 (1·12, 1·75) 90 23·9 34·0 Sporting activity or exercise 50 14·2 21·8 1·47 (1·12, 1·92) 90 55·5 86·7 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the surgery arm. EVLA, endovenous laser ablation. There were no differences in the time taken to return to 11 of the 15 behaviours between participants randomized to EVLA and those randomized to UGFS (Table \n4). Return to 'walking short distances without discomfort', 'walking long distances', 'looking after children' and 'full return to normal work/employment' took longer for the EVLA group than the UGFS group. Following UGFS or EVLA only one‐third of the specific behaviours could be carried out by 50 per cent of participants by 3 days after treatment. 
Table 4 Behavioural recovery: endovenous laser ablation versus ultrasound‐guided foam sclerotherapy Proportion carrying out behaviour (%) Time until specified proportion of participants can carry out behaviour (days) * \n Hazard ratio † \n EVLA UGFS Activity items Bending the legs without discomfort 50 2·7 3·0 0·94 (0·75, 1·17) 90 12·6 14·1 Lifting heavy objects without discomfort 50 5·9 4·8 1·11 (0·87, 1·42) 90 20·5 16·9 Moving from standing to sitting without discomfort 50 2·2 1·9 1·12 (0·90, 1·40) 90 10·4 9·3 Standing still for a long time (> 15 min) without discomfort 50 4·8 3·9 1·14 (0·90, 1·44) 90 20·0 15·8 Walking short distances (< 20 min) without discomfort 50 3·0 1·9 1·48 (1·19, 1·84) 90 13·2 8·2 Walking long distances (> 20 min) 50 5·6 4·5 1·32 (1·05, 1·66) 90 19·8 15·2 Having a bath or shower 50 5·5 5·4 1·19 (0·96, 1·48) 90 12·8 11·4 Driving a car 50 4·4 4·1 0·95 (0·74, 1·21) 90 12·7 12·4 Participation items Doing housework 50 2·5 2·1 1·03 (0·82, 1·29) 90 8·4 7·3 Looking after children 50 1·9 1·2 1·45 (1·04, 2·02) 90 8·8 6·2 Wearing clothes that show the legs 50 14·6 12·4 1·17 (0·83, 1·64) 90 75·1 56·6 Partial return to normal work/employment 50 6·3 4·4 1·17 (0·89, 1·52) 90 21·1 15·4 Full return to normal work/employment 50 7·7 4·8 1·43 (1·11, 1·85) 90 23·5 14·9 Going out socially 50 6·9 7·1 0·88 (0·70, 1·10) 90 23·9 25·8 Sporting activity or exercise 50 14·2 15·7 0·80 (0·61, 1·04) 90 55·5 62·6 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the endovenous laser ablation (EVLA) arm. UGFS, ultrasound‐guided foam sclerotherapy. \n\nBODY.DISCUSSION:\nThis study showed that both UGFS and EVLA resulted in a more rapid recovery compared with surgery for 13 of the 15 behaviours. UGFS was superior to EVLA in terms of return to full time work, looking after children and walking (both short and long distances). Importantly, the specific behaviours assessed were shown to have a range of different recovery trajectories. Previous randomized clinical trials showed behavioural recovery to be more rapid following UGFS compared with surgery1, 2, but the benefit of EVLA over surgery was less clear2, 4, 8 . In this study, for all but two behaviours (wearing clothes that showed the legs and showering/bathing) the recovery was quicker following UGFS or EVLA compared with surgery. These findings may have arisen as a result of information contained in the study PIL, which recommended that compression hosiery was worn continuously for 10 days following UGFS or EVLA but for 4–5 days routinely after surgery. In the comparison between UGFS and EVLA, behavioural recovery was faster following UGFS for four of the 15 behaviours; there was no difference between the groups for the other behaviours. Two previous randomized trials2, 3 showed earlier return to 'normal activities' in patients undergoing UGFS compared with EVLA. Specifically, the present study showed a quicker return to full‐time work following UGFS, similar to the findings of Rasmussen and colleagues2. The median time taken to return to work following EVLA (7·7 days) was within the ranges reported2, 4, 5, 6, 7, 8. However, Rasmussen and colleagues2 reported earlier return to work after UGFS compared with the present study (median 2·9 versus 4·8 days respectively). 
A partial explanation of the difference between the two studies may be that, unlike the previous study, the present analysis did not correct for weekends. For other behaviours, the recalled recovery times following both UGFS and EVLA were longer than might be expected from the literature2, 3, 4, 5, 8. This may be explained by the timing of the questionnaire at 6 weeks, and thus it is the nature of the differences between treatment groups rather than the absolute timings taken to return to these activities that the authors wish to highlight in this paper. The extent of this overall delay in recovery is hard to justify, particularly in light of the standard information and advice given in the study PIL. There may have been a number of external influences affecting participants' recollection of their recovery, including misinformation and fear. Although attitudes to recovery and return to normal behaviours have changed in secondary care, this may not have filtered into primary care or 'public knowledge'. Fear of activity or fear of pain caused by activity has been documented following surgery for other conditions17, 18. It is possible that some people undergoing treatment for varicose veins experience similar fears, and this may limit or restrict their activity following treatment. With regard to return to work, there are clearly a number of additional factors that might play a role, such as a person's employment status (employed or self‐employed), the sickness benefits they are entitled to, the type of work they are employed to do, how long they are 'signed off' by the doctor, and the views of their employer on return to work after an operation. It should be noted that this study distinguished between partial and full return to work, and that no difference was noted in partial return to work following UGFS and EVLA. This finding may be of substantial importance to patients, their employers and the economy as a whole. The main strength of this study is that the behaviours investigated were based on systematic investigation of the recovery milestones that are important to patients following treatment for varicose veins. Hence, the findings are of personal importance from a patient perspective. Distinguishing between the behaviours that contribute to 'normal activity' helps build a profile of recovery that may be particularly useful for patients preparing for, or recovering from, treatment. Furthermore, the methodology used to develop the BRAVVO questionnaire could be used in other conditions to provide normative information about behavioural recovery that is relevant to patients. The BRAVVO questionnaire was pilot tested and found to be acceptable to patients, comprehensible and appropriate for self‐completion. Despite this, a potential weakness of the study is that the level of missing data in the BRAVVO questionnaire was higher for two of the questions. Further work to reformat or rephrase the questions or response options may help minimize levels of missing data. A further potential weakness is the choice of assessment time point (6 weeks after treatment). This may have compromised recall, particularly for behaviours that participants were able to return to a short time after treatment; however, any compromise in recall is likely to have affected the three treatment groups equally. Other study outcomes were assessed at 6 weeks, and behavioural recovery was assessed at the same time point to minimize participant burden. 
Further work is required to determine the optimal timing(s) of this questionnaire. Given that the median time to return to the behaviour was less than 14 days for 13 of the behaviours, and up to 22 days for the other two (wearing clothes that show the legs, sporting activity or exercise), the use of the questionnaire at approximately 2–3 weeks would seem appropriate.\n\n**Question:** Compared to surgery what was the result of Ultrasound‐guided foam sclerotherapy on Walking long distances (> 20 min)?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
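The row above asks whether a behaviour differed significantly between treatment arms, and in the trial tables quoted in that row this comes down to whether the hazard ratio's 95 per cent confidence interval excludes 1·00. The following is a minimal illustrative sketch only; the function name and return strings are mine, not the trial's or the dataset's.

```python
from typing import Tuple

def ci_excludes_one(ci: Tuple[float, float]) -> str:
    """Classify a hazard ratio's 95% confidence interval relative to the null value 1.00."""
    lower, upper = ci
    if lower > 1.0:
        return "significant (HR above 1)"
    if upper < 1.0:
        return "significant (HR below 1)"
    return "not significant"

# Walking long distances (> 20 min), UGFS versus surgery: HR 1.76 (1.45, 2.14)
print(ci_excludes_one((1.45, 2.14)))   # significant (HR above 1)
# Having a bath or shower, UGFS versus surgery: HR 0.85 (0.70, 1.03)
print(ci_excludes_one((0.70, 1.03)))   # not significant
```

Mapping the direction of a significant ratio onto the dataset's "significantly increase"/"significantly decrease" wording depends on which arm is the reference in each table, so that step is deliberately left out of the sketch.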
959
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: A randomized trial of a lifestyle intervention in obese endometrial cancer survivors: quality of life outcomes and mediators of behavior change\n\n ABSTRACT.BACKGROUND:\nTo examine the effects of a 6 month lifestyle intervention on quality of life, depression, self-efficacy and eating behavior changes in overweight and obese endometrial cancer survivors.\n\nABSTRACT.METHODS:\nEarly stage endometrial cancer survivors were randomized to intervention (n = 23) or usual care (n = 22) groups. Chi-square, Student's t-test and repeated measures analysis of variance were used in intent-to-treat analyses. Outcomes were also examined according to weight loss.\n\nABSTRACT.RESULTS:\nMorbidly obese patients had significantly lower self-efficacy, specifically when feeling physical discomfort. There was a significant improvement for self-efficacy related to social pressure (p = .03) and restraint (p = .02) in the LI group. There was a significant difference for emotional well-being quality of life (p = .02), self-efficacy related to negative emotions (p < .01), food availability (p = .03), and physical discomfort (p = .01) in women who lost weight as compared to women who gained weight. Improvement in restraint was also reported in women who lost weight (p < .01).\n\nABSTRACT.CONCLUSION:\nThis pilot lifestyle intervention had no effect on quality of life or depression but did improve self-efficacy and some eating behaviors.\n\nABSTRACT.TRIAL REGISTRATION:\n; NCT00420979\n\nBODY.BACKGROUND:\nEndometrial cancer is the most common gynecologic cancer in the United States and obesity is the most significant risk factor for development of the disease [1]. A recent prospective study reported that 68% of women with early endometrial cancer were obese which is markedly increased compared to older reports [2-5]. Adding to this escalation is the increased rate of obesity in the female population (18.1%) [6]. When assessing obesity associated cancers, it appears that endometrial cancer patients are the most morbid as most have stage I disease yet are at significant risk for premature death [7]. While the impact of weight on cancer recurrence does not appear to be a factor in endometrial cancer, obese endometrial cancer survivors have a higher mortality from causes not related to cancer [8]. Endometrial cancer survivors have numerous co-morbidities related to their obesity which include hypertension (HTN), diabetes mellitus (DM), cardiovascular disease (CVD), osteoarthritis (OA) and pulmonary disease [5]. Improving medical co-morbidities through weight management in survivors may lead to improved overall quality of life and survival. Differences in quality of life (QOL) between obese and non-obese endometrial cancer survivors are related to physical health. 
In a prospective examination of QOL, general health status, and obesity in women with early stage endometrial cancer, [9] women with a body mass index (BMI) less than 30 (non-obese) had increased physical QOL scores. There was also an inverse relationship between the global physical health composite score and BMI; with higher BMI associated with a declining physical QOL score. A recent cross-sectional survey of Canadian endometrial cancer survivors revealed that those patients with morbid obesity had a QOL score three times lower than women with a normal BMI [10]. This suggests that if weight is decreased, survivors QOL may be improved. What is unknown is the effect a survivorship lifestyle intervention trial may have on QOL, psychological health and eating behavior. We have previously reported the effects of a six-month lifestyle intervention on weight loss, exercise behavior, and nutrient intake changes in overweight and obese endometrial cancer survivors [11]. We found that a lifestyle intervention program in obese endometrial cancer survivors is feasible and efficacious – resulting in sustained behavior change and weight loss over a one year period. The present query was conducted in order to examine the intervention's effect on QOL outcomes, depression, self-efficacy and eating behavior, possible mediators of behavior change. We hypothesized that a lifestyle intervention program would improve these outcomes in obese endometrial cancer survivors. We also conducted an exploratory analysis comparing women who lost weight with women who gained weight.\n\nBODY.METHODS.STUDY DESIGN & PATIENT RECRUITMENT:\nThe study was a prospective randomized controlled trial comparing a lifestyle intervention (LI) consisting of exercise and nutritional counseling with cognitive-behavior modification to usual care (UC) in overweight and obese endometrial cancer survivors. An invitation letter was sent to women included in the cancer registry from the Ireland Cancer Center at University Hospitals Case Medical Center diagnosed with stage I-II endometrial cancer from 2001–2004. The 6 month intervention was delivered to the LI group while the UC group received only an informational brochure outlining the benefits of proper nutrition and physical activity. Institutional review board approval was granted and informed consent and authorization to use protected health information (HIPAA) was obtained from all patients prior to study procedures. Feasibility, eligibility criteria, and details of the intervention have been published elsewhere [11]. Randomization assignment was stratified by BMI (<40 versus ≥ 40). Prior analysis revealed that demographics and clinical characteristics were equivalent between groups [11].\n\nBODY.METHODS.INTERVENTION PROGRAM:\nThe counseling protocol was administered by the study registered dietitian (RD), primary investigator (PI), and psychologist. It followed a stepwise, phased approach using strategies outlined by social cognitive theory, indicating that the optimal intervention for a major behavior change should focus on establishing short-term goals, and enabling the person to build self-efficacy [12-16]. The intervention covered topics related to nutrition and exercise and met weekly for six weeks, biweekly for one month, and monthly for 3 months. Participants were contacted by the RD either by phone, email, or newsletter every week that the group did not meet. 
The psychologist met with the group during 2 of the 11 sessions and topics included cognitive and behavioral self-management strategies for weight loss and stress management. The PI met with both LI and UC participants at 3, 6 and 12 months.\n\nBODY.METHODS.MEASURES:\nPatient demographic and clinical data was obtained at baseline and prior to randomization. QOL and self-efficacy were assessed at baseline and at 3, 6, and 12 months. Eating behavior and depression were assessed only at baseline and 12 months. QOL was measured by the Functional Assessment of Cancer Therapy-General (FACT-G), a valid and reliable questionnaire evaluating physical, functional, family-social, and emotional well-being domains [17,18]. A fatigue subscale (-F) and an endometrial symptom subscale (-En) were also used [19,20]. Functional status was measured with the SF-36, a comprehensive short-form survey designed to measure functional health and well-being [21]. Depression was measured by the Beck Depression Inventory (BDI), which is well-validated and frequently used in lifestyle research studies [22]. Self-efficacy, an individual's judgement regarding his/her ability to perform certain behaviors, was measured with the Weight Efficacy Life-Style (WEL) questionnaire. This scale specifically evaluates self-efficacy judgments specific to eating behaviors in five situational factors: negative emotions, food availability, social pressure, physical discomfort, and positive activities [23,24]. This measure has evidenced adequate psychometric properties, including internal consistency coefficients ranging from 0.76 to 0.90 [23]. Eating behavior was measured by the Three-Factor Eating Questionnaire (TFEQ), a self-assessment questionnaire developed to measure cognitive and behavioral components of eating [25,26]. Responses are aggregated into three scales, cognitive restraint, disinhibition and hunger. Cognitive restraint is designed to measure dietary restraint, that is, control over food intake in order to influence body weight and body shape. Disinhibition measures episodes of loss of control over eating, while the hunger scale is concerned with subjective feelings of hunger and food cravings. Higher numbers reflect increased restraint, decreased tendency to overeat in the presence of disinhibitors (i.e. stress, mood, alcohol) and decreased perception of hunger. All three subscales have demonstrated high internal consistency and reliability [27,28]. Patient anthropometric data including weight, waist circumference, and BMI were measured at all time points manually by the study dietitian. BMI was computed as patient weight in kilograms divided by the square of their height in meters. Participants were categorized according to World Health Organization guidelines: < 18.5 (underweight), 18.5 to 24.9 (healthy weight), 25.0 to 29.9 (overweight), Class I – 30.0 to 34.9, Class II – 35.0–39.9, and Class III or morbid obesity ≥ 40.0 kg/m2 [29].\n\nBODY.METHODS.STATISTICAL ANALYSES:\nPatient demographic, clinical variables and baseline values were compared between groups and by BMI (25.0 – 39.9 versus ≥ 40) by use of independent samples t-test or chi-square test for proportions. 
Primary analysis used repeated measures analysis of variance (ANOVA) with the 3, 6 and 12 month data as outcomes and the appropriate baseline measurement as a covariate to test for the main effect of group (LI versus UC, intention-to-treat analyses) on QOL outcome measures (FACT-G; physical, functional, social, emotional well-being, fatigue and endometrial subscales) and self-efficacy (WEL; negative emotions, food availability, social pressure, positive activities, physical discomfort). Change (from baseline to 12 months) in eating behavior (TFEQ) and depression measures were compared using paired samples t-test for each group and independent samples t-test for the difference in change between groups. Missing data for participants who did not complete all study assessments was handled according to the last and next method or previous row mean method as recommended by Engels et al [30]. Imputation was done on between 15–19% of values for the various QOL and eating behavior measures. The percentage of missing data for these measures was less than that imputed for weight values as some patients opted to mail these surveys back as data were self-report. Data were also analyzed using last observation carried forward and completers only approaches. There were no substantive differences among the three approaches and thus we present results of the first approach only. QOL outcomes were also examined according to whether patients lost weight or their weight remained stable/gained over the course of the year as an ancillary stratified analyses. We were interested in examining if QOL outcomes differed between these two groups as there were some women in the UC group who lost weight and likewise women in the LI group who did not lose weight [31]. SPSS for Windows (version 14.0) was used for statistical analyses (SPSS Inc., Chicago, IL). Approximately 25 patients per group were needed to provide 80% power to detect a difference between groups in mean weight change from baseline to twelve months of 5 kg (11 pounds) or greater, representing approximately 5% weight loss for an obese female (alpha = 0.05, two-sided, SD = 5.0) [32]. Five percent weight change is considered clinically relevant and a recommended goal for weight loss over 6 months [33,34]. Weight change was the primary endpoint of this feasibility trial and the study was powered based on this endpoint. In addition, 25 patients/group were needed to detect a large standardized effect size (d = 0.80) with power of 0.80 and a two-tailed alpha of 0.05. A large standardized effect size on the FACT-G is approximately 10 points based on a standard deviation of 12 and exceeds the 7 point MID identified for this scale [18,35].\n\nBODY.RESULTS:\nForty-five patients were enrolled; 23 were randomized to the LI group and 22 to the UC group. In summary, most patients were Caucasian with an average age of 55 years. As reported previously, at 12 months, the LI group lost 3.5 kg compared to a 1.4 kg gain in the UC group (p = .02) and increased their exercise by 16.4 metabolic equivalents (METS) compared to a decrease of 1.3 METS in the UC group (p < .001) [11]. Average time since diagnosis was 2 years and mean BMI was 42.3 kg/m2 [11]. Fifty-three percent (24/45) of participants reported being very overweight during the previous ten years and 15/45 (33%) reported being moderately overweight. 
Patients with a BMI > 40 had increased abdominal obesity with a mean (SD) waist circumference of 125.6 (15.9) cm as compared to those with a BMI < 40 (mean 98.5 cm (8.2), p < 0.01). Waist/hip ratio did not differ between morbidly obese and BMI < 40 patients (mean 0.84 (SD = .07). Co-morbidities related to obesity (hypertension, diabetes, CVD, arthritis and metabolic syndrome) were common in these patients [11]. Five patients had prior bariatric surgery prior to enrollment (LI; 4, UC: 1) with surgeries performed 3–6 years before the study began.\n\nBODY.RESULTS.BASELINE DIFFERENCES BASED ON BODY MASS INDEX:\nBaseline QOL (FACT-G, SF-36), BDI, WEL and TFEQ scores did not differ between the LI and UC groups. However significant differences were observed when patients were categorized according to BMI (Table 1). Total QOL (FACT-G) was not significantly lower in morbidly obese women [BMI ≥ 40: 78.0 (SD = 14.4) vs. BMI < 40: 83.7 (SD = 11.4); p = 0.14], however the difference was consistent with minimally important differences (5–6 points) [18]. SF-36 physical composite score was decreased in morbidly obese women (p = 0.04). At baseline 7/45 (15%) patients reported mild depressive symptoms, and 4/45 (9%) reported moderate depressive symptomatology. Beck depression score did not differ according to BMI (Table 1). Table 1 Quality of life, depression, self-efficacy and eating inventory scores at baseline (n = 45) Variables Overall (n = 45) Body mass index < 40 (n = 23) Body mass index ≥ 40 (n = 22) p value * Quality of Life FACT-G (0–108) 80.1 (12.1) 83.7 (9.6) 78.0 (14.1) 0.14 Physical well-being (0–28) 23.9 (3.2) 24.8 (3.2) 23.2 (3.3) 0.12 Social well-being (0–24) 16.5 (3.1) 16.6 (2.6) 16.5 (3.2) 0.90 Emotional well-being (0–24) 18.6 (4.1) 19.4 (3.9) 17.9 (4.4) 0.27 Functional well-being (0–28) 21.2 (5.4) 22.9 (4.0) 20.4 (5.4) 0.10 Endometrial (0–64) 53.5 (5.8) 52.8 (5.0) 54.6 (6.6) 0.32 Fatigue (0–52) 37.9 (9.7) 40.2 (9.7) 36.2 (9.8) 0.19 Physical composite score, SF36 43.9 (10.7) 47.1 (8.7) 40.6 (11.7) 0.04 Mental composite score, SF36 50.0 (10.9) 50.6 (9.8) 49.3 (12.2) 0.70 Beck depression inventory (0–63) 10.3 (7.9) 8.8 (6.5) 11.4 (9.0) 0.29 Weight self-efficacy Negative Emotions (0–36) 18.8 (10.8) 20.4 (9.7) 15.5 (10.4) 0.12 Food Availability (0–36) 18.3 (9.3) 20.0 (8.8) 15.4 (8.9) 0.09 Social Pressure (0–36) 24.2 (8.7) 25.0 (7.2) 22.3 (9.7) 0.31 Physical Discomfort (0–36) 26.5 (8.3) 28.8 (5.8) 23.4 (9.4) 0.03 Positive Activities (0–36) 24.3 (7.3) 25.5 (5.0) 22.3 (8.5) 0.15 Total (0–180) 112.1 (36.8) 119.6 (26.7) 98.8 (39.0) 0.05 Three-factor eating questionnaire Restraint (0–21) 10.3 (4.9) 12.8 (4.4) 7.5 (3.8) < 0.01 Disinhibition (0–16) 7.9 (3.6) 7.3 (3.8) 9.2 (3.0) 0.10 Perceived Hunger (0–14) 5.6 (3.4) 5.6 (3.4) 6.3 (3.6) 0.50 * Independent samples Student's t-test. Data are presented as the mean (standard deviation) FACT-G, Functional Assessment of Cancer Therapy-General; SF-36, Short-form Medical Outcomes Survey In regards to eating patterns, morbidly obese patients had significantly lower self-efficacy when feeling physical discomfort and decreased total self-efficacy (WEL) score. Restraint on the TFEQ questionnaire was decreased in patients with BMI > 40 as compared to BMI < 40.\n\nBODY.RESULTS.EFFECTS OF THE LIFESTYLE INTERVENTION (INTENTION-TO-TREAT ANALYSIS):\nRepeated measures ANOVA with baseline measurements as a covariate and 3, 6 and 12 months as outcomes revealed no group (LI versus UC) effects for QOL outcomes (Table 2). 
There was a significant group effect for self-efficacy related to social pressure and restraint improved in the LI group. The UC group had a significant change in depression from baseline. Change scores for depression or TFEQ, compared by independent samples t-test, however did not differ by group. Table 2 Effects of the lifestyle intervention on QOL and clinical outcomes (intention-to-treat analysis) Variables Baseline 3 mos 6 mos 12 mos Group effect P* Primary Quality of life outcome FACT-G (0–108) 0.37 Lifestyle intervention 80.6 (12.7) 81.1 (14.0) 82.4 (14.5) 80.1 (14.2) Usual care 79.5 (13.1) 80.1 (15.5) 81.8 (13.2) 82.2 (14.9) Secondary QOL outcomes Physical well-being (0–28) 0.84 LI 23.9 (3.4) 23.6 (4.3) 24.0 (4.4) 23.3 (4.3) UC 23.9 (3.0) 23.3 (4.5) 24.2 (4.1) 23.7 (4.1) Functional well-being (0–28) LI 21.3 (5.2) 22.0 (5.6) 22.0 (5.7) 21.2 (6.2) 0.44 UC 21.0 (5.7) 22.0 95.6) 22.0 (5.0) 22.0 (6.1) Emotional well-being (0–24) LI 18.3 (4.4) 19.1 (4.5) 19.3 (4.5) 18.8 (4.5) 0.98 UC 18.9 (3.9) 19.2 (4.0) 19.5 (4.1) 20.0 (3.7) Social/family well-being (0–28) LI 17.2 (2.7) 16.6 (3.0) 17.2 (2.7) 16.9 (2.6) 0.17 UC 15.7 (3.3) 15.6 (3.4) 16.3 (3.0) 16.6 (3.4) Fatigue (0–52) LI 37.5 (10.3) 38.5 (11.3) 39.2 (11.6) 38.5 (11.1) 0.99 UC 38.2 (9.2) 39.0 (10.4) 39.2 (11.5) 40.0 (9.3) Endometrial subscale (0–64) LI 54.4 (5.4) 54.9 (6.7) 55.5 (6.8) 55.0 (6.7) 0.46 UC 52.3 (6.2) 51.9 (6.9) 52.6 (6.9) 53.6 (6.2) Weight self-efficacy Negative emotion (0–36) 0.87 LI 17.0 (10.4) 21.7 (9.8) 23.1 (9.7) 21.9 (9.1) UC 19.9 (10.8) 24.9 (9.9) 22.8 (10.0) 23.9 (10.9) Food Availability (0–36) 0.49 LI 17.5 (8.5) 23.1 (8.9) 23.2 (9.9) 22.4 (10.1) UC 18.8 (10.3) 23.6 (9.8) 21.0 (9.5) 23.2 (9.9) Social pressure (0–36) 0.03 LI 23.9 (8.1) 28.2 (7.0) 28.4 (7.5) 27.9 (7.9) UC 23.9 (9.3) 26.1 (8.5) 23.3 (8.0) 26.0 (7.3) Positive activities (0–36) LI 23.2 (5.7) 27.5 (4.8) 27.8 (4.2) 27.3 (4.6) 0.10 UC 25.1 (8.5) 27.5 (8.1) 24.9 (7.7) 27.9 (5.0) Physical discomfort (0–36) 0.38 LI 25.8 (8.4) 28.5 (7.1) 29.1 (6.6) 29.7 (6.7) UC 26.8 (8.2) 29.1 (6.5) 27.6 (6.7) 29.0 (6.4) Total (0–180) 0.19 LI 107.4 (31.5) 129.1 (30.4) 131.5 (31.7) 129.3 (31.3) UC 114.5 (40.9) 131.1 (39.4) 119.6 (37.7) 130.0 (35.6) Beck depression inventory LI 9.7 (6.8) - - 8.6 (7.2) 0.16 UC 10.6 (8.9) - - 8.3 (7.4) 0.02 Three-factor eating questionnaire Restraint (0–21) - - LI 10.0 (4.8) - - 11.9 (5.1) 0.02 UC 10.3 (5.2) 11.1 (4.9) 0.24 Disinhibition (0–16) - - LI 8.8 (2.8) - - 8.4 (3.3) 0.44 UC 7.1 (4.1) 7.0 (4.1) 0.68 Hunger (0–14) - - LI 6.2 (3.3) 6.0 (2.7) 0.67 UC 5.1 (3.6) 4.2 (2.6) 0.13 Data are presented as the mean (standard deviation). LI (n = 23), UC (n = 22) * p value for repeated measures ANOVA with baseline measurement as covariate and 3, 6 and 12 months as outcomes or paired samples t-test (eating inventory and Beck depression inventory) Abbreviations: QoL, quality of life; FACT-G, Functional Assessment of Cancer Therapy-General; SF-36, Short-form Medical Outcomes Survey; BDI, Beck Depression Inventory; WEL, weight efficacy lifestyle; TFEQ, three-factor eating questionnaire; BMI, body mass index \n\nBODY.RESULTS.ASSOCIATION BETWEEN WEIGHT LOSS AND OUTCOMES (EXPLORATORY ANALYSIS):\nTable 3 presents an ancillary analysis for QOL, self-efficacy, depression and eating behavior outcomes. For this analysis we compared women who lost weight (WL; n = 21) versus those whose weight was the same or who gained weight (WG; n = 24) from baseline to twelve months. 
There was a significant effect (WL versus WG) for emotional well-being QOL, self-efficacy related to negative emotions, food availability, and physical discomfort as the WL group had higher scores. For self-efficacy related to negative emotions, there was a mean increase of 8.9 in women who lost weight versus 0.6 in those whose weight was stable/gained. Similarly, for self-efficacy related to food availability and physical discomfort, the weight loss group had an increase of 7.9 and 5.3 versus 1.9 and 1.0 in women who did not lose weight. Women who lost weight had improvement in depression (2.3 versus 1.2) and TFEQ restraint score (2.8 versus 0.1). Women who lost weight also had a lower disinhibition score at 12 months. Group differences for change in TFEQ restraint score were observed using independent samples t-test for comparison of change scores between WL and WG (p = .01), but not for depression. Table 3 Effects of weight loss on QOL and clinical outcomes (ancillary analyses) Variables Baseline 3 mos 6 mos 12 mos Group effect P* Primary QoL outcome FACT-G 0.26 WL 81.4 (14.0) 83.5 (15.7) 84.8 (14.7) 82.3 (15.4) WG 78.9 (11.7) 78.1 (13.3) 79.8 (12.6) 80.1 (13.8) Secondary QoL outcomes Physical well-being (0–28) 0.79 WL 24.7 (3.3) 24.5 (4.6) 24.7 (4.7) 24.2 (4.8) WG 23.1 (3.0) 22.5 (4.0) 23.5 (3.7) 22.8 (3.5) Functional well-being (0–28) WL 21.9 (5.6) 22.8 (5.3) 23.3 (5.0) 21.9 (6.1) 0.65 WG 20.5 (5.2) 21.2 (5.8) 20.8 (5.4) 21.2 (6.2) Emotional well-being (0–24) WL 18.2 (4.3) 20.0 (4.1) 20.0 (4.3) 19.5 (4.2) 0.02 WG 18.9 (4.0) 18.4 (4.2) 18.9 (4.2) 19.3 (4.1) Social/family well-being (0–28) WL 16.6 (3.2) 16.2 (3.4) 17.0 (2.8) 16.7 (2.8) 0.98 WG 16.4 (3.0) 16.0 (3.0) 16.5 (2.9) 16.8 (3.2) Fatigue (0–52) WL 39.0 (10.2) 41.6 (10.5) 41.4 (12.1) 42.0 (10.5) 0.07 WG 36.9 (9.3) 36.3 (10.5) 37.3 (10.7) 36.9 (9.5) Endometrial subscale (0–64) WL 54.3 (6.6) 54.3 (7.7) 55.2 (8.1) 55.5 (6.9) 0.55 WG 52.8 (5.1) 52.7 (6.2) 53.1 (5.7) 53.3 (5.9) WEL Negative emotions (0–36) < 0.01 WL 16.7 (10.9) 24.3 (10.5) 25.6 (9.3) 25.6 (9.7) WG 20.0 (10.3) 22.4 (9.5) 20.5 (3.3) 20.6 (9.8) Food Availability (0–36) 0.03 WL 16.5 (10.4) 24.0 (9.6) 23.8 (9.9) 24.4 (10.4) WG 19.5 (8.3) 22.8 (9.0) 20.7 (9.4) 21.4 (9.4) Social pressure (0–36) 0.20 WL 25.2 (9.2) 28.4 (7.5) 28.1 (7.5) 29.1 (7.2) WG 22.7 (8.1) 26.1 (8.0) 23.9 (8.2) 25.2 (7.6) Positive activities (0–36) 0.10 WL 23.0 (7.9) 27.5 (6.2) 27.5 (5.3) 27.8 (5.0) WG 25.1 (6.5) 27.5 (6.9) 25.4 (7.0) 27.4 (4.6) Physical discomfort (0–36) WL 25.5 (9.2) 29.6 (7.0) 30.0 (6.9) 30.8 (7.1) 0.01 WG 27.0 (7.4) 28.1 (6.5) 26.9 (6.2) 28.0 (5.7) Total (0–180) <0.01 WL 106.9 (37.4) 133.8 (36.0) 135.0 (33.8) 137.6 (33.9) WG 114.3 (35.5) 126.9 (33.9) 117.4 (34.4) 122.6 (31.4) Beck depression inventory WL 9.3 (7.0) - - 7.0 (6.5) 0.03 WG 10.9 (8.8) - - 9.7 (7.9) 0.14 TFEQ Restraint (0–21) WL 10.1 (4.5) - - 12.9 (4.2) <0.01 WG 10.2 (5.4) - - 10.3 (5.3) 0.74 Disinhibition (0–16) WL 7.8 (3.0) - - 6.9 (3.3) 0.05 WG 8.1 (4.1) - - 8.4 (4.1) 0.48 Hunger (0–14) WL 6.1 (3.8) - - 5.1 (2.5) 0.21 WG 5.3 (3.1) - - 5.1 (3.1) 0.51 Data are presented as the mean (standard deviation). 
WL, n = 21; WG, n = 24 * p value for repeated measures ANOVA with baseline measurement as covariate and 3, 6 and 12 months as outcomes or paired samples t-test (eating inventory and Beck depression inventory) Abbreviations: QoL, quality of life; FACT-G, Functional Assessment of Cancer Therapy-General; SF-36, Short-form Medical Outcomes Survey; BDI, Beck Depression Inventory; WEL, weight efficacy lifestyle; TFEQ, three-factor eating questionnaire; BMI, body mass index \n\nBODY.DISCUSSION:\nLifestyle interventions are necessary for survivorship in obese endometrial cancer patients as they are at risk for premature death not due to cancer but secondary to poor cardiovascular health. The majority of patients in this study were morbidly obese and had lower physical health-related scores. The lifestyle intervention did not have any effect on global QOL outcomes; however, self-efficacy, emotional well-being and certain eating behaviors improved with weight loss. Although the intervention was efficacious in promoting weight loss, the lack of influence on QOL was contrary to the hypothesis. It may be that a longer intervention, greater weight loss or increased exercise is needed to improve QOL. Functional and psychological-related changes may require a longer term investment in a healthy lifestyle before reaching significance. Social cognitive theory [36] is a well-established explanation of health behavior change, and is frequently utilized in dietary and physical activity lifestyle interventions [37-39]. Social cognitive theory explains health behaviors in terms of reciprocal relationships between behavior, personal factors and environmental influences. The power of circumstance, of being diagnosed with endometrial cancer, can potentially launch new life courses, change a person's perception of their environment, improve health behavior-related expectations and influence reciprocal determinism. Thus, endometrial cancer patients may be amenable to a \"teachable moment.\" Self-efficacy is the social cognitive theory concept that represents one's judgment about her ability to successfully engage in a particular behavior and overcome barriers to achieve change [36]. Toobert et al. performed a lifestyle intervention trial in women with CVD and found that fat intake decreased as their self-efficacy scores increased [40]. This research and those of our collaborators support self-efficacy as a theoretical model and a possible mediator to improve lifestyle change in the obese [41,42]. We found increased self-efficacy as related to negative emotions, food availability and physical discomfort in those women who lost weight during the year. In addition, self-efficacy scores at twelve months remained increased, six months after the intervention had concluded. In terms of eating behavior, restraint was also improved in patients who lost weight. Weight losers, however, had a lower disinhibition score, indicating an increased likelihood to overeat in the presence of disinhibitors. This was an unexpected finding and may indicate that there are still certain triggers that are evident and more attention to these is possibly needed. Different populations, including endometrial cancer survivors, may need more intense interventions in order to change morbid patterns [24]. Limitations of this study include its small sample size and lack of racial heterogeneity. There was a greater number of patients in the intervention group who had a prior history of bariatric surgery. 
These patients may have better self-efficacy and eating behavior patterns that could influence results, though the number of patients is too small to draw any conclusions. However, the findings should trigger clinicians to focus on positive lifestyle change, improving self-efficacy and decreasing co-morbidity in endometrial cancer survivors. An additional limitation was potential contamination of the control group, as depression improved in the control group over time. This may have been influenced by all study patients participating in an orientation meeting, in addition to meeting with the principal investigator at the 3 measurement time points. It may be that simple increased physician contact (without teaching) can improve outcome measures. Others have questioned whether improving QOL could be attributable to the increased attention (or increasing self-efficacy) given to cancer patients involved in exercise interventions [43]. QOL may improve temporally as patients experience more years of survivorship and travel farther from the challenging time of diagnosis. In addition, we can also hypothesize that the weight loss observed was not large enough to see a change in QOL. However, given that this was a feasibility study, the results are helpful in designing a future intervention trial.\n\nBODY.CONCLUSION:\nThis pilot lifestyle intervention had no effect on quality of life or depression, but did improve self-efficacy and restraint. A substantial number of endometrial cancer survivors are surviving their cancer; however, they are succumbing to other diseases correlated with obesity. The Gynecologic Oncology Group is considering adding QOL measures to their next large prospective endometrial cancer trial with long-term follow-up. Future directions will also consist of the measurement of 5-year outcomes in this study population [44]. Goals of upcoming projects are to decrease co-morbidity and increase overall survival in endometrial cancer survivors.\n\nBODY.COMPETING INTERESTS:\nThe authors declare that they have no competing interests.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nVVG, HG, MBK, JJ, EL and KC conceived of the study, and participated in its design and coordination. VVG, HG, and MBK implemented the study and were responsible for day-to-day conduct of the study. VVG, HG, and KC analyzed the data. VVG, HG, and KC drafted the manuscript; MBK, JJ and EL provided critical review. All authors read and approved the final manuscript.\n\n**Question:** Compared to usual care what was the result of lifestyle intervention (LI) consisting of exercise and nutritional counseling with cognitive-behavior modification on self-efficacy related to social pressure and restraint in overweight and obese endometrial cancer survivors?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
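The endometrial cancer row above states how BMI was derived (weight in kilograms divided by the square of height in metres) and how participants were categorized under WHO guidelines, with randomization stratified at BMI 40. Below is a small sketch of those stated cut-offs; the function names and the example values are illustrative and not taken from the study.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in metres squared."""
    return weight_kg / (height_m ** 2)

def who_category(bmi_value: float) -> str:
    """WHO categories as listed in the article's Methods section."""
    if bmi_value < 18.5:
        return "underweight"
    if bmi_value < 25.0:
        return "healthy weight"
    if bmi_value < 30.0:
        return "overweight"
    if bmi_value < 35.0:
        return "obese class I"
    if bmi_value < 40.0:
        return "obese class II"
    return "obese class III (morbid obesity)"

# Randomization in the trial was stratified by BMI < 40 versus >= 40
print(who_category(bmi(115.0, 1.65)))   # obese class III (morbid obesity)
```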
833
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Behavioural recovery after treatment for varicose veins\n\n ABSTRACT.BACKGROUND:\nThe aim of this study was to assess behavioural recovery from the patient's perspective as a prespecified secondary outcome in a multicentre parallel‐group randomized clinical trial comparing ultrasound‐guided foam sclerotherapy (UGFS), endovenous laser ablation (EVLA) and surgery for the treatment of primary varicose veins.\n\nABSTRACT.METHODS:\nParticipants were recruited from 11 UK sites as part of the CLASS trial, a randomized trial of UGFS, EVLA or surgery for varicose veins. Patients were followed up 6 weeks after treatment and asked to complete the Behavioural Recovery After treatment for Varicose Veins (BRAVVO) questionnaire. This is a 15‐item instrument that covers eight activity behaviours (tasks or actions an individual is capable of doing in an idealized situation) and seven participation behaviours (what the individual does in an everyday, real‐world situation) that were identified to be important from the patient's perspective.\n\nABSTRACT.RESULTS:\nA total of 798 participants were recruited. Both UGFS and EVLA resulted in a significantly quicker recovery compared with surgery for 13 of the 15 behaviours assessed. UGFS was superior to EVLA in terms of return to full‐time work (hazard ratio 1·43, 95 per cent c.i. 1·11 to 1·85), looking after children (1·45, 1·04 to 2·02) and walks of short (1·48, 1·19 to 1·84) and longer (1·32, 1·05 to 1·66) duration.\n\nABSTRACT.CONCLUSION:\nBoth UGFS and EVLA resulted in more rapid recovery than surgery, and UGFS was superior to EVLA for one‐quarter of the behaviours assessed. The BRAVVO questionnaire has the potential to provide important meaningful information to patients about their early recovery and what they may expect to be able to achieve after treatment.\n\nBODY.INTRODUCTION:\nMinimally invasive treatments for varicose veins such as ultrasound‐guided foam sclerotherapy (UGFS) and thermal ablation techniques have become widely used alternatives to surgery for the treatment of varicose veins. One of the advantages of these techniques is the reported quicker return to normal activities, particularly following UGFS1, 2, 3. However, it is unclear whether thermal ablation, in particular endovenous laser ablation (EVLA), is also associated with a clinically significant quicker return to normal activities compared with surgery; some studies4, 5 have shown an earlier return and others2, 6, 7, 8 no difference. Until recently, there was no standard means of assessing recovery from the patient's perspective. This led to the use of varying definitions such as return to 'normal activities', 'full activity', 'daily activity' or 'basic physical activities' and/or 'return to work' in previous studies. 
This lack of standardization led the authors to develop a 15‐item questionnaire to assess distinct aspects of normal activities that were identified as important by patients9 – the Behavioural Recovery After treatment for Varicose Veins (BRAVVO) questionnaire. This paper reports behavioural recovery results from a multicentre parallel‐group randomized clinical trial (CLASS, Comparison of LAser, Surgery and foam Sclerotherapy) that compared the clinical efficacy and cost‐effectiveness of three treatment modalities: UGFS, EVLA with delayed foam sclerotherapy to residual varicosities if required, and surgery. Behavioural recovery was one of the prespecified secondary outcomes of the CLASS trial. The clinical and cost‐effectiveness results have been reported elsewhere10, 11.\n\nBODY.METHODS:\nPatients were recruited from 11 centres in the UK between November 2008 and October 2012. This study (ISRCTN51995477) had research ethics committee and Medicines and Healthcare products Regulatory Authority approval. Eight centres randomized participants to one of three treatment options, and three centres offered only UGFS and surgery. Participants were randomized between the treatments with even allocation, using a minimization algorithm that included centre, age (less than 50 years, 50 years or more), sex, great saphenous vein (GSV) or small saphenous vein (SSV) reflux, and unilateral or bilateral disease. Inclusion criteria were: age over 18 years; primary unilateral or bilateral symptomatic varicose veins (Clinical Etiologic Anatomic Pathophysiological (CEAP) grade C2 or above); GSV and/or SSV involvement; and reflux exceeding 1 s on duplex ultrasonography. Exclusion criteria were: current deep vein thrombosis; acute superficial vein thrombosis; a GSV or SSV diameter smaller than 3 mm or larger than 15 mm; tortuous veins considered unsuitable for EVLA or stripping; and contraindications to UGFS or to general/regional anaesthesia that would be required for surgery.\n\nBODY.METHODS.TREATMENTS:\nThe treatments have been described in detail elsewhere9, 10. For UGFS, foam was produced using the Tessari technique12 using a ratio of 0·5 ml sodium tetradecyl sulphate to 1·5 ml air (3 per cent for GSV/SSV truncal veins, 1 per cent for varicosities; maximum 12 ml foam per session). EVLA of GSVs/SSVs was performed under local anaesthetic, and patients were offered UGFS to any residual varicosities at 6‐week follow‐up if required, with the exception of one centre that performed concurrent phlebectomies. Surgery in the form of proximal GSV/SSV ligation and stripping (all GSV) and concurrent phlebectomies was performed under general or regional anaesthetic as a day‐case procedure. Compression stockings were applied after all three treatments.\n\nBODY.METHODS.POST‐TREATMENT ACTIVITY:\nAll participants were given a study patient information leaflet (PIL), which recommended a return to all normal activities as soon as they were able, but that strenuous activity/contact sport should be avoided for 1–2 weeks. The PIL specifically stated that following EVLA or UGFS 'most people are able to return to work within 2–3 days of treatment, but some people go back the following day or even the same day', and that following surgery 'people can return to office or sedentary work after 2–3 days; and that most people will be back at work within a week after surgery to one leg and 2 weeks after surgery to both legs; but there is no reason to remain off work as long if it can be managed with reasonable comfort'. 
Participants undergoing UGFS or EVLA were advised to wear compression stocking for 10 days constantly (day and night). Those in the surgery group were advised that bandages would be removed the day after operation, following which they should wear a stocking for 10 days, but that it was reasonable to remove the stocking after 4 or 5 days, providing that they were active.\n\nBODY.METHODS.DATA COLLECTION:\nThe participants were asked to complete the BRAVVO questionnaire along with other study questionnaires (Aberdeen Varicose Vein Questionnaire, EQ‐5DTM (EuroQoL, Rotterdam, The Netherlands) and Short Form 36 (QualityMetric, Lincoln, Rhode Island, USA)) at the 6‐week follow‐up appointment. Participants who failed to attend the 6‐week appointment were sent the questionnaire to complete at home. The BRAVVO questionnaire was developed as an instrument to assess the activity and participation components of the World Health Organization International Classification of Disability and Function model13. Variation in activity and participation is not fully explained by impairment and so these constructs are important additional indicators of health outcome. An interview study involving 17 patients who had recently undergone varicose vein treatment was carried out to identify normal activities and 'milestone' behaviours to incorporate into the questionnaire. In addition to sampling from the three treatment options, diversity sampling was used in an attempt to gain a mix of sex, age and rural–urban location. Seventeen interview transcripts were content‐analysed in four stages to identify appropriate items to include in a questionnaire. Full details of this process have been published previously9. The BRAVVO questionnaire assesses the time taken for patients to return to performing 15 behaviours: eight 'activity' behaviours (tasks or actions an individual is capable of doing in an idealized situation) and seven 'participation' behaviours (what the individual does in an everyday, real‐world situation) that were identified to be important from the patient's perspective9. Fig. 1 shows the question layout. Figure 1Question layoutBJS-10081-FIG-0001-c\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nData from the BRAVVO questionnaire were analysed within an interval‐censored time‐to‐event framework using flexible parametric survival models14. For each behaviour item, each participant's response was converted into the number of days to return to that behaviour. If a participant indicated that return to the behaviour was on the day of the procedure, this was assumed to be interval‐censored between day 0 and day 1. If a participant indicated return to the behaviour was after a number of weeks, not days, this was assumed to be interval‐censored between the previous week and the week indicated. For example, if a participant reported 5 weeks, it was assumed that the return to the behaviour took place between 28 and 35 days. A participant who indicated that they had not returned to a behaviour that they usually performed was right‐censored at 42 days. Participants who indicated that they did not normally perform a specific behaviour were not included in analysis of that behaviour. No missing data were imputed. Data are reported as the number of days for 50 and 90 per cent of participants to return to each behaviour, estimated from the parametric survival models (the 50 per cent value represents the median time to return to this behaviour). 
Extrapolation beyond the 42‐day cut‐off was performed for behaviours where 90 per cent of participants had not returned to the behaviour by 42 days. Treatment effects are presented as hazard ratios with associated 95 per cent c.i. All analyses were carried out in Stata® 1215. Flexible parametric survival models were fitted using the stpm package16.\n\nBODY.RESULTS:\nSeven hundred and ninety‐eight participants were recruited, of whom 13 were ineligible (for example because they had recurrent veins or veins larger than 15 mm in diameter) after randomization and were considered postrandomization exclusions (Fig. \n2). The groups were well balanced in terms of demographic characteristics at baseline, but there was an increased incidence of deep venous reflux in the foam group compared with the surgery group (P = 0·005) (Table \n1). Of the 670 participants who completed the 6‐week questionnaire, 655 completed at least one of the BRAVVO questions. Completion rates were slightly lower for the questions about going out socially (74·8 per cent) and sporting activity (66·0 per cent), which may not have been relevant to all participants. For all behaviours, except wearing clothes that show the leg, going out socially and sporting activities, over 95 per cent of participants had returned to normal behaviour within 6 weeks of intervention. Figure 2CONSORT diagram for the trial. Reasons for postrandomization exclusion included: recurrent varicose veins and veins larger than 15 mm. Reasons for withdrawal from follow‐up included: patient decided not to proceed with treatment (and also declined follow‐up), declined follow‐up after treatment or did not wish to complete questionnaires. UGFS, ultrasound‐guided foam sclerotherapy; EVLA, endovenous laser ablationBJS-10081-FIG-0002-c Table 1 Demographic details at recruitment EVLA UGFS Surgery ( n = 210) ( n = 286) ( n = 289) Age (years) * \n 49·7 (18–80) 49·0 (19–78) 49·2 (22–85) Sex ratio (F : M) 120 : 90 162 : 124 163 : 126 Body mass index (kg/m 2 ) * \n 27·0 (17–42) 27·1 (17–44) 27·7 (17–44) Unilateral disease 153 (72·9) 215 (75·2) 196 (67·8) Employment status Self‐employed 21 (10·2) 37 (13·0) 29 (10·3) Employed 120 (58·3) 169 (59·3) 179 (63·5) Other 65 (31·6) 79 (27·7) 74 (26·2) Unknown 4 1 7 Saphenous vein involvement Great saphenous 182 (86·7) 232 (81·1) 239 (82·7) Small saphenous 14 (6·7) 21 (7·3) 21 (7·3) Great and small saphenous 14 (6·7) 33 (11·5) 29 (10·0) Deep vein reflux 28 of 205 (13·7) 47 of 280 (16·8) 25 of 282 (8·9) CEAP classification C2, varicose veins over 3 mm 113 (54·1) 169 (59·1) 147 (51·2) C3, oedema 28 (13·4) 35 (12·2) 39 (13·6) C4, skin/subcutaneous changes 56 (26·8) 74 (25·9) 90 (31·4) C5/C6, healed/active venous ulcer 12 (5·7) 8 (2·8) 11 (3·8) Unknown 1 0 2 Values in parentheses are percentages unless indicated otherwise; * values are mean (range). EVLA, endovenous laser ablation; UGFS, ultrasound‐guided foam sclerotherapy; CEAP, Clinical Etiologic Anatomic Pathophysiologic. \n\nBODY.RESULTS.ULTRASOUND‐GUIDED FOAM SCLEROTHERAPY :\nParticipants randomized to UGFS recalled being able to carry out 13 of the 15 behaviours significantly more quickly than those randomized to surgery (Table \n2\n). The two behaviours for which there was no evidence of a difference in the time to recover between the trial arms were 'having a bath or shower' and 'wearing clothes that show the legs'. In general, the median time to return to the activity behaviours was 5 days or less for those randomized to UGFS and up to 9 days for those randomized to surgery. 
In both groups, there was greater variation in the median time to return to the participation behaviours than the activity behaviours. Table 2 Behavioural recovery: ultrasound‐guided foam sclerotherapy versus surgery Proportion carrying out behaviour (%) Time until specified proportion of participants could carry out behaviour (days) * \n Hazard ratio † \n UGFS Surgery Activity items Bending the legs without discomfort 50 3·0 4·6 1·38 (1·14, 1·67) 90 14·1 21·3 Lifting heavy objects without discomfort 50 4·8 9·8 1·97 (1·59, 2·44) 90 16·9 34·5 Moving from standing to sitting without discomfort 50 1·9 3·7 1·63 (1·35, 1·97) 90 9·3 17·5 Standing still for a long time (> 15 min ) without discomfort 50 3·9 7·1 1·67 (1·36, 2·05) 90 15·8 28·7 Walking short distances (< 20 min ) without discomfort 50 1·9 4·4 2·00 (1·65, 2·42) 90 8·2 19·1 Walking long distances (> 20 min) 50 4·5 8·0 1·76 (1·45, 2·14) 90 15·2 27·1 Having a bath or shower 50 5·4 4·9 0·85 (0·70, 1·03) 90 11·4 10·3 Driving a car 50 4·1 7·0 1·78 (1·45, 2·19) 90 12·4 21·1 Participation items Doing housework 50 2·1 4·5 2·10 (1·72, 2·56) 90 7·3 15·7 Looking after children 50 1·2 3·5 2·20 (1·61, 3·00) 90 6·2 17·9 Wearing clothes that show the legs 50 12·4 12·8 1·03 (0·78, 1·35) 90 56·6 58·7 Partial return to normal work/employment 50 4·4 9·9 2·16 (1·72, 2·72) 90 15·4 34·2 Full return to normal work/employment 50 4·8 11·7 2·56 (2·05, 3·21) 90 14·9 36·2 Going out socially 50 7·1 9·3 1·29 (1·06, 1·57) 90 25·8 34·0 Sporting activity or exercise 50 15·7 21·8 1·33 (1·05, 1·68) 90 62·6 86·7 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the surgery arm. UGFS, ultrasound‐guided foam sclerotherapy. \n\nBODY.RESULTS.ENDOVENOUS LASER ABLATION :\nParticipants randomized to EVLA recalled being able to carry out 13 of the 15 behaviours significantly more quickly than those randomized to surgery (Table \n3). Return to 'having a bath or shower' was quicker after surgery than after EVLA. There was no difference in time to return to the participation behaviour of 'wearing clothes that show the legs'. 
Table 3 Behavioural recovery: endovenous laser ablation versus surgery Proportion carrying out behaviour (%) Time until specified proportion of participants could carry out behaviour (days) * \n Hazard ratio † \n EVLA Surgery Activity items Bending the legs without discomfort 50 2·7 4·6 1·49 (1·19, 1·75) 90 12·6 21·3 Lifting heavy objects without discomfort 50 5·9 9·8 1·79 (1·39, 2·27) 90 20·5 34·5 Moving from standing to sitting without discomfort 50 2·2 3·7 1·56 (1·27, 1·96) 90 10·4 17·5 Standing still for a long time (> 15 min) without discomfort 50 4·8 7·1 1·41 (1·11, 1·79) 90 20·0 28·7 Walking short distances (< 20 min) without discomfort 50 3·0 4·4 1·30 (1·04, 1·61) 90 13·2 19·1 Walking long distances (> 20 min) 50 5·6 8·0 1·53 (1·06, 1·67) 90 19·8 27·1 Having a bath or shower 50 5·5 4·9 0·74 (0·59, 0·93) 90 12·8 10·3 Driving a car 50 4·4 7·0 1·82 (1·43, 2·33) 90 12·7 21·1 Participation items Doing housework 50 2·5 4·5 1·89 (1·49, 2·38) 90 8·4 15·7 Looking after children 50 1·9 3·5 1·61 (1·15, 2·27) 90 8·8 17·9 Wearing clothes that show the legs 50 14·6 12·8 0·97 (0·69, 1·35) 90 75·1 58·7 Partial return to normal work/employment 50 6·3 9·9 1·75 (1·33, 2·27) 90 21·1 34·2 Full return to normal work/employment 50 7·7 11·7 1·79 (1·37, 2·27) 90 23·5 36·2 Going out socially 50 6·9 9·3 1·41 (1·12, 1·75) 90 23·9 34·0 Sporting activity or exercise 50 14·2 21·8 1·47 (1·12, 1·92) 90 55·5 86·7 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the surgery arm. EVLA, endovenous laser ablation. There were no differences in the time taken to return to 11 of the 15 behaviours between participants randomized to EVLA and those randomized to UGFS (Table \n4). Return to 'walking short distances without discomfort', 'walking long distances', 'looking after children' and 'full return to normal work/employment' took longer for the EVLA group than the UGFS group. Following UGFS or EVLA only one‐third of the specific behaviours could be carried out by 50 per cent of participants by 3 days after treatment. 
Table 4 Behavioural recovery: endovenous laser ablation versus ultrasound‐guided foam sclerotherapy Proportion carrying out behaviour (%) Time until specified proportion of participants can carry out behaviour (days) * \n Hazard ratio † \n EVLA UGFS Activity items Bending the legs without discomfort 50 2·7 3·0 0·94 (0·75, 1·17) 90 12·6 14·1 Lifting heavy objects without discomfort 50 5·9 4·8 1·11 (0·87, 1·42) 90 20·5 16·9 Moving from standing to sitting without discomfort 50 2·2 1·9 1·12 (0·90, 1·40) 90 10·4 9·3 Standing still for a long time (> 15 min) without discomfort 50 4·8 3·9 1·14 (0·90, 1·44) 90 20·0 15·8 Walking short distances (< 20 min) without discomfort 50 3·0 1·9 1·48 (1·19, 1·84) 90 13·2 8·2 Walking long distances (> 20 min) 50 5·6 4·5 1·32 (1·05, 1·66) 90 19·8 15·2 Having a bath or shower 50 5·5 5·4 1·19 (0·96, 1·48) 90 12·8 11·4 Driving a car 50 4·4 4·1 0·95 (0·74, 1·21) 90 12·7 12·4 Participation items Doing housework 50 2·5 2·1 1·03 (0·82, 1·29) 90 8·4 7·3 Looking after children 50 1·9 1·2 1·45 (1·04, 2·02) 90 8·8 6·2 Wearing clothes that show the legs 50 14·6 12·4 1·17 (0·83, 1·64) 90 75·1 56·6 Partial return to normal work/employment 50 6·3 4·4 1·17 (0·89, 1·52) 90 21·1 15·4 Full return to normal work/employment 50 7·7 4·8 1·43 (1·11, 1·85) 90 23·5 14·9 Going out socially 50 6·9 7·1 0·88 (0·70, 1·10) 90 23·9 25·8 Sporting activity or exercise 50 14·2 15·7 0·80 (0·61, 1·04) 90 55·5 62·6 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the endovenous laser ablation (EVLA) arm. UGFS, ultrasound‐guided foam sclerotherapy. \n\nBODY.DISCUSSION:\nThis study showed that both UGFS and EVLA resulted in a more rapid recovery compared with surgery for 13 of the 15 behaviours. UGFS was superior to EVLA in terms of return to full time work, looking after children and walking (both short and long distances). Importantly, the specific behaviours assessed were shown to have a range of different recovery trajectories. Previous randomized clinical trials showed behavioural recovery to be more rapid following UGFS compared with surgery1, 2, but the benefit of EVLA over surgery was less clear2, 4, 8 . In this study, for all but two behaviours (wearing clothes that showed the legs and showering/bathing) the recovery was quicker following UGFS or EVLA compared with surgery. These findings may have arisen as a result of information contained in the study PIL, which recommended that compression hosiery was worn continuously for 10 days following UGFS or EVLA but for 4–5 days routinely after surgery. In the comparison between UGFS and EVLA, behavioural recovery was faster following UGFS for four of the 15 behaviours; there was no difference between the groups for the other behaviours. Two previous randomized trials2, 3 showed earlier return to 'normal activities' in patients undergoing UGFS compared with EVLA. Specifically, the present study showed a quicker return to full‐time work following UGFS, similar to the findings of Rasmussen and colleagues2. The median time taken to return to work following EVLA (7·7 days) was within the ranges reported2, 4, 5, 6, 7, 8. However, Rasmussen and colleagues2 reported earlier return to work after UGFS compared with the present study (median 2·9 versus 4·8 days respectively). 
A partial explanation of the difference between the two studies may be that, unlike the previous study, the present analysis did not correct for weekends. For other behaviours, the recalled recovery times following both UGFS and EVLA were longer than might be expected from the literature2, 3, 4, 5, 8. This may be explained by the timing of the questionnaire at 6 weeks, and thus it is the nature of the differences between treatment groups rather than the absolute timings taken to return to these activities that the authors wish to highlight in this paper. The extent of this overall delay in recovery is hard to justify, particularly in light of the standard information and advice given in the study PIL. There may have been a number of external influences affecting participants' recollection of their recovery, including misinformation and fear. Although attitudes to recovery and return to normal behaviours have changed in secondary care, this may not have filtered into primary care or 'public knowledge'. Fear of activity or fear of pain caused by activity has been documented following surgery for other conditions17, 18. It is possible that some people undergoing treatment for varicose veins experience similar fears, and this may limit or restrict their activity following treatment. With regard to return to work, there are clearly a number of additional factors that might play a role, such as a person's employment status (employed or self‐employed), the sickness benefits they are entitled to, the type of work they are employed to do, how long they are 'signed off' by the doctor, and the views of their employer on return to work after an operation. It should be noted that this study distinguished between partial and full return to work, and that no difference was noted in partial return to work following UGFS and EVLA. This finding may be of substantial importance to patients, their employers and the economy as a whole. The main strength of this study is that the behaviours investigated were based on systematic investigation of the recovery milestones that are important to patients following treatment for varicose veins. Hence, the findings are of personal importance from a patient perspective. Distinguishing between the behaviours that contribute to 'normal activity' helps build a profile of recovery that may be particularly useful for patients preparing for, or recovering from, treatment. Furthermore, the methodology used to develop the BRAVVO questionnaire could be used in other conditions to provide normative information about behavioural recovery that is relevant to patients. The BRAVVO questionnaire was pilot tested and found to be acceptable to patients, comprehensible and appropriate for self‐completion. Despite this, a potential weakness of the study is that the level of missing data in the BRAVVO questionnaire was higher for two of the questions. Further work to reformat or rephrase the questions or response options may help minimize levels of missing data. A further potential weakness is the choice of assessment time point (6 weeks after treatment). This may have compromised recall, particularly for behaviours that participants were able to return to a short time after treatment; however, any compromise in recall is likely to have affected the three treatment groups equally. Other study outcomes were assessed at 6 weeks, and behavioural recovery was assessed at the same time point to minimize participant burden. 
Further work is required to determine the optimal timing(s) of this questionnaire. Given that the median time to return to the behaviour was less than 14 days for 13 of the behaviours, and up to 22 days for the other two (wearing clothes that show the legs, sporting activity or exercise), the use of the questionnaire at approximately 2–3 weeks would seem appropriate.\n\n**Question:** Compared to surgery what was the result of Ultrasound‐guided foam sclerotherapy on Looking after children?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
966
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Absorption, Metabolism and Excretion of Cranberry (Poly)phenols in Humans: A Dose Response Study and Assessment of Inter-Individual Variability\n\n ABSTRACT:\nThe beneficial health effects of cranberries have been attributed to their (poly)phenol content. Recent studies have investigated the absorption, metabolism and excretion of cranberry (poly)phenols; however, little is known about whether they follow a dose response in vivo at different levels of intake. An acute double-blind randomized controlled trial in 10 healthy men with cranberry juices containing 409, 787, 1238, 1534 and 1910 mg total (poly)phenols was performed. Blood and urine were analyzed by UPLC-Q-TOF-MS. Sixty metabolites were identified in plasma and urine including cinnamic acids, dihydrocinnamic, flavonols, benzoic acids, phenylacetic acids, benzaldehydes, valerolactones, hippuric acids, catechols, and pyrogallols. Total plasma, but not excreted urinary (poly)phenol metabolites, exhibited a linear dose response (r2 = 0.74, p < 0.05), driven by caffeic acid 4-O-ß-d-glucuronide, quercetin-3-O-ß-d-glucuronide, ferulic acid 4-O-ß-d-glucuronide, 2,5-dihydroxybenzoic acid, 2,4-dihydroxybenzoic acid, ferulic acid, caffeic acid 3-O-ß-d-glucuronide, sinapic acid, ferulic acid 4-O-sulfate, 3-hydroxybenzoic acid, syringic acid, vanillic acid-4-O-sulfate, (4R)-5-(3′-hydroxyphenyl)-γ-valerolactone-4′-O-sulfate, 4-methylgallic acid-3-O-sulfate, and isoferulic acid 3-O-sulfate (all r2 ≥ 0.89, p < 0.05). Inter-individual variability of the plasma metabolite concentration was broad and dependent on the metabolite. Herein, we show that specific plasma (poly)phenol metabolites are linearly related to the amount of (poly)phenols consumed in cranberry juice. The large inter-individual variation in metabolite profile may be due to variations in the gut microbiome.\n\nBODY.1. INTRODUCTION:\nSeveral clinical trials have showed the beneficial effects of cranberry intake on human health [1,2,3,4,5,6,7]. However, while in vitro studies have shown dose-response effects of cranberry (poly)phenols on the inhibition of microbial invasion of gut and bladder epithelial cells [8,9] and in the cytotoxicity of ovarian cancer cells [10], very few human dose-response studies have been conducted. Howell et al. reported an ex vivo dose-dependent effect on E. coli anti-adhesion in urine after cranberry consumption [11], and more recently, cranberry powder intake has demonstrated a dose-dependent effect on reducing urinary tract symptoms in men [12]. However, dose-response studies with cranberry interventions that report improvements in human health not related to the urinary tract are scarce. 
We recently reported a dose-dependent enhancement in endothelial function after cranberry juice intake [13], while other works did not reveal any improvement in endothelial function or blood pressure after different doses of quercetin-3-O-glucoside, a flavonoid present in cranberries, although an increase in quercetin circulating metabolites was reported [14]. To the best of our knowledge, few works have investigated plasma (poly)phenol dose-response profiles after intake of (poly)phenol-rich foods. A statistically significant increase was seen in the maximum concentration (Cmax) and area under the curve (AUC) of pelargonidin-glucuronide, pelargonidin-3-O-glucoside and cyanidin-3-O-glucoside after increasing amounts of strawberry powder, which were correlated with insulin response and inflammatory markers [15]. However, not all (poly)phenols seem to follow a linear dose response. After green tea consumption, the sum of catechins detected in plasma did not follow a linear dose response, despite the fact that (-)-epicatechin and (-)-epigallocatechin gallate increased their plasma concentration proportionally with the dosage intake [16]. These results suggested an apparent saturation mechanism which was also reported for some plasma (poly)phenols after wild blueberry consumption [17]. We have previously reported that improvements in endothelial function after cranberry consumption correlated with plasma levels of phenolic metabolites, mostly deriving from phase II metabolism [13]. In this work, we evaluate whether the plasma and urine levels of 60 phenolic metabolites found after consumption of cranberry juice with five different (poly)phenol concentrations follow a dose response and we calculate the extent of their inter-individual variability.\n\nBODY.2. MATERIALS AND METHODS .2.1. CHEMICALS:\nSulfates and glucuronides of (poly)phenol standards were obtained from Toronto Research Chemicals (Toronto, ON, Canada). Kaempferol-3-O-ß-d-glucuronide was obtained from Extrasynthese (Genay, France). The 1-methylpyrogallol-O-sulfate, 2-methylpyrogallol-O-sulfate, 4-methylcatechol-O-sulfate, 4-methylgallic-3-O-sulfate, catechol-O-sulfate, pyrogallol-O-1-sulfate, pyrogallol-O-2-sulfate and vanillic acid-4-O-sulfate were kindly provided by Dr Claudia Nunes dos Santos and Dr Rita Ventura, and their synthesis has been described elsewhere [18]. The 2-, 3- and 4-hydroxyhippuric acids were purchased from Enamine (Kiev, Ukraine). Remaining compounds were obtained from Sigma-Aldrich Co. (Steinheim, Germany). Acetic acid was from Carl Roth (Karlsruhe, Germany) and Oasis HLB μElution plates (2 mg sorbent per well, 30 μm) were from Waters (Eschborn, Germany). Milli-Q system (Merck KGaA, Darmstadt, Germany) ultrapure water was used. Unless otherwise stated, all chemicals and reagents were obtained from Sigma-Aldrich Co. (Steinheim, Germany).\n\nBODY.2. MATERIALS AND METHODS .2.2. HUMAN STUDY :\nThe study protocol has been described previously [13]. In brief, ten healthy young individuals with ages between 18 and 35 years old participated in a cross-over randomized double-blinded control-matched clinical trial which was registered under the NIH [19] ClinicalTrials.gov website (NCT02517775). All procedures involving human subjects were approved by the University of Düsseldorf Research Ethics Committee (ref: 14-012) and all procedures were performed following the Declaration of Helsinki guidelines. Volunteers were instructed to follow a low-polyphenol diet and exclude berry intake for 24 h prior to each visit. 
Volunteers were fasted for at least 12 h prior to each intervention to reduce background diet interference as much as possible. During the study day, the same low (poly)phenol meal was giving to all volunteers together with the test drink, and no other food or drink was allowed until after 8 h post-consumption, except for water ad libitum. Participants consumed cranberry juice containing 409, 787, 1238, 1534 and 1910 mg total (poly)phenols (TP) in a random order with 1 week wash out. On the study day whole blood was collected at 0, 1, 2, 4, 6, 8 and 24 h and urine was collected 0–8 and 8–24 h after intake. Plasma was obtained by centrifuging whole-blood collected in EDTA-containing vacutainers for 15 min at 1800 g for at 4 °C; plasma samples were spiked with 2% formic acid. Ascorbic acid was added to the urine containers (3.75 g/2 L) and acidification with formic acid until pH 2.5 was achieved. Urine containers were kept in opaque cool bags with ice blocks. Plasma and urine aliquots were stored at −80 °C until analysis.\n\nBODY.2. MATERIALS AND METHODS .2.3. (POLY)PHENOL ANALYSIS IN PLASMA:\nPlasma was analyzed with a previous validated μ-SPE coupled with UPLC-Q-TOF-MS that allowed for high-throughput [20]. Briefly, 600 μL of plasma were centrifuged at 15,000× g for 15 min at 4 °C and 350 μL of the supernatant was diluted (1:1) with phosphoric acid 4% and spiked with taxifolin (50 nM) as an internal standard. 600 μL were loaded on a 96 well μ-SPE HLB plate, washed with 200 μL of water and 200 μL of 0.2% acetic acid and finally eluted with 60 μL of methanol. Extracted and concentrated plasma samples were analyzed with an Agilent 6550 iFunnel Accurate-Mass Quadrupole Time-of-Flight Mass Spectrometer (Q-TOF MS) (Agilent, Waldbronn, Germany) after separation on a 1290 Infinity UPLC system (Agilent, Waldbronn, Germany) using a Zorbax Eclipse Plus RRHD column 2.1 mm × 50 mm, 1.8 μm (Agilent, Waldbronn, Germany). The mobile phase consisted of 0.1% HCOOH (solvent A) and acetonitrile with 0.1% HCOOH (solvent B). The elution profile (flow rate of 0.4 mL/min) started at 1% solvent B and increased to 10% after 5 min, to 25% at 8 min and to 99% at 9.1 min. The percentage of solvent B was held constant for 0.9 min. The Q-TO-MS parameters were as follows: Negative mode, gas temperature 150 °C, gas flow 20 L/min, nebulizer 25 psig, sheath gas temperature 350 °C, sheath gas flow 12 L/min and Vcap 3000 V. Data were analyzed and processed using Mass Hunter Workstation Quantitative and Qualitative Analysis software (version B.06.00, Agilent, Waldbronn, Germany).\n\nBODY.2. MATERIALS AND METHODS .2.4. DATA TREATMENT AND STATISTICAL ANALYSIS:\nThe maximum plasma concentration (Cmax) and time needed to reach it (Tmax) and AUC of time vs. concentration (calculated using a trapezium method) were calculated using the PKSolver add-in software for Microsoft Excel [21]. Linear regression analysis was performed between (poly)phenol amount from each juice and the average AUC, giving a regression equation and r2 values. All statistical analysis was performed using GraphPad Prism V6. Significance was defined as p < 0.05.\n\nBODY.3. RESULTS.3.1. PLASMA KINETICS OF CRANBERRY (POLY)PHENOLS:\nA total of 60 compounds in plasma were identified and quantified post-consumption of cranberry juice containing 409, 787, 1238, 1534 and 1910 mg total (poly)phenols, as we have previously reported for the juice containing 787 mg of (poly)phenols [22]. 
These included cinnamic acids, dihydrocinnamic, flavonols, benzoic acids, phenylacetic acids, benzaldehydes, valerolactones, hippuric acids, catechols, and pyrogallols. Full details of the plasma kinetic profiles can be found in Supplementary Tables S1 and S2. In summary, plasma concentrations varied from low nM to mid μM. The maximum concentration in plasma (Cmax) ranged from 1–31,269 nM (after 409 mg TP), 1–263,152 nM (after 787 mg TP), 2–115,012 nM (after 1238 mg TP), 12–267,355 nM (after 1534 mg TP) and 5–390,606 nM (after 1910 mg TP). The highest Cmax for juices was achieved for 3-(4-hydroxyphenyl) propionic acid, followed by hippuric acid and catechol-O-sulfate. The time to reach Cmax (Tmax) varied depending on the metabolite and depending on the amount of (poly)phenols in the juice (Supplementary Figure S1). The level of TP in cranberry juice seemed to impact the Tmax of individual metabolites, for example the Tmax for 3-(4-hydroxyphenyl) propionic acid varied between 7.2 h (409 mg TP juice) and 13.8 h (787 mg TP juice). The AUC of the plasma concentration against the time plot was calculated for each metabolite between 0 and 24 h and was between 2–3,119,972 nM*h (caffeic acid and 3-(4-hydroxyphenyl) propionic acid at 409 mg TP), 1–2,274,346 nM*h (caffeic acid and hippuric acid at 787 mg TP), 2–1,609,708 nM*h (caffeic acid and 3-(4-hydroxyphenyl) propionic acid at 1238 mg TP), 1–390,606 nM*h (caffeic acid and 3-(4-hydroxyphenyl) propionic acid 1534 mg TP) and 5–2,467,256 nM*h (chlorogenic acid and 3-(4-hydroxyphenyl) propionic acid at 1910 mg TP). The compounds with a higher AUC for each dose were hippuric acid > catechol-O-sulfate > 2,3,dihydrobenzoic acid > phenylacetic acid.\n\nBODY.3. RESULTS.3.2. PLASMA DOSE RESPONSE :\nThe linear dose response for each plasma metabolite was assessed and of the 60 identified metabolites, 14 (poly)phenol metabolites displayed a linear dose-response curve (r2 ≥ 0.89) in plasma (Figure 1) with slopes significantly different from zero (p < 0.05): caffeic acid 4-O-ß-d-glucuronide and quercetin-3-O-ß-d-glucuronide displayed r2 > 0.98, while ferulic acid 4-O-ß-d-glucuronide, 2,5-dihydroxybenzoic acid, 2,4-dihydroxybenzoic acid, ferulic acid, caffeic acid 3-O-ß-d-glucuronide, sinapic acid, ferulic acid 4-O-sulfate, 3-hydroxybenzoic acid and syringic acid exhibited r2 > 0.90. Vanillic acid-4-O-sulfate, (4R)-5-(3′-hydroxyphenyl)-γ-valerolactone-4′-O-sulfate, 4-methylgallic acid-3-O-sulfate and isoferulic acid 3-O-sulfate had r2 values of almost 0.90. The sum of the (poly)phenol metabolites also exhibited a modest but significant (p > 0.05) correlation of r2 = 0.74 when plotted vs. (poly)phenol content in the interventions. Linear regression analysis was performed between the TP level in the intervention and the AUC of each class of compounds. The AUC for flavonols produced the highest r2 (0.97, p < 0.001), followed by the valerolactones (r2 = 0.89, p < 0.005), benzoic acids (r2 = 0.85, p < 0.01), cinnamic acids (r2 = 0.72, p < 0.05), phenylacetic acids (r2 = 0.71, p < 0.05), benzaldehydes (r2 = 0.65, p < 0.05); hippuric acids, pyrogallols, and catechols did not show a linear dose response.\n\nBODY.3. RESULTS.3.3. INTER-INDIVIDUAL VARIABILITY:\nThe coefficient of variation (CV) expressed as a percentage was calculated for Cmax and AUC for each individual plasma metabolite after consumption of juice containing 787 mg TP; the %CV was used to assess the inter-individual variability after consumption of the juice (Figure 2). 
The CV of the AUC for the total (sum of all 60) metabolites was 53% and the CV for Cmax was 51%. The CV for the Cmax varied between 43% for dihydroferulic acid 4-O-sulfate and 216% for vanillic acid. The CV for the AUC varied between 48% for 4-hydroxybenzaldehyde and 163% for 1-methylpyrogallol-O-sulfate.\n\nBODY.3. RESULTS.3.4. URINARY RECOVERY:\nThe same 60 (poly)phenol metabolites quantified in plasma were quantified in urine. Urinary excretion of (poly)phenol metabolites over the 24 h period post cranberry juice consumption is shown in Figure 3. Between 38 and 48 mg total urinary metabolites were excreted post-consumption of the five juices with varying TP content. The mean percentage recovery varied between 10.1% for the juice containing 406 mg TP down to 2.1% for the juice containing the highest TP content (1910 mg). The juice containing the least TP (409 mg) had a higher percentage excretion compared to all the other juices (p < 0.001). A positive, linear dose response for total excretion over 24 h of each urinary metabolite was assessed and of the 60 identified metabolites, 12 (poly)phenol metabolites displayed a positive linear dose-response curve (r2 ≥ 0.66) in urine with slopes significantly different from zero (p < 0.05); they were: 2,3-dihydrobenzoic acid (r2 = 0.79), 2,4-dihydrobenzoic acid (r2 = 0.77), dihydrocaffeic-3-O-sulfate (r2 = 0.66), ferulic-O-4-sulfate (r2 = 0.89), o-courmaric acid (r2 = 0.72), quercetin-3-O-ß-d-glucuronide (r2 = 0.89), 2,5-dihydroxybenzoic acid (r2 = 0.80), chlorogenic acid (r2 = 0.72), p-coumaric acid (r2 = 0.90), sinapic acid (r2 = 0.85), benzoic acid (r2 = 0.99), isoferulic acid (r2 = 0.74). The sum of (poly)phenol metabolites did not exhibit a linear dose response (r2 = 0.32, p = 0.25) when plotted vs. (poly)phenol content in the interventions.\n\nBODY.4. DISCUSSION:\nIn this study, we quantified 60 poly(phenol) metabolites in plasma and urine after consumption of cranberry juice containing five differing amounts of (poly)phenols. Other than previous work by our group where the same 60 metabolites were quantified in a single concentration of cranberry juice [22], this is a much larger number of metabolites than usually reported after cranberry juice consumption [23,24,25]. Further, here we present quantities of glucuronidated and sulfated metabolites in addition to the methylated and native metabolites, unlike many of the previous works looking at cranberry (poly)phenol metabolites. This study demonstrates that the most abundant plasma (poly)phenol metabolites post-ingestion of cranberry juice are 3-(4-hydroxyphenyl) propionic acid, hippuric acid, and catechol-O-sulfate; this was regardless of the amount of TP in the drink. Despite quantifying a comparatively large number of metabolites in this investigation, it should be noted that this is by no means the entirety of the cranberry (poly)phenol metabolome. Further understanding of the metabolic pathway and synthesis of further metabolites as standards is essential to fully elucidate this. This study investigated the relationship between the amount of (poly)phenols in cranberry juice and the levels of the corresponding metabolites in plasma. These data show that selected plasma (poly)phenol metabolites correlate with the amount of (poly)phenols ingested with cranberry juice, showing a linear dose response. However interestingly, this does not apply to all (poly)phenol metabolites. 
Although the correlation between the intake amount and AUC of the metabolite concentration is a parameter which is seldom assessed, of the 14 (poly)phenols identified as showing a dose response in this study, only three (2,4-dihydroxybenzoic acid, syringic acid, and ferulic acid) have previously been shown to correlate with the amount of (poly)phenols ingested [17]. However, this was after blueberry intake, which has a different native TP profile, and further, the investigators used an enzymatic treatment prior to quantification. Therefore, all glucuronidated, sulfated, and unconjugated metabolites would have been quantified together. Unlike the previous work after blueberry ingestion, no metabolites exhibited a significant negative correlation. Other investigations into the (poly)phenol dose relationship with metabolites do not report regression analysis in this way [15,16]; such an investigation in cranberry (poly)phenols has not previously been published. Despite the fact that less than a quarter of the 60 metabolites measured showed linear dose dependence, when assessing the sum of the plasma metabolites, these were positively correlated with the amount of (poly)phenols ingested in the drink. This suggests that those 14 metabolites are key in driving the overall correlation. Many of the metabolites assessed exhibited a very large inter-individual variability (CV up to 216%). However, each metabolite did not exhibit the same level of variability as has been discussed previously [26]. Plus, when looking at the total metabolites (sum of 60 quantified), the variability was much lower (53%), suggesting that it is not a matter of individuals poorly/effectively metabolizing poly(phenols) as a whole, but it is actually down to an individual's ability to generate specific metabolites. There are a number of factors that will influence the metabolism and absorption. These factors may include sex, genetic polymorphisms of transporters or metabolizing enzymes, environmental influences, and likely the composition of the gut microbiome. This inter-individual variability in relation to (poly)phenols has been suggested previously [27] and may be down to the variation in the make-up of the volunteer's gut microbiota. Recent work has shown that (poly)phenols are able to modulate the levels of gut bacteria, but it is also becoming more evident that the gut microbiota plays a very important role in (poly)phenol metabolism [28]. Thus, characterizing the native gut bacteria and investigating relationships between species of bacteria and specific metabolites will be necessary to understand this complex relationship between (poly)phenols and the gut microbiota. The urinary recovery of the (poly)phenols was low (≤10%), although in line with urinary recoveries after other (poly)phenol-rich foods. This could be due to not capturing the entire profile of (poly)phenol metabolites and hence underestimating urinary excretion, which is feasible. Further, fecal metabolites were not considered and it is possible that a big proportion of metabolites are lost via this route, in particular the high-molecular-weight proanthocyanidins, which are not absorbed to a big extent. The method of quantification of proanthocyanidins in cranberry that we used can also explain in part the low recovery, as previously discussed [22]. Here we used the OS4-dimethylaminocinnamaldehyde (DMAC) method rather than the often used BL DMAC method. 
The latter overestimates (poly)phenol levels due to the use of procyanidin A2 as a standard instead of a better representation of cranberry proanthocyanidin complexity; OS DMAC includes a standardized extract comprehending larger polymers [29]. Therefore, using what we believe is a more appropriate method of quantification of (poly)phenols in cranberry juice may explain the low urinary recovery. This is specifically important in regards to this study as proanthocyanidins contribute to ~30% of the total TP amount in the cranberry beverages tested. Although the excretion of 12 of the 60 metabolites exhibited a linear dose response, the sum of the 60 metabolites did not. Unlike the plasma metabolites, the response of the sum of the urinary metabolites seemed to plateau beyond 787 mg TP cranberry juice. Although excretion dose response is seldom investigated, similar plateaus in response to physical parameters (e.g., flow-mediated dilation) have been observed by us, in response to cranberry juice [13] and blueberry [30]. We acknowledge the limitations within this study, for example there may be metabolites that we have not measured which may impact the recovery. There are a number of compounds which escape phase I and II metabolism and undergo microbial catabolism. These compounds are transformed into small-molecular-weight compounds that are no longer polyphenol intake–specific (e.g., catechols, pyrogallols, etc.), hence making it impossible to know which are derived from the ingested (poly)phenols and which from the background diet. In addition, although the plasma was taken under the control of the analyst, most of the urine was collected under free-living conditions, and as our method of quantification was based solely on urine volume, this could have caused some error. Further, although participants were instructed regarding lifestyle and dietary adjustment prior to the study days, we had little control over this aspect. Therefore, this could have been an important factor increasing the inter-individual variation and also the differing Tmax between cranberry juices. Although in the study we intentionally investigated the effect of cranberry juice on healthy, young, male subjects, broadening the inclusion criteria (for example, including women) or modifying the inclusion criteria (deviating from healthy participants) could have impacted our findings and could be an interesting avenue of investigation. Taken together, we show that after consuming cranberry juice containing 409, 787, 1238, 1534, and 1910 mg total (poly)phenols, there is a linear dose-dependent increase in circulating plasma (poly)phenol metabolites, but not in urinary excretion. We note that not all individual metabolites exhibit the same effect with regard to linear regression, and we identify specific plasma metabolites that may be driving this overall effect. Further, herein we note the large inter-individual variation in the metabolic profile in response to (poly)phenol consumption and suggest that this may be in part due to inter-individual variation in the gut microbiome.\n\n**Question:** Compared to nothing what was the result of cranberry juices on 14 (poly)phenol metabolites?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
841
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Behavioural recovery after treatment for varicose veins\n\n ABSTRACT.BACKGROUND:\nThe aim of this study was to assess behavioural recovery from the patient's perspective as a prespecified secondary outcome in a multicentre parallel‐group randomized clinical trial comparing ultrasound‐guided foam sclerotherapy (UGFS), endovenous laser ablation (EVLA) and surgery for the treatment of primary varicose veins.\n\nABSTRACT.METHODS:\nParticipants were recruited from 11 UK sites as part of the CLASS trial, a randomized trial of UGFS, EVLA or surgery for varicose veins. Patients were followed up 6 weeks after treatment and asked to complete the Behavioural Recovery After treatment for Varicose Veins (BRAVVO) questionnaire. This is a 15‐item instrument that covers eight activity behaviours (tasks or actions an individual is capable of doing in an idealized situation) and seven participation behaviours (what the individual does in an everyday, real‐world situation) that were identified to be important from the patient's perspective.\n\nABSTRACT.RESULTS:\nA total of 798 participants were recruited. Both UGFS and EVLA resulted in a significantly quicker recovery compared with surgery for 13 of the 15 behaviours assessed. UGFS was superior to EVLA in terms of return to full‐time work (hazard ratio 1·43, 95 per cent c.i. 1·11 to 1·85), looking after children (1·45, 1·04 to 2·02) and walks of short (1·48, 1·19 to 1·84) and longer (1·32, 1·05 to 1·66) duration.\n\nABSTRACT.CONCLUSION:\nBoth UGFS and EVLA resulted in more rapid recovery than surgery, and UGFS was superior to EVLA for one‐quarter of the behaviours assessed. The BRAVVO questionnaire has the potential to provide important meaningful information to patients about their early recovery and what they may expect to be able to achieve after treatment.\n\nBODY.INTRODUCTION:\nMinimally invasive treatments for varicose veins such as ultrasound‐guided foam sclerotherapy (UGFS) and thermal ablation techniques have become widely used alternatives to surgery for the treatment of varicose veins. One of the advantages of these techniques is the reported quicker return to normal activities, particularly following UGFS1, 2, 3. However, it is unclear whether thermal ablation, in particular endovenous laser ablation (EVLA), is also associated with a clinically significant quicker return to normal activities compared with surgery; some studies4, 5 have shown an earlier return and others2, 6, 7, 8 no difference. Until recently, there was no standard means of assessing recovery from the patient's perspective. This led to the use of varying definitions such as return to 'normal activities', 'full activity', 'daily activity' or 'basic physical activities' and/or 'return to work' in previous studies. 
This lack of standardization led the authors to develop a 15‐item questionnaire to assess distinct aspects of normal activities that were identified as important by patients9 – the Behavioural Recovery After treatment for Varicose Veins (BRAVVO) questionnaire. This paper reports behavioural recovery results from a multicentre parallel‐group randomized clinical trial (CLASS, Comparison of LAser, Surgery and foam Sclerotherapy) that compared the clinical efficacy and cost‐effectiveness of three treatment modalities: UGFS, EVLA with delayed foam sclerotherapy to residual varicosities if required, and surgery. Behavioural recovery was one of the prespecified secondary outcomes of the CLASS trial. The clinical and cost‐effectiveness results have been reported elsewhere10, 11.\n\nBODY.METHODS:\nPatients were recruited from 11 centres in the UK between November 2008 and October 2012. This study (ISRCTN51995477) had research ethics committee and Medicines and Healthcare products Regulatory Authority approval. Eight centres randomized participants to one of three treatment options, and three centres offered only UGFS and surgery. Participants were randomized between the treatments with even allocation, using a minimization algorithm that included centre, age (less than 50 years, 50 years or more), sex, great saphenous vein (GSV) or small saphenous vein (SSV) reflux, and unilateral or bilateral disease. Inclusion criteria were: age over 18 years; primary unilateral or bilateral symptomatic varicose veins (Clinical Etiologic Anatomic Pathophysiological (CEAP) grade C2 or above); GSV and/or SSV involvement; and reflux exceeding 1 s on duplex ultrasonography. Exclusion criteria were: current deep vein thrombosis; acute superficial vein thrombosis; a GSV or SSV diameter smaller than 3 mm or larger than 15 mm; tortuous veins considered unsuitable for EVLA or stripping; and contraindications to UGFS or to general/regional anaesthesia that would be required for surgery.\n\nBODY.METHODS.TREATMENTS:\nThe treatments have been described in detail elsewhere9, 10. For UGFS, foam was produced using the Tessari technique12 using a ratio of 0·5 ml sodium tetradecyl sulphate to 1·5 ml air (3 per cent for GSV/SSV truncal veins, 1 per cent for varicosities; maximum 12 ml foam per session). EVLA of GSVs/SSVs was performed under local anaesthetic, and patients were offered UGFS to any residual varicosities at 6‐week follow‐up if required, with the exception of one centre that performed concurrent phlebectomies. Surgery in the form of proximal GSV/SSV ligation and stripping (all GSV) and concurrent phlebectomies was performed under general or regional anaesthetic as a day‐case procedure. Compression stockings were applied after all three treatments.\n\nBODY.METHODS.POST‐TREATMENT ACTIVITY:\nAll participants were given a study patient information leaflet (PIL), which recommended a return to all normal activities as soon as they were able, but that strenuous activity/contact sport should be avoided for 1–2 weeks. The PIL specifically stated that following EVLA or UGFS 'most people are able to return to work within 2–3 days of treatment, but some people go back the following day or even the same day', and that following surgery 'people can return to office or sedentary work after 2–3 days; and that most people will be back at work within a week after surgery to one leg and 2 weeks after surgery to both legs; but there is no reason to remain off work as long if it can be managed with reasonable comfort'. 
Participants undergoing UGFS or EVLA were advised to wear compression stocking for 10 days constantly (day and night). Those in the surgery group were advised that bandages would be removed the day after operation, following which they should wear a stocking for 10 days, but that it was reasonable to remove the stocking after 4 or 5 days, providing that they were active.\n\nBODY.METHODS.DATA COLLECTION:\nThe participants were asked to complete the BRAVVO questionnaire along with other study questionnaires (Aberdeen Varicose Vein Questionnaire, EQ‐5DTM (EuroQoL, Rotterdam, The Netherlands) and Short Form 36 (QualityMetric, Lincoln, Rhode Island, USA)) at the 6‐week follow‐up appointment. Participants who failed to attend the 6‐week appointment were sent the questionnaire to complete at home. The BRAVVO questionnaire was developed as an instrument to assess the activity and participation components of the World Health Organization International Classification of Disability and Function model13. Variation in activity and participation is not fully explained by impairment and so these constructs are important additional indicators of health outcome. An interview study involving 17 patients who had recently undergone varicose vein treatment was carried out to identify normal activities and 'milestone' behaviours to incorporate into the questionnaire. In addition to sampling from the three treatment options, diversity sampling was used in an attempt to gain a mix of sex, age and rural–urban location. Seventeen interview transcripts were content‐analysed in four stages to identify appropriate items to include in a questionnaire. Full details of this process have been published previously9. The BRAVVO questionnaire assesses the time taken for patients to return to performing 15 behaviours: eight 'activity' behaviours (tasks or actions an individual is capable of doing in an idealized situation) and seven 'participation' behaviours (what the individual does in an everyday, real‐world situation) that were identified to be important from the patient's perspective9. Fig. 1 shows the question layout. Figure 1Question layoutBJS-10081-FIG-0001-c\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nData from the BRAVVO questionnaire were analysed within an interval‐censored time‐to‐event framework using flexible parametric survival models14. For each behaviour item, each participant's response was converted into the number of days to return to that behaviour. If a participant indicated that return to the behaviour was on the day of the procedure, this was assumed to be interval‐censored between day 0 and day 1. If a participant indicated return to the behaviour was after a number of weeks, not days, this was assumed to be interval‐censored between the previous week and the week indicated. For example, if a participant reported 5 weeks, it was assumed that the return to the behaviour took place between 28 and 35 days. A participant who indicated that they had not returned to a behaviour that they usually performed was right‐censored at 42 days. Participants who indicated that they did not normally perform a specific behaviour were not included in analysis of that behaviour. No missing data were imputed. Data are reported as the number of days for 50 and 90 per cent of participants to return to each behaviour, estimated from the parametric survival models (the 50 per cent value represents the median time to return to this behaviour). 
Extrapolation beyond the 42‐day cut‐off was performed for behaviours where 90 per cent of participants had not returned to the behaviour by 42 days. Treatment effects are presented as hazard ratios with associated 95 per cent c.i. All analyses were carried out in Stata® 1215. Flexible parametric survival models were fitted using the stpm package16.\n\nBODY.RESULTS:\nSeven hundred and ninety‐eight participants were recruited, of whom 13 were ineligible (for example because they had recurrent veins or veins larger than 15 mm in diameter) after randomization and were considered postrandomization exclusions (Fig. \n2). The groups were well balanced in terms of demographic characteristics at baseline, but there was an increased incidence of deep venous reflux in the foam group compared with the surgery group (P = 0·005) (Table \n1). Of the 670 participants who completed the 6‐week questionnaire, 655 completed at least one of the BRAVVO questions. Completion rates were slightly lower for the questions about going out socially (74·8 per cent) and sporting activity (66·0 per cent), which may not have been relevant to all participants. For all behaviours, except wearing clothes that show the leg, going out socially and sporting activities, over 95 per cent of participants had returned to normal behaviour within 6 weeks of intervention. Figure 2CONSORT diagram for the trial. Reasons for postrandomization exclusion included: recurrent varicose veins and veins larger than 15 mm. Reasons for withdrawal from follow‐up included: patient decided not to proceed with treatment (and also declined follow‐up), declined follow‐up after treatment or did not wish to complete questionnaires. UGFS, ultrasound‐guided foam sclerotherapy; EVLA, endovenous laser ablationBJS-10081-FIG-0002-c Table 1 Demographic details at recruitment EVLA UGFS Surgery ( n = 210) ( n = 286) ( n = 289) Age (years) * \n 49·7 (18–80) 49·0 (19–78) 49·2 (22–85) Sex ratio (F : M) 120 : 90 162 : 124 163 : 126 Body mass index (kg/m 2 ) * \n 27·0 (17–42) 27·1 (17–44) 27·7 (17–44) Unilateral disease 153 (72·9) 215 (75·2) 196 (67·8) Employment status Self‐employed 21 (10·2) 37 (13·0) 29 (10·3) Employed 120 (58·3) 169 (59·3) 179 (63·5) Other 65 (31·6) 79 (27·7) 74 (26·2) Unknown 4 1 7 Saphenous vein involvement Great saphenous 182 (86·7) 232 (81·1) 239 (82·7) Small saphenous 14 (6·7) 21 (7·3) 21 (7·3) Great and small saphenous 14 (6·7) 33 (11·5) 29 (10·0) Deep vein reflux 28 of 205 (13·7) 47 of 280 (16·8) 25 of 282 (8·9) CEAP classification C2, varicose veins over 3 mm 113 (54·1) 169 (59·1) 147 (51·2) C3, oedema 28 (13·4) 35 (12·2) 39 (13·6) C4, skin/subcutaneous changes 56 (26·8) 74 (25·9) 90 (31·4) C5/C6, healed/active venous ulcer 12 (5·7) 8 (2·8) 11 (3·8) Unknown 1 0 2 Values in parentheses are percentages unless indicated otherwise; * values are mean (range). EVLA, endovenous laser ablation; UGFS, ultrasound‐guided foam sclerotherapy; CEAP, Clinical Etiologic Anatomic Pathophysiologic. \n\nBODY.RESULTS.ULTRASOUND‐GUIDED FOAM SCLEROTHERAPY :\nParticipants randomized to UGFS recalled being able to carry out 13 of the 15 behaviours significantly more quickly than those randomized to surgery (Table \n2\n). The two behaviours for which there was no evidence of a difference in the time to recover between the trial arms were 'having a bath or shower' and 'wearing clothes that show the legs'. In general, the median time to return to the activity behaviours was 5 days or less for those randomized to UGFS and up to 9 days for those randomized to surgery. 
In both groups, there was greater variation in the median time to return to the participation behaviours than the activity behaviours. Table 2 Behavioural recovery: ultrasound‐guided foam sclerotherapy versus surgery Proportion carrying out behaviour (%) Time until specified proportion of participants could carry out behaviour (days) * \n Hazard ratio † \n UGFS Surgery Activity items Bending the legs without discomfort 50 3·0 4·6 1·38 (1·14, 1·67) 90 14·1 21·3 Lifting heavy objects without discomfort 50 4·8 9·8 1·97 (1·59, 2·44) 90 16·9 34·5 Moving from standing to sitting without discomfort 50 1·9 3·7 1·63 (1·35, 1·97) 90 9·3 17·5 Standing still for a long time (> 15 min ) without discomfort 50 3·9 7·1 1·67 (1·36, 2·05) 90 15·8 28·7 Walking short distances (< 20 min ) without discomfort 50 1·9 4·4 2·00 (1·65, 2·42) 90 8·2 19·1 Walking long distances (> 20 min) 50 4·5 8·0 1·76 (1·45, 2·14) 90 15·2 27·1 Having a bath or shower 50 5·4 4·9 0·85 (0·70, 1·03) 90 11·4 10·3 Driving a car 50 4·1 7·0 1·78 (1·45, 2·19) 90 12·4 21·1 Participation items Doing housework 50 2·1 4·5 2·10 (1·72, 2·56) 90 7·3 15·7 Looking after children 50 1·2 3·5 2·20 (1·61, 3·00) 90 6·2 17·9 Wearing clothes that show the legs 50 12·4 12·8 1·03 (0·78, 1·35) 90 56·6 58·7 Partial return to normal work/employment 50 4·4 9·9 2·16 (1·72, 2·72) 90 15·4 34·2 Full return to normal work/employment 50 4·8 11·7 2·56 (2·05, 3·21) 90 14·9 36·2 Going out socially 50 7·1 9·3 1·29 (1·06, 1·57) 90 25·8 34·0 Sporting activity or exercise 50 15·7 21·8 1·33 (1·05, 1·68) 90 62·6 86·7 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the surgery arm. UGFS, ultrasound‐guided foam sclerotherapy. \n\nBODY.RESULTS.ENDOVENOUS LASER ABLATION :\nParticipants randomized to EVLA recalled being able to carry out 13 of the 15 behaviours significantly more quickly than those randomized to surgery (Table \n3). Return to 'having a bath or shower' was quicker after surgery than after EVLA. There was no difference in time to return to the participation behaviour of 'wearing clothes that show the legs'. 
Table 3 Behavioural recovery: endovenous laser ablation versus surgery Proportion carrying out behaviour (%) Time until specified proportion of participants could carry out behaviour (days) * \n Hazard ratio † \n EVLA Surgery Activity items Bending the legs without discomfort 50 2·7 4·6 1·49 (1·19, 1·75) 90 12·6 21·3 Lifting heavy objects without discomfort 50 5·9 9·8 1·79 (1·39, 2·27) 90 20·5 34·5 Moving from standing to sitting without discomfort 50 2·2 3·7 1·56 (1·27, 1·96) 90 10·4 17·5 Standing still for a long time (> 15 min) without discomfort 50 4·8 7·1 1·41 (1·11, 1·79) 90 20·0 28·7 Walking short distances (< 20 min) without discomfort 50 3·0 4·4 1·30 (1·04, 1·61) 90 13·2 19·1 Walking long distances (> 20 min) 50 5·6 8·0 1·53 (1·06, 1·67) 90 19·8 27·1 Having a bath or shower 50 5·5 4·9 0·74 (0·59, 0·93) 90 12·8 10·3 Driving a car 50 4·4 7·0 1·82 (1·43, 2·33) 90 12·7 21·1 Participation items Doing housework 50 2·5 4·5 1·89 (1·49, 2·38) 90 8·4 15·7 Looking after children 50 1·9 3·5 1·61 (1·15, 2·27) 90 8·8 17·9 Wearing clothes that show the legs 50 14·6 12·8 0·97 (0·69, 1·35) 90 75·1 58·7 Partial return to normal work/employment 50 6·3 9·9 1·75 (1·33, 2·27) 90 21·1 34·2 Full return to normal work/employment 50 7·7 11·7 1·79 (1·37, 2·27) 90 23·5 36·2 Going out socially 50 6·9 9·3 1·41 (1·12, 1·75) 90 23·9 34·0 Sporting activity or exercise 50 14·2 21·8 1·47 (1·12, 1·92) 90 55·5 86·7 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the surgery arm. EVLA, endovenous laser ablation. There were no differences in the time taken to return to 11 of the 15 behaviours between participants randomized to EVLA and those randomized to UGFS (Table \n4). Return to 'walking short distances without discomfort', 'walking long distances', 'looking after children' and 'full return to normal work/employment' took longer for the EVLA group than the UGFS group. Following UGFS or EVLA only one‐third of the specific behaviours could be carried out by 50 per cent of participants by 3 days after treatment. 
Table 4 Behavioural recovery: endovenous laser ablation versus ultrasound‐guided foam sclerotherapy Proportion carrying out behaviour (%) Time until specified proportion of participants can carry out behaviour (days) * \n Hazard ratio † \n EVLA UGFS Activity items Bending the legs without discomfort 50 2·7 3·0 0·94 (0·75, 1·17) 90 12·6 14·1 Lifting heavy objects without discomfort 50 5·9 4·8 1·11 (0·87, 1·42) 90 20·5 16·9 Moving from standing to sitting without discomfort 50 2·2 1·9 1·12 (0·90, 1·40) 90 10·4 9·3 Standing still for a long time (> 15 min) without discomfort 50 4·8 3·9 1·14 (0·90, 1·44) 90 20·0 15·8 Walking short distances (< 20 min) without discomfort 50 3·0 1·9 1·48 (1·19, 1·84) 90 13·2 8·2 Walking long distances (> 20 min) 50 5·6 4·5 1·32 (1·05, 1·66) 90 19·8 15·2 Having a bath or shower 50 5·5 5·4 1·19 (0·96, 1·48) 90 12·8 11·4 Driving a car 50 4·4 4·1 0·95 (0·74, 1·21) 90 12·7 12·4 Participation items Doing housework 50 2·5 2·1 1·03 (0·82, 1·29) 90 8·4 7·3 Looking after children 50 1·9 1·2 1·45 (1·04, 2·02) 90 8·8 6·2 Wearing clothes that show the legs 50 14·6 12·4 1·17 (0·83, 1·64) 90 75·1 56·6 Partial return to normal work/employment 50 6·3 4·4 1·17 (0·89, 1·52) 90 21·1 15·4 Full return to normal work/employment 50 7·7 4·8 1·43 (1·11, 1·85) 90 23·5 14·9 Going out socially 50 6·9 7·1 0·88 (0·70, 1·10) 90 23·9 25·8 Sporting activity or exercise 50 14·2 15·7 0·80 (0·61, 1·04) 90 55·5 62·6 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the endovenous laser ablation (EVLA) arm. UGFS, ultrasound‐guided foam sclerotherapy. \n\nBODY.DISCUSSION:\nThis study showed that both UGFS and EVLA resulted in a more rapid recovery compared with surgery for 13 of the 15 behaviours. UGFS was superior to EVLA in terms of return to full time work, looking after children and walking (both short and long distances). Importantly, the specific behaviours assessed were shown to have a range of different recovery trajectories. Previous randomized clinical trials showed behavioural recovery to be more rapid following UGFS compared with surgery1, 2, but the benefit of EVLA over surgery was less clear2, 4, 8 . In this study, for all but two behaviours (wearing clothes that showed the legs and showering/bathing) the recovery was quicker following UGFS or EVLA compared with surgery. These findings may have arisen as a result of information contained in the study PIL, which recommended that compression hosiery was worn continuously for 10 days following UGFS or EVLA but for 4–5 days routinely after surgery. In the comparison between UGFS and EVLA, behavioural recovery was faster following UGFS for four of the 15 behaviours; there was no difference between the groups for the other behaviours. Two previous randomized trials2, 3 showed earlier return to 'normal activities' in patients undergoing UGFS compared with EVLA. Specifically, the present study showed a quicker return to full‐time work following UGFS, similar to the findings of Rasmussen and colleagues2. The median time taken to return to work following EVLA (7·7 days) was within the ranges reported2, 4, 5, 6, 7, 8. However, Rasmussen and colleagues2 reported earlier return to work after UGFS compared with the present study (median 2·9 versus 4·8 days respectively). 
A partial explanation of the difference between the two studies may be that, unlike the previous study, the present analysis did not correct for weekends. For other behaviours, the recalled recovery times following both UGFS and EVLA were longer than might be expected from the literature2, 3, 4, 5, 8. This may be explained by the timing of the questionnaire at 6 weeks, and thus it is the nature of the differences between treatment groups rather than the absolute timings taken to return to these activities that the authors wish to highlight in this paper. The extent of this overall delay in recovery is hard to justify, particularly in light of the standard information and advice given in the study PIL. There may have been a number of external influences affecting participants' recollection of their recovery, including misinformation and fear. Although attitudes to recovery and return to normal behaviours have changed in secondary care, this may not have filtered into primary care or 'public knowledge'. Fear of activity or fear of pain caused by activity has been documented following surgery for other conditions17, 18. It is possible that some people undergoing treatment for varicose veins experience similar fears, and this may limit or restrict their activity following treatment. With regard to return to work, there are clearly a number of additional factors that might play a role, such as a person's employment status (employed or self‐employed), the sickness benefits they are entitled to, the type of work they are employed to do, how long they are 'signed off' by the doctor, and the views of their employer on return to work after an operation. It should be noted that this study distinguished between partial and full return to work, and that no difference was noted in partial return to work following UGFS and EVLA. This finding may be of substantial importance to patients, their employers and the economy as a whole. The main strength of this study is that the behaviours investigated were based on systematic investigation of the recovery milestones that are important to patients following treatment for varicose veins. Hence, the findings are of personal importance from a patient perspective. Distinguishing between the behaviours that contribute to 'normal activity' helps build a profile of recovery that may be particularly useful for patients preparing for, or recovering from, treatment. Furthermore, the methodology used to develop the BRAVVO questionnaire could be used in other conditions to provide normative information about behavioural recovery that is relevant to patients. The BRAVVO questionnaire was pilot tested and found to be acceptable to patients, comprehensible and appropriate for self‐completion. Despite this, a potential weakness of the study is that the level of missing data in the BRAVVO questionnaire was higher for two of the questions. Further work to reformat or rephrase the questions or response options may help minimize levels of missing data. A further potential weakness is the choice of assessment time point (6 weeks after treatment). This may have compromised recall, particularly for behaviours that participants were able to return to a short time after treatment; however, any compromise in recall is likely to have affected the three treatment groups equally. Other study outcomes were assessed at 6 weeks, and behavioural recovery was assessed at the same time point to minimize participant burden. 
Further work is required to determine the optimal timing(s) of this questionnaire. Given that the median time to return to the behaviour was less than 14 days for 13 of the behaviours, and up to 22 days for the other two (wearing clothes that show the legs, sporting activity or exercise), the use of the questionnaire at approximately 2–3 weeks would seem appropriate.\n\n**Question:** Compared to surgery what was the result of Endovenous laser ablation on Bending the legs without discomfort, Lifting heavy objects without discomfort, Moving from standing to sitting without discomfort, Standing still for a long time (> 15 min) without discomfort, Walking short distances (< 20 min) without discomfort, Walking long distances (> 20 min), Driving a car, Doing housework, Looking after children, Partial return to normal work/employment, Full return to normal work/employment, Going out socially, Sporting activity or exercise?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
604
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Premedication with midazolam prior to cesarean delivery in preeclamptic parturients: A randomized controlled trial\n\n ABSTRACT.BACKGROUND:\nAnxiety is a concern in obstetrics, especially in preeclamptic mothers. Sedation is not commonly used in parturients for fear of adverse neonatal effect. We investigated maternal and neonatal outcome of midazolam as an adjuvant to spinal anesthesia for elective cesarean delivery.\n\nABSTRACT.METHODS:\nA prospective randomized controlled trial, in which eighty preeclamptic parturients received either an intravenous dose of 0.035 mg/kg of midazolam or an equal volume of normal saline, 30 min before spinal anesthesia. Maternal anxiety was assessed using Amsterdam Preoperative Anxiety and Information Scale (APAIS); postoperative maternal satisfaction was assessed using Maternal Satisfaction Scale for Cesarean Section (MSSCS). Newborns were assessed using Apgar score, Neonatal Neurologic and Adaptive Capacity Score (NACS), and umbilical artery blood gases.\n\nABSTRACT.RESULTS:\nMothers premedicated with midazolam showed a lower level of preoperative anxiety and a higher degree of postoperative satisfaction than the control group. There were no between-group differences regarding the neonatal outcome.\n\nABSTRACT.CONCLUSION:\nPreeclamptic parturients premedicated with midazolam (0.035 mg/kg) before spinal anesthesia have lower anxiety and higher postoperative satisfaction levels, with no adverse effects on the newborns.\n\nBODY.INTRODUCTION:\nNeuraxial anesthesia is progressively used for elective cesarean delivery as it is associated with the best maternal and neonatal outcomes. In preeclampsia, spinal anesthesia provides hemodynamic stability and best risk-benefit outcome; it avoids the detrimental effects of general anesthesia on maternal blood pressure and the risk of difficult intubation.[1] By the mid-1990s, clinical trials demonstrated the safety of neuraxial anesthesia as an accepted alternative to general anesthesia for preeclamptic patients.[234] Recently, a focused review confirmed this fact.[5] One study has reported that 60–80% of patients suffers from preoperative anxiety.[6] Some studies showed a clear correlation between anxiety and preeclampsia.[78] Anxiety is a particular concern in obstetrics, as it may lead to autonomic vasoconstriction in the uterine arteries, which may induce fetal distress.[910] Parturients may endure a great level of anxiety and discomfort during the placement of the spinal needle; a condition could be worsened by preeclampsia. Sedation for neuraxial blockade in pregnant patients remained controversial, based on its potential adverse effects on the fetus, such as decreased motor tone and respiratory compromise. 
In the 1960s, several case reports showed decreased motor tone in newborns related to maternal diazepam.[1112] In contrast, midazolam is three to four times more potent than diazepam and is used mainly for maternal sedation. Although it crosses the placental barrier, in small doses, it does not cause any adverse effects in the neonate.[13] However, when used in high doses of 0.3 mg/kg for the induction of general anesthesia, the umbilical vein to maternal vein ratio was high (0.96), and the elimination half-life in the neonate was 6.3 h.[14] In the 1980s, a few reports discussed the intravenous (iv) administration of midazolam during pregnancy without providing much information about its use close to delivery.[1516] More recent studies investigated the sedative effect of midazolam with or without narcotics on normal parturients.[171819] However, the impact of midazolam as a sedative on preeclamptics has not been sufficiently studied prospectively. In this concern, we decided to study the maternal and neonatal outcomes of a sedative dose of midazolam in these cases.\n\nBODY.INTRODUCTION.AIMS:\nThis work aimed to determine the effect of premedication with midazolam on preoperative anxiety, perioperative hemodynamics, and postoperative satisfaction of preeclamptic parturients scheduled for elective cesarean delivery under spinal anesthesia. It was also designed to observe the adverse effects of midazolam on the neonatal outcome.\n\nBODY.METHODS:\nAfter approval of our Institutional Ethical Committee and obtaining informed written consents from all parturients included in this study, this prospective randomized controlled study was conducted on eighty full-term, of American Society of Anesthesiologists (ASA) Class I or II, 18–40-year-old parturients presenting with mild preeclampsia as defined by the American College of Obstetricians and Gynecologists.[20] They were scheduled for elective cesarean delivery under spinal anesthesia after excluding the contraindications of technique. This study was performed in our university hospital from October 1, 2014, to September 30, 2015. Emergency cases, patients with evidence of intrauterine growth restriction or fetal compromise, patients with severe preeclampsia, thrombocytopenia, HELLP syndrome (hemolysis, elevated liver enzymes, and low platelet levels), parturients receiving magnesium therapy, or with a body mass index >30 were excluded from the study. History of cardiac, pulmonary, hepatic, renal, neuropsychiatric or other diseases or substance abuse also were excluded from the study. All patients had fetal heart monitoring preoperatively to exclude fetal heart rate (HR) abnormalities. Preoperatively, a complete blood count, prothrombin time, liver and kidney function tests were done and recorded for each participant. All patients were assessed clinically and investigated for the exclusion of any of the above-mentioned contraindications. All patients were randomly allocated into two groups, midazolam group (Group M) and normal saline control group (Group S), using a computer-generated randomization list and a sealed envelope technique (n = 40/group). The study solution (midazolam or normal saline) was prepared in a 2 mL volume, by an anesthesiologist not involved in the care of the woman or collecting the data for the study. Another anesthesiologist who was blinded to the study provided perioperative care and collected perioperative data. 
All patients were booked on a morning list, instructed for the same preoperative fasting protocol and were educated as to how to complete the questionnaires required for the study. On arrival of the patients to the preoperative waiting room, routine monitoring (Electrocardiography, pulse oximeter probe, and a noninvasive blood pressure cuff) were applied, and wide-bore venous access was established for the infusion of lactated Ringer's solution. Thirty minutes before the induction of spinal anesthesia, group (M) was given iv premedication with midazolam (0.035 mg/kg) in 2 mL solution, whereas group (S) was given an equal quantity of normal saline. The Amsterdam Preoperative Anxiety and Information Scale (APAIS) was used in this study for the objective analysis of anxiety in all patients [Figure 1]. It consists of six questions receiving a score between one (none) and five (most), with a total score of 30, investigating patients' concerns and anxieties. It is short, reliable, and easy to administer. (Figure 1: Amsterdam Preoperative Anxiety and Information Scale.) APAIS was applied twice by the same anesthesia assistant in a preoperative visit to all patients in their rooms 1⁄2 h before their transfer to the operating theater (A1), and 5 min after the administration of the study drug (A2). Twenty minutes before the induction of spinal anesthesia, all patients received a rapid iv infusion of 500 mL of lactated Ringer's solution as a preload, followed by a replacement rate of 15 mL/kg/h. For spinal anesthesia, 2.5 mL of intrathecal hyperbaric bupivacaine (0.5%) was injected using a 25-gauge spinal needle with patients in the lateral decubitus position at the L 3/4 interspace, under strict aseptic precautions. Parturients were then immediately placed in the tilted supine position. The level of sensory block was determined with cold and pinprick tests. Oxygen by nasal cannula (2 L/min) was applied to all patients. Urinary catheterization was performed, and surgery started when a sufficient level of sensory block (T4) was achieved. The maternal HR, systolic blood pressure (SBP), and diastolic blood pressure (DBP) noninvasive arterial pressure were recorded on arrival to the preoperative waiting room (T0), 5 min after injecting the tested drug (T1), during the insertion of spinal needle (TS), 2 min (T2), and 5 min (T5) after the induction of spinal anesthesia. Hypotension was defined as a drop of SBP <100 mmHg or a drop >20% from (T1), which was considered as the baseline value. SBP <90 mmHg was treated with iv boluses of ephedrine in increments of 10 mg. Tachycardia was defined as HR >100 and bradycardia when HR <60. When HR was lower than 50 beats/min, atropine 0.5 mg iv was administered. Once the baby was delivered, the umbilical cord was double-clamped, and a blood sample from an umbilical artery (UA) was collected for blood gas analysis. Basic neonatal examination was performed, and Apgar scores were recorded at min 1 and 5, and the Neonatal Neurological and Adaptive Capacity Scoring (NACS) was measured and recorded at 15 min.[21] Postoperatively, patients were kept in the recovery room for 1 h and then sent to the ward. The Maternal Satisfaction Scale for cesarean section (MSSCS)[22] is a 22-item questionnaire specifically designed to evaluate maternal satisfaction with neuraxial anesthesia for elective cesarean delivery. Each item has a 7-point scale with minimum total score of 22 and maximum of 154, a higher score representing higher satisfaction. 
It was applied to all patients on the 1st postoperative day between 8:00 am and 1:00 pm. The development of seizures in cases of mild preeclampsia is extremely rare; however, if it took place, our protocol was to treat it with magnesium sulfate (loading dose of 4–6 g followed by continuous infusion of 1-2 g/h),[120] and the case was dropped out from the study. For sample size calculation, preoperative anxiety was taken as a primary parameter of interest. The power analysis is based on independent sample t-test and a 25% decrease of the APAIS score; the α value was 0.05 and the power (1-β) of the study was 0.80. Depending on the previous studies,[1823] it was calculated that a minimum of 36 patients would be required per group. However, we enrolled eighty patients (40/group) to allow for drop-outs. Data were analyzed using the SPSS statistics program (Version 16, SPSS Inc., Chicago, IL, USA). According to the type of data, they were represented as mean and standard deviation or frequencies and percentages. Comparisons of the two studied groups were performed using independent sample t-test or Mann–Whitney U-test as appropriate while repeated measures within the same group were compared using paired t-test. In all tests, results were considered statistically significant if P < 0.05.\n\nBODY.RESULTS:\nEighty cases were initially investigated, but five cases were excluded during the study; two cases in group (M) and one case in group (S) due to an inadequate spinal block, and one case in each group because of failure to collect umbilical arterial blood. In both groups, there were no significant differences regarding age, body weight, height, and spinal injection to delivery time as shown in Table 1. Table 1 Patients characteristics in both groups After administration of the tested drug (midazolam or normal saline), there was a significant reduction in APAIS readings (A2) in group (M); with a significant difference in (A2) between the two groups while there was no between-group difference before administration (A1) as shown in Table 2. Maternal Satisfaction Scale for cesarean section showed a significant level of maternal satisfaction in group (M) compared to group (S). There was no difference between the groups regarding the newborn Apgar or NACS scores as shown in Table 2. No significant differences were found between the two groups regarding the pH, PaCO2, PaO2, and base deficit in the umbilical arterial samples as listed in Table 3. Table 2 Amsterdam Preoperative Anxiety and Information Scale, Maternal Satisfaction Scale for cesarean section, Apgar score and neonatal neurologic and adaptive capacity score in both groups Table 3 Blood gas analysis of umbilical artery blood samples in both groups Regarding maternal hemodynamics, baseline HR, SBP, and DBP were comparable between the two groups before the administration of the tested drug. However, 5 min after injecting the tested drug, there was a significant decrease in all parameters in group (M) compared to group (S). During insertion of the spinal needle, there was a rise in HR and SBP in group (S) compared to group (M) while DBP was at a lower level in group (M). After the induction of spinal anesthesia and placing the patient into the tilted supine position, there was an increase in HR with some decrease in blood pressure parameters in both groups as shown in Tables 4–6. 
Table 4 Maternal heart rate in both groups Table 5 Systolic blood pressure in both groups Table 6 Diastolic blood pressure in both groups Hypotension occurred in three patients in the group (M) and one patient in the group (S), requiring a single bolus of ephedrine for each. No bradycardia was observed in any patient. Hemodynamic parameters then remained stable and comparable in both groups.\n\nBODY.DISCUSSION:\nThis study has shown a significant improvement of preoperative anxiety and postoperative satisfaction in parturients premedicated with midazolam compared to those in the control group, with no significant neonatal adverse effects. The study also documented improved maternal hemodynamics in the treated group before the administration of spinal anesthesia. However, there was no significant difference between groups regarding maternal hemodynamics throughout the rest of the procedure. The progressive increase in the number of cesarean sections[24] draws the attention to the psychological aspects of parturients, including their level of anxiety and perception of obstetric care they received.[25] Some studies tried to solve the problem of preoperative anxiety via nonpharmacological measures, such as the use of a preoperative information video in obstetric[26] and nonobstetric[27] patients, but their results were disappointing. However, other studies applied a preoperative information system and proved its effectiveness on the level of anxiety and postoperative satisfaction in nonobstetric patients.[282930] Midazolam is a commonly used sedative as it is fast-acting and of short duration, making it superior to other benzodiazepines. It is a lipophilic drug that can cross the placenta by passive diffusion. In large induction doses, midazolam and its metabolites cross the placenta in both animal[31] and human[14] studies. However, early in 1984, Kanto et al.[32] proved its excellent sedative effect following cesarean delivery performed under epidural anesthesia. They gave a relatively large iv dose (0.075 mg/kg) following baby removal, resulted in a rapid and marked sedative effect lasted for 2–3 h while patients were completely cooperative in the recovery room. Frölich et al.[19] gave a single iv dose of 0.02 mg/kg midazolam plus 1 μg/kg fentanyl to parturients undergoing cesarean delivery shortly before spinal needle insertion. Similar to our study, Apgar scores showed no difference between the treated group and the control group. They evaluated maternal anxiety with two methods; maternal plasma catecholamine levels at the time of delivery and three postoperative questions. They found a higher level of patient satisfaction in the treated group; otherwise, there were no between-group differences. In contrast, Senel and Mergan[18] gave a single iv dose of 0.025 mg/kg midazolam to parturients 30 min before cesarean delivery. They evaluated maternal anxiety using APAIS score and newborns with Apgar and NACS scores. They reported significantly lower anxiety scores in the treated groups, without any adverse neonatal outcome. However, they recommended further clinical trials with higher sample size to assess the larger doses of midazolam. Similar findings were found by Fung et al. 
in 1992, where 90% of mothers fell asleep with midazolam premedication before the start of surgery, with no adverse neonatal outcome; comparing the Apgar scores and umbilical venous pH values to the control group.[17] We chose the midazolam dose (0.035 mg/kg) based on these previous studies,[1819] and according to our own experience with Egyptian parturients, aiming for the best degree of maternal satisfaction without causing maternal hypoventilation or neonatal depression. We chose a single bolus based on body weight; however, in clinical practice, some may prefer to titrate iv drugs to effect. Thus, our results cannot be compared to other studies with repeated doses or iv infusion. The timing of premedication is also crucial. We chose it 30 min before the induction of spinal anesthesia, to allow enough time for suppression of anxiety; an important consideration in preeclampsia, whose cardiovascular responses are exaggerated. This is in contrast to Frölich et al., who gave their premedication immediately before the spinal anesthesia procedure. We employed several parameters of neonatal outcome. The Apgar scoring system is routinely used, due to its clear physiological correlation with neonatal well-being.[33] It is easily applied and gives a quick idea about newborn's clinical condition and efficiency of resuscitation efforts. We also used the NACS scoring system, a neurobehavioral scale that allows for the evaluation of potential effects of neonatal drug exposure. It is designed to distinguish drug-induced neonatal depression from that secondary to asphyxia. It stresses on motor tone as a main indicator of drug-induced depression.[21] However, Brockhurst et al. indicate its poor reliability regarding obstetric anesthesia research.[21] We also recorded UA blood gas values and noted no significant difference between groups. Again, the reliability of this test to detect neonatal depression has been questioned.[3435] That is why we employed a combination of several parameters to assess neonatal outcome. As part of this study, we evaluated the postoperative maternal satisfaction of anesthesia and birth. It is an important outcome that is commonly reported;[253637] since in healthy parturients, a pleasant experience is expected. Furthermore, anxiety may decrease the level of overall satisfaction with perioperative care.[38] In 1998, Robinson et al. highlighted some important factors that may affect satisfaction in the obstetric patient.[36] The development of a reliable and valid satisfaction scale for cesarean delivery has allowed to assess the effectiveness of sedatives on maternal satisfaction.[22] MSCCS has proven more sensitive than the standard VAS as it samples from the various physiological and psychological factors that make up satisfaction.[39] The reliability of a scale is directly related to the number of its items so that VAS would be expected to be of low reliability.[22] Our data revealed a remarkable level of maternal satisfaction in the group premedicated with midazolam compared to the control group. To the best of our knowledge, none of the studies concerned with sedatives during cesarean delivery under spinal anesthesia focused on this scale. Instead, Frölich et al. applied a simple questionnaire of three questions while Senel and Mergan and Fung et al. did not focus on this important outcome. The main limitation of this study was the inability to measure the placental transfer of midazolam and its plasma level in the fetal circulation, due to financial difficulties. 
However, this had been widely investigated by several authors.[143140] Another limitation is this study only enrolled mild preeclamptic patients. Thus, the extension of these findings to those with more severe features of preeclampsia (and potentially more adverse uteroplacental blood flow) remains to be investigated.\n\nBODY.CONCLUSION:\nA dose of 0.035 mg/kg of iv midazolam, administered 30 min before spinal anesthesia, is associated with improved preoperative anxiety, postoperative satisfaction, and maternal hemodynamics in preeclamptic parturients scheduled for elective cesarean delivery, with no adverse effects on the newborn.\n\nBODY.CONCLUSION.FINANCIAL SUPPORT AND SPONSORSHIP:\nNil.\n\nBODY.CONCLUSION.CONFLICTS OF INTEREST:\nThere are no conflicts of interest.\n\n**Question:** Compared to equal volume of normal saline, 30 min before spinal anesthesia what was the result of intravenous dose of 0.035 mg/kg of midazolam on neonatal outcome?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
864
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Statin efficacy in the treatment of hepatitis C genotype I\n\n ABSTRACT.BACKGROUND:\nLipid metabolism is one of the hepatitis C virus (HCV) life cycle steps. Statins can reduce cholesterol level and finally can decrease HCV replication. Thus, we assessed the effect of Statins in combination with standard antiviral treatment on hyperlipidemic genotype I HCV infected patients.\n\nABSTRACT.MATERIALS AND METHODS:\nThis study was a prospective clinical trial. 40 patients were selected from those referred to educational and Therapeutic Centers of Isfahan University of Medical Sciences from 2009 to 2010 with confirmed HCV viremia. All patients received Peg-interferon-a2a and ribavirin. 20 hyperlipidemic Patients received 20 mg atorvastatin nightly for 3 months and placebo was prescribed for 20 normolipidemic HCV infected patients as a control group. Liver enzymes and complete blood count were checked monthly and thyroid stimulating hormone was checked every 3 months. We also performed quantitative HCV-ribonucleic acid (RNA) test in 12th week of therapy, at the end of treatment and 6 months after therapy for all samples.\n\nABSTRACT.RESULTS:\nWe didn't find any significant differences in the mean of HCV-RNA numbers between statin and placebo groups in 12th week of treatment, in the end of treatment and 6 months after treatment (P > 0.05).\n\nABSTRACT.CONCLUSION:\nAtorvastatin has no effect on the mean of HCV viral load when we added it to standard treatment for hepatitis C infection. Further studies are necessary to examine the possible antiviral properties of statins and their potential role as adjuncts to standard HCV therapy.\n\nBODY.INTRODUCTION:\nHepatitis C virus (HCV) is an important human pathogen, not only because of its high prevalence and worldwide burden, but also because of the potentially serious complications of persistent HCV infection. These complications include cirrhosis, hepatocellular carcinoma, and end-stage liver disease necessitating liver transplantation. The incidence rates for all of these complications are expected to rise in the near future.[12] It is estimated that 130-170 million people have chronic HCV (about 3% of the world population).[34] More than 3,00,000 people die every year from hepatitis C related liver diseases.[5] There is no vaccination for HCV, so the treatment is very important. The standard medication of HCV (pegylated interferon (IFN) alpha plus ribavirin [RBV]) in HCV genotype I infected patients have shown 40-50% sustained virological response (SVR),[3] so we need more effective treatment method. Lipid metabolism is one of the HCV life cycle steps. HCV forms a replication complex in lipid raft membrane that is full of cholesterol and sphingolipids.[67] 3-hydroxy-3 methyl-glutaryl Coenzyme A inhibitors (statins) can reduce cholesterol level and finally can decrease HCV replication. 
Hence, statin therapy can be introduced as a complementary treatment. In vitro examinations have found that some statins, especially fluvastatin and atorvastatin can inhibit HCV replication,[8] although statins should be used with caution in advanced end-stage liver disease because of decompensation risk.[9] Recently, beneficial effect of statin use among patients with HCV-related liver disease has been suggested. In vitro studies show that high concentrations of statins disrupt HCV replication through depletion of isoprenoid geranylgeranyl pyrophosphate.[1011] Statins may thus have antiviral effects through mechanisms not related to lipid metabolism.[1213] The low-density lipoprotein (LDL) receptor and the high-density lipoprotein scavenger receptor B1 putatively facilitate HCV entry into hepatocytes. Complex host proteins are found to be closely associated with HCV nonstructural proteins. The process which links these host and HCV proteins is termed prenylation. Statin agents which block the formation of the lipid precursors for prenylation, could theoretically interfere with viral replication.[141516] Some human studies have done for assessing the effect of statins in hepatitis c treatment, but their results are different. O'Leary et al. have shown no reduction in HCV-ribonucleic acid (RNA) titers with 20 mg atorvastatin.[17] However, another study indicates non-dose-dependent discretion in HCV-RNA in %50 of patients,[18] the purpose of this study was to determine the effect of statin therapy on standard antiviral treatment response in hyperlipidemic genotype I HCV patients.\n\nBODY.MATERIALS AND METHODS.SELECTION OF PATIENT GROUPS:\nOur study was a prospective clinical trial which was done on genotype I HCV patients who had been referred to educational and therapeutic centers of Isfahan University of Medical Sciences between years 2009 and 2010. All patients were tested to confirm HCV viremia by Cobas Amplicor HCV Monitor test, version 2 (Roche Diagnostics, Mannheim, Germany). We selected our patients using simple random sampling. Subjects in the statin group were hyperlipidemic patients according to WHO definition for hyperlipidemia which is LDL cholesterol level more than 160 mg/dl, 130 mg/dl for patient with one risk factor and 100 mg/dl for patients with diabetes mellitus, 2 or more than risk factors. Patients in the placebo group were selected from HCV patients without hyperlipidemia. This study protocol was approved by Institutional Review Board and Ethics Committee of Isfahan University of Medical Sciences (research project number: 309079). Patients were included in the study if they were naïve for HCV treatment, more than 18 years old and patients who did not have any clinical sign for decompensate Cirrhosis. All cases who were affected by other genotypes of HCV and chronic liver disease such as hepatitis B virus, HIV, Wilson disease, hemochromathosis, autoimmune liver disease, primary biliary cirrhosis, secondary biliary cirrhosis, obstructive liver disease, viral infections that affect liver function, and who had the history of alcohol drinking and drugs related steatosis and hepatotoxicity were excluded from our study. Patients were excluded from the study if the level of their liver enzymes (alanine transaminase and aspartate aminotransferase) had risen >3 times during statin treatment. Patients were examined clinically before antiviral treatment. 
Fasting blood sugar, plasma lipids, liver enzymes tests and liver and abdomen sonography (Hitachi, Tokyo Japan) were done 2 times (with 1 month apart) before treatment for all of them. We also checked their blood pressure, waist size, and body mass index (BMI) according to related guideline.[19] All patients weekly received 180 μg Peg-IFN-a2a and 1000, 1200 and 1400 mg RBV for weights <75 kg, 75-100 kg, and more than 100 kg, respectively (Pegasys Co.). Statin group received 20 mg atorvastatin (Farabi Co., Tehran, Iran) nightly for 3 months and placebo was prescribed for the placebo group the same as atorvastatin in the statin group. Liver enzymes and complete blood count were checked monthly and thyroid stimulating hormone checked every 3 months. We also did quantitative HCV-RNA test to reveal patients with undetectable test in 12th week of therapy (early virological response [EVR]), after therapy (end of treatment virological response) and 6 months after therapy SVR for all samples. We analyzed our data using SPSS version 16 (Inc., Chicago, IL, USA) and considered P < 0.05 as valuable, Chi-square was applied for comparison of categorical parameters and Student's t-test for comparing the continuous parameters.\n\nBODY.RESULT:\nOur study included 40 HCV positive patients (20 hyperlipidemic and 20 normolipidemic patients). Our results showed that there are significant differences in the mean of BMI, triglyceride, LDL, and cholesterol between statin and placebo groups [Table 1]. Table 1 demographic and lipid character in statin and placebo group The evaluation of viral load before statin therapy did not show a significant difference between statin and placebo, also, we didn't find any significant differences in the mean of HCV-RNA numbers in statin and placebo groups in 12th week of Statins therapy (P > 0.05), at the end of treatment (P > 0.05) and 6 months after treatment (P > 0.05) [Table 2]. Our findings also implied that EVR in statin and placebo were 75 and 70% and SVR was 95% in both groups. ALT level was higher in statin group before treatment [Table 3]. Although both AST and ALT level rose after treatment, we did not find any difference between AST and ALT level after treatment. Table 2 Comparison between viral load before treatment and after start of treatment between statin and placebo groups Table 3 comparison between transaminases in statin and placebo group before and after treatment \n\nBODY.DISCUSSION:\nIn the current study, we didn't find any significant differences in the mean of viral load of hepatitis c in statin and placebo groups in 12th week of statin therapy, at the end of treatment and 6 months after treatment. Our findings also implied that early response to treatment (EVR) in statin and placebo were 70 and 75% and Sustain response to treatment (SVR) in statin and placebo was 95% in both groups. 
A pilot study of 31 HCV-infected veterans who were given fluvastatin 20-320 mg/day for 2-12 weeks with weekly monitoring of HCV-RNA and liver tests reported modest reductions of viral load.[20] Furthermore, a pharmacoepidemiologic study found that the use of lovastatin was associated with a 40-50% lower incidence of moderate as well as severe liver injury among patients with preexisting liver disease.[21] Even though, prior individual studies examined important aspects of the association between statin and lowering the severity if liver disease in HCV-infected patients, these studies either did not adjust for histological severity of liver disease, had generally short follow-up, or did not use placebo subjects. Clearly, more information is needed about the possible beneficial effect of statins in HCV-infected.[22] The result of other in vivo studies has been same as our findings in this study. A study which has done by O'Leary et al. (2007) on 10 samples show that atorvastatin does not inhibit HCV-RNA replication in vivo.[17] Forde et al. in a cross-sectional study (2009) confirmed that statins don't have any apparent effect on HCV-RNA replication in vivo.[23] Unlike in vivo studies the result of in vitro research has shown that statins can decrease HCV-RNA replication.[3] Ikeda et al. evaluated the effect of statins on HCV-RNA replication in OR6 cells.[24] Fluvastatin exhibited the strongest anti-HCV activity while atorvastatin and simvastatin showed moderate inhibitory effects. The combination of IFN and the statins exhibited strong inhibitory effects on HCV-RNA replication.[2526] There are some reasons for differences between in vitro and in vivo studies. As we told before various factors affect on antiviral treatment response in the human body such as obesity, insulin resistance and liver steatosis[1516] so we need to do more studies to examine the effect of statins on HCV in relation to confounder factors. On the other hand, statins concentration in cell cultures is higher than their concentration in plasma. Studies have shown that serum levels of such agents after prolonged therapy are significantly lower than the statin concentrations those were used in replicon systems.[27] In our study, response rate of patients were higher than western studies, there are some similar reports that show high response rate of combination therapy in Iranian patients, it may be due to lower viral load in comparison to western patients.[2829] In summary in our study statins did not have any significant effect on HCV virus number and virologic response. Our limitation in this study was few sample size, and further studies are necessary to examine the possible antiviral properties of statins and their potential role as adjuncts to standard HCV therapy.\n\nBODY.AUTHORS’ CONTRIBUTION:\nAll authors have contributed in designing and conducting the study. Ash, AB, SS, SSh, and Nkh collected the data and MM, MKh, MT, Ash, and BA did the analysis. All authors have assisted in preparation of the first draft of the manuscript or revising it critically for important intellectual content. All authors have read and approved the content of the manuscript and are accountable for all aspects of the work.\n\n**Question:** Compared to placebo what was the result of 20 mg atorvastatin nightly for 3 months on mean of HCV-RNA numbers 6 months after treatment ?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. 
Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
868
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Short-term effects of physiotherapy combining repetitive facilitation\nexercises and orthotic treatment in chronic post-stroke patients\n\n ABSTRACT:\n[Purpose] This study investigated the short-term effects of a combination therapy\nconsisting of repetitive facilitative exercises and orthotic treatment. [Subjects and\nMethods] The subjects were chronic post-stroke patients (n=27; 24 males and 3 females;\n59.3 ± 12.4 years old; duration after onset: 35.7 ± 28.9 months) with limited mobility and\nmotor function. Each subject received combination therapy consisting of repetitive\nfacilitative exercises for the hemiplegic lower limb and gait training with an ankle-foot\northosis for 4 weeks. The Fugl-Meyer assessment of the lower extremity, the Stroke\nImpairment Assessment Set as a measure of motor performance, the Timed Up & Go test,\nand the 10-m walk test as a measure of functional ambulation were evaluated before and\nafter the combination therapy intervention. [Results] The findings of the Fugl-Meyer\nassessment, Stroke Impairment Assessment Set, Timed Up & Go test, and 10-m walk test\nsignificantly improved after the intervention. Moreover, the results of the 10-m walk test\nat a fast speed reached the minimal detectible change threshold (0.13 m/s). [Conclusion]\nShort-term physiotherapy combining repetitive facilitative exercises and orthotic\ntreatment may be more effective than the conventional neurofacilitation therapy, to\nimprove the lower-limb motor performance and functional ambulation of chronic post-stroke\npatients.\n\nBODY.INTRODUCTION:\nThe mobility of many stroke survivors is limited, and most identify walking as a top\npriority for rehabilitation1). One way to\nmanage ambulatory difficulties is with an ankle-foot orthosis (AFO) or a foot-drop splint,\nwhich aims to stabilize the foot and ankle while weight-bearing and lift the toes while\nstepping1). In stroke rehabilitation,\nvarious approaches, including robotic assistance, strength training, and\ntask-related/virtual reality techniques, have been shown to improve motor function2). The benefits of a high intensity stroke\nrehabilitation program are well established, and although no clear guidelines exist\nregarding the best levels of intensity in practice, the need for its incorporation into a\ntherapy program is widely acknowledged2). Repetitive facilitative exercises (RFE), which combine a high repetition rate and\nneurofacilitation, are a recently developed approach to rehabilitation of stroke-related\nlimb impairment2,3,4,5). In the RFE program, therapists use muscle spindle stretching and\nskin-generated reflexes to assist the patient's efforts to move an affected joint5). Previous studies have shown that an RFE\nprogram improved lower-limb motor performance (Brunnstrom Recovery Stage, foot tapping, and\nlower-limb strength) and the 10-m walk test in patients with brain damage3). 
An AFO is an assistive device to help stroke patients with hemiplegia walk and stand. A\nproperly prescribed AFO can improve gait performance and control abnormal kinematics arising\nfrom coordination deficits6). Gait training\nwith an AFO has been also reported to improve gait speed and balance in post-stroke\npatients7, 8). Therefore, we hypothesized that short-term physiotherapy combining RFE and orthotic\ntreatment would improve both lower-extremity motor performance and functional ambulation.\nThe present study aimed to confirm the efficacy of a combination therapy consisting of RFE\nfor the hemiplegic lower limb and gait training with AFO.\n\nBODY.SUBJECTS AND METHODS:\nThe subjects consisted of 27 inpatients (24 males and 3 females) diagnosed with cerebral\nhemorrhage (15 patients) or cerebral infarction (12 patients). The patients' average age was\n59.3 ± 12.4 years (33–73 years), the duration after onset was 35.7 ± 28.9 months (5–115\nmonths), and Brunnstrom Stage medians and quartiles of the hemiplegic lower limb were Stage\n4.0 and 4–4.5 (stage 3–6), respectively. Eleven patients had right hemiplegia, and 16\npatients had left hemiplegia. Among the 27 study participants, one used a rigid AFO with a\nmedial stainless steel upright9), two used\na posterior spring leaf10), and 24 used a\nhinged AFO10). The inclusion criteria were as follows: age, 30–80 years; hemiplegia of the lower limb\n(Brunnstrom stage 3–6); ability to walk without assistance using a T-cane and/or AFO;\ndiagnosis of hemiplegia due to stroke; morbidity period, 5 months or more; ability to\nunderstand the purpose of the study and follow instructions, and agreement to participate in\nthis study. The exclusion criteria were as follows: onset of stroke, <4 weeks previously;\nabnormal gait prior to the onset of stroke (such as joint disability or peripheral\nneuropathy); any medical condition that limited the study design (such as severe\ncardiopulmonary disease or severe sensory disturbance); severe aphasia and dementia that\nmade it impossible to follow verbal instructions; and lesions on both sides of the cerebral\nhemisphere. The procedures complied with the 1975 Declaration of Helsinki, as revised in 2013. The\nstudy was conducted after obtaining approval from the ethics committee of the Tarumizu Chuo\nHospital, and all participants provided written informed consent. The subjects were enrolled in a before-after study. Intervention was combination therapy\nconsisting of RFE for the hemiplegic lower limb and gait training with AFO. According to a\nprevious study, all subjects underwent an RFE program consisting of 7 specific exercise\npatterns3), which were used to elicit\nmovement of the hip, knee, and ankle in a manner designed to minimize synergistic movements.\nThis technique involved the use of rapid passive stretching of the muscles in conjunction\nwith tapping and rubbing the skin to assist in generating contractions of the targeted\nmuscles5). Exercises were performed as\ntwo sets of 50 repetitions with a 1–2 minute rest period in between sets5). In addition, all patients underwent gait\ntraining with a self-made AFO. This intervention was performed 40 minutes/day, 6 days/week for 4 weeks. Outcomes were\nmeasured before intervention and after 4 weeks of intervention. The outcome measures used to\nassess motor performance were the Fugl-Meyer Assessment of the lower extremity (FMA-LE)11) and the Stroke Impairment Assessment Set\n(SIAS)12). 
Functional ambulation was\nassessed with a Timed Up & Go Test (TUG)13) and a 10-m walk test (10MWT). To determine whether physiotherapy that combined RFE and orthotic treatment improved the\nlower-limb motor performance and functional ambulation, the Wilcoxon Signed-Rank Test was\nperformed, because the Shapiro-Wilk's test showed that the data were not normally\ndistributed. The analysis was performed with the statistical analysis program SPSS\nStatistics for Windows version 22.0 (IBM Corporation, Armonk, NY, USA) with a significance\nlevel of α=0.05.\n\nBODY.RESULTS:\nTable 1. Lower-limb motor performance and functional ambulation at the baseline and after the combining training (mean ± SD).\nFMA-LE: baseline 22.96 ± 4.07, after the training 25.85 ± 4.03, difference 2.89 ± 2.99**\nSIAS: baseline 46.59 ± 8.34, after the training 53.63 ± 7.63, difference 7.04 ± 4.60**\nTUG (sec): baseline 17.35 ± 5.57, after the training 14.02 ± 4.46, difference −3.33 ± 3.52**\n10MWT CGS (m/sec): baseline 0.68 ± 0.22, after the training 0.81 ± 0.24, difference 0.12 ± 0.09**\n10MWT FGS (m/sec): baseline 0.80 ± 0.28, after the training 0.96 ± 0.31, difference 0.16 ± 0.16**\n**Significant difference p<0.01. SD: standard deviation; FMA-LE: the Fugl-Meyer Assessment of the lower extremity; SIAS: Stroke Impairment Assessment Set; TUG: Timed \"Up & Go\" test; 10MWT: 10-m walk test; CGS: comfortable gait speed; FGS: fast gait speed.\nTable 1 shows the changes in FMA-LE, SIAS, TUG, and 10MWT (comfortable gait speed and fast gait speed). In terms of lower-limb motor performance, FMA-LE increased significantly\nfrom 22.96 ± 4.07 to 25.85 ± 4.03 (p<0.01), and SIAS increased significantly from 46.59 ±\n8.34 to 53.63 ± 7.63 (p<0.01). In terms of functional ambulation, TUG decreased\nsignificantly from 17.35 ± 5.57 seconds to 14.02 ± 4.46 seconds (p<0.01), comfortable\ngait speed increased significantly from 0.68 ± 0.22 (m/sec) to 0.81 ± 0.24 (m/sec)\n(p<0.01), and fast gait speed increased significantly from 0.80 ± 0.28 (m/sec) to 0.96 ±\n0.31 (m/sec) (p<0.01).\n\nBODY.DISCUSSION:\nIn this study, short-term combination therapy consisting of RFE and orthotic treatment was\nconducted to improve the lower-limb motor performance and functional ambulation of chronic\npost-stroke patients. There were statistically significant improvements in FMA-LE, SIAS,\nTUG, and 10MWT after the intervention. Furthermore, the results of the 10MWT at a fast gait\nspeed reached the minimal detectible change threshold (0.13 m/s)14). Recently, some systemic reviews of AFO have reported that gait training with AFO can\nimprove walking ability and balance in people with stroke1, 15). However, few studies\nhave focused on the correlation between AFO and motor performance changes of the lower limb\nin post-stroke patients16,17,18). Changes seen in\nthis study were more marked in the lower-limb motor performance, with a substantial\nclinically meaningful change in fast walking speed (0.13 m/s) being achieved by all\nparticipants who completed the study protocol. The results of this study show that\nshort-term combination therapy consisting of RFE and gait training with AFO may enhance\nlower-limb motor function, thereby improving walking ability in patients with chronic\nstroke, which is beneficial for comprehensive stroke treatment. Several studies suggested that RFE might promote functional recovery of hemiplegia and\nactivities of daily living to a greater extent than conventional neurofacilitation therapy,\nusing a randomized controlled design2,3,4,5, 19, 20). 
Especially, RFE with other interventions\n(i.e., neuromuscular electrical stimulation, direct application of vibratory stimulation\nrepetitive transcranial magnetic stimulation, and pharmacological treatments) may be more\neffective than RFE only for the recovery of limb motor performance. In the present study,\npatients who received RFE with gait training with AFO showed significant functional and\nambulatory improvements. This study had some limitations because it involved a small number of subjects, and its\nintervention period of four weeks was short. In addition, this was not a randomized\ncontrolled trial, and the efficacy of RFE with orthotic treatment could not be compared;\ntherefore, it is difficult to generalize its results. Further, this study could not exclude\nobserver bias and subject bias because the same staff implemented assessment and\ntraining. In conclusion, the differences in FMA-LE, SIAS, TUG, and 10MWT demonstrated that\napplication of a short-term combination therapy consisting of RFE and orthotic treatment has\nbeneficial therapeutic effects on improving functional ambulation and motor performance of\nthe lower limb in chronic post-stroke patients.\n\nBODY.DISCUSSION.CONFLICT OF INTEREST:\nThe authors have no conflicts of interest to declare.\n\n**Question:** Compared to before repetitive facilitative exercises and orthotic treatment. what was the result of after 4 weeks of repetitive facilitative exercises and orthotic treatment. on FMA-LE: the Fugl-Meyer Assessment of the lower extremity; SIAS: Stroke Impairment Assessment Set;?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increase
|
869
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Short-term effects of physiotherapy combining repetitive facilitation\nexercises and orthotic treatment in chronic post-stroke patients\n\n ABSTRACT:\n[Purpose] This study investigated the short-term effects of a combination therapy\nconsisting of repetitive facilitative exercises and orthotic treatment. [Subjects and\nMethods] The subjects were chronic post-stroke patients (n=27; 24 males and 3 females;\n59.3 ± 12.4 years old; duration after onset: 35.7 ± 28.9 months) with limited mobility and\nmotor function. Each subject received combination therapy consisting of repetitive\nfacilitative exercises for the hemiplegic lower limb and gait training with an ankle-foot\northosis for 4 weeks. The Fugl-Meyer assessment of the lower extremity, the Stroke\nImpairment Assessment Set as a measure of motor performance, the Timed Up & Go test,\nand the 10-m walk test as a measure of functional ambulation were evaluated before and\nafter the combination therapy intervention. [Results] The findings of the Fugl-Meyer\nassessment, Stroke Impairment Assessment Set, Timed Up & Go test, and 10-m walk test\nsignificantly improved after the intervention. Moreover, the results of the 10-m walk test\nat a fast speed reached the minimal detectible change threshold (0.13 m/s). [Conclusion]\nShort-term physiotherapy combining repetitive facilitative exercises and orthotic\ntreatment may be more effective than the conventional neurofacilitation therapy, to\nimprove the lower-limb motor performance and functional ambulation of chronic post-stroke\npatients.\n\nBODY.INTRODUCTION:\nThe mobility of many stroke survivors is limited, and most identify walking as a top\npriority for rehabilitation1). One way to\nmanage ambulatory difficulties is with an ankle-foot orthosis (AFO) or a foot-drop splint,\nwhich aims to stabilize the foot and ankle while weight-bearing and lift the toes while\nstepping1). In stroke rehabilitation,\nvarious approaches, including robotic assistance, strength training, and\ntask-related/virtual reality techniques, have been shown to improve motor function2). The benefits of a high intensity stroke\nrehabilitation program are well established, and although no clear guidelines exist\nregarding the best levels of intensity in practice, the need for its incorporation into a\ntherapy program is widely acknowledged2). Repetitive facilitative exercises (RFE), which combine a high repetition rate and\nneurofacilitation, are a recently developed approach to rehabilitation of stroke-related\nlimb impairment2,3,4,5). In the RFE program, therapists use muscle spindle stretching and\nskin-generated reflexes to assist the patient's efforts to move an affected joint5). Previous studies have shown that an RFE\nprogram improved lower-limb motor performance (Brunnstrom Recovery Stage, foot tapping, and\nlower-limb strength) and the 10-m walk test in patients with brain damage3). 
An AFO is an assistive device to help stroke patients with hemiplegia walk and stand. A\nproperly prescribed AFO can improve gait performance and control abnormal kinematics arising\nfrom coordination deficits6). Gait training\nwith an AFO has been also reported to improve gait speed and balance in post-stroke\npatients7, 8). Therefore, we hypothesized that short-term physiotherapy combining RFE and orthotic\ntreatment would improve both lower-extremity motor performance and functional ambulation.\nThe present study aimed to confirm the efficacy of a combination therapy consisting of RFE\nfor the hemiplegic lower limb and gait training with AFO.\n\nBODY.SUBJECTS AND METHODS:\nThe subjects consisted of 27 inpatients (24 males and 3 females) diagnosed with cerebral\nhemorrhage (15 patients) or cerebral infarction (12 patients). The patients' average age was\n59.3 ± 12.4 years (33–73 years), the duration after onset was 35.7 ± 28.9 months (5–115\nmonths), and Brunnstrom Stage medians and quartiles of the hemiplegic lower limb were Stage\n4.0 and 4–4.5 (stage 3–6), respectively. Eleven patients had right hemiplegia, and 16\npatients had left hemiplegia. Among the 27 study participants, one used a rigid AFO with a\nmedial stainless steel upright9), two used\na posterior spring leaf10), and 24 used a\nhinged AFO10). The inclusion criteria were as follows: age, 30–80 years; hemiplegia of the lower limb\n(Brunnstrom stage 3–6); ability to walk without assistance using a T-cane and/or AFO;\ndiagnosis of hemiplegia due to stroke; morbidity period, 5 months or more; ability to\nunderstand the purpose of the study and follow instructions, and agreement to participate in\nthis study. The exclusion criteria were as follows: onset of stroke, <4 weeks previously;\nabnormal gait prior to the onset of stroke (such as joint disability or peripheral\nneuropathy); any medical condition that limited the study design (such as severe\ncardiopulmonary disease or severe sensory disturbance); severe aphasia and dementia that\nmade it impossible to follow verbal instructions; and lesions on both sides of the cerebral\nhemisphere. The procedures complied with the 1975 Declaration of Helsinki, as revised in 2013. The\nstudy was conducted after obtaining approval from the ethics committee of the Tarumizu Chuo\nHospital, and all participants provided written informed consent. The subjects were enrolled in a before-after study. Intervention was combination therapy\nconsisting of RFE for the hemiplegic lower limb and gait training with AFO. According to a\nprevious study, all subjects underwent an RFE program consisting of 7 specific exercise\npatterns3), which were used to elicit\nmovement of the hip, knee, and ankle in a manner designed to minimize synergistic movements.\nThis technique involved the use of rapid passive stretching of the muscles in conjunction\nwith tapping and rubbing the skin to assist in generating contractions of the targeted\nmuscles5). Exercises were performed as\ntwo sets of 50 repetitions with a 1–2 minute rest period in between sets5). In addition, all patients underwent gait\ntraining with a self-made AFO. This intervention was performed 40 minutes/day, 6 days/week for 4 weeks. Outcomes were\nmeasured before intervention and after 4 weeks of intervention. The outcome measures used to\nassess motor performance were the Fugl-Meyer Assessment of the lower extremity (FMA-LE)11) and the Stroke Impairment Assessment Set\n(SIAS)12). 
Functional ambulation was\nassessed with a Timed Up & Go Test (TUG)13) and a 10-m walk test (10MWT). To determine whether physiotherapy that combined RFE and orthotic treatment improved the\nlower-limb motor performance and functional ambulation, the Wilcoxon Signed-Rank Test was\nperformed, because the Shapiro-Wilk's test showed that the data were not normally\ndistributed. The analysis was performed with the statistical analysis program SPSS\nStatistics for Windows version 22.0 (IBM Corporation, Armonk, NY, USA) with a significance\nlevel of α=0.05.\n\nBODY.RESULTS:\nTable 1. Lower-limb motor performance and functional ambulation at the baseline and after the combining training (mean ± SD).\nFMA-LE: baseline 22.96 ± 4.07, after the training 25.85 ± 4.03, difference 2.89 ± 2.99**\nSIAS: baseline 46.59 ± 8.34, after the training 53.63 ± 7.63, difference 7.04 ± 4.60**\nTUG (sec): baseline 17.35 ± 5.57, after the training 14.02 ± 4.46, difference −3.33 ± 3.52**\n10MWT CGS (m/sec): baseline 0.68 ± 0.22, after the training 0.81 ± 0.24, difference 0.12 ± 0.09**\n10MWT FGS (m/sec): baseline 0.80 ± 0.28, after the training 0.96 ± 0.31, difference 0.16 ± 0.16**\n**Significant difference p<0.01. SD: standard deviation; FMA-LE: the Fugl-Meyer Assessment of the lower extremity; SIAS: Stroke Impairment Assessment Set; TUG: Timed \"Up & Go\" test; 10MWT: 10-m walk test; CGS: comfortable gait speed; FGS: fast gait speed.\nTable 1 shows the changes in FMA-LE, SIAS, TUG, and 10MWT (comfortable gait speed and fast gait speed). In terms of lower-limb motor performance, FMA-LE increased significantly\nfrom 22.96 ± 4.07 to 25.85 ± 4.03 (p<0.01), and SIAS increased significantly from 46.59 ±\n8.34 to 53.63 ± 7.63 (p<0.01). In terms of functional ambulation, TUG decreased\nsignificantly from 17.35 ± 5.57 seconds to 14.02 ± 4.46 seconds (p<0.01), comfortable\ngait speed increased significantly from 0.68 ± 0.22 (m/sec) to 0.81 ± 0.24 (m/sec)\n(p<0.01), and fast gait speed increased significantly from 0.80 ± 0.28 (m/sec) to 0.96 ±\n0.31 (m/sec) (p<0.01).\n\nBODY.DISCUSSION:\nIn this study, short-term combination therapy consisting of RFE and orthotic treatment was\nconducted to improve the lower-limb motor performance and functional ambulation of chronic\npost-stroke patients. There were statistically significant improvements in FMA-LE, SIAS,\nTUG, and 10MWT after the intervention. Furthermore, the results of the 10MWT at a fast gait\nspeed reached the minimal detectible change threshold (0.13 m/s)14). Recently, some systemic reviews of AFO have reported that gait training with AFO can\nimprove walking ability and balance in people with stroke1, 15). However, few studies\nhave focused on the correlation between AFO and motor performance changes of the lower limb\nin post-stroke patients16,17,18). Changes seen in\nthis study were more marked in the lower-limb motor performance, with a substantial\nclinically meaningful change in fast walking speed (0.13 m/s) being achieved by all\nparticipants who completed the study protocol. The results of this study show that\nshort-term combination therapy consisting of RFE and gait training with AFO may enhance\nlower-limb motor function, thereby improving walking ability in patients with chronic\nstroke, which is beneficial for comprehensive stroke treatment. Several studies suggested that RFE might promote functional recovery of hemiplegia and\nactivities of daily living to a greater extent than conventional neurofacilitation therapy,\nusing a randomized controlled design2,3,4,5, 19, 20). 
Especially, RFE with other interventions\n(i.e., neuromuscular electrical stimulation, direct application of vibratory stimulation\nrepetitive transcranial magnetic stimulation, and pharmacological treatments) may be more\neffective than RFE only for the recovery of limb motor performance. In the present study,\npatients who received RFE with gait training with AFO showed significant functional and\nambulatory improvements. This study had some limitations because it involved a small number of subjects, and its\nintervention period of four weeks was short. In addition, this was not a randomized\ncontrolled trial, and the efficacy of RFE with orthotic treatment could not be compared;\ntherefore, it is difficult to generalize its results. Further, this study could not exclude\nobserver bias and subject bias because the same staff implemented assessment and\ntraining. In conclusion, the differences in FMA-LE, SIAS, TUG, and 10MWT demonstrated that\napplication of a short-term combination therapy consisting of RFE and orthotic treatment has\nbeneficial therapeutic effects on improving functional ambulation and motor performance of\nthe lower limb in chronic post-stroke patients.\n\nBODY.DISCUSSION.CONFLICT OF INTEREST:\nThe authors have no conflicts of interest to declare.\n\n**Question:** Compared to before repetitive facilitative exercises and orthotic treatment. what was the result of after 4 weeks of repetitive facilitative exercises and orthotic treatment. on TUG: Timed \"Up & Go\" test;?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decrease
|
871
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Expanding access to parasite-based malaria diagnosis through retail drug shops in Tanzania: evidence from a randomized trial and implications for treatment\n\n ABSTRACT.BACKGROUND:\nTanzania has seen a reduction in the fraction of fevers caused by malaria, likely due in part to scale-up of control measures. While national guidelines require parasite-based diagnosis prior to treatment, it is estimated that more than half of suspected malaria treatment-seeking in Tanzania initiates in the private retail sector, where diagnosis by malaria rapid diagnostic test (RDT) or microscopy is illegal. This pilot study investigated whether the introduction of RDTs into Accredited Drug Dispensing Outlets (ADDOs) under realistic market conditions would improve case management practices.\n\nABSTRACT.METHODS:\nDispensers from ADDOs in two intervention districts in Tanzania were trained to stock and perform RDTs and monitored quarterly. Each district was assigned a different recommended retail price to evaluate the need for a subsidy. Malaria RDT and artemisinin-based combination therapy (ACT) uptake and availability were measured pre-intervention and 1 year post-intervention through structured surveys of ADDO owners and exiting customers in both intervention districts and one contiguous control district. Descriptive analysis and logistic regression were used to compare the three districts and identify predictive variables for testing.\n\nABSTRACT.RESULTS AND DISCUSSION:\nA total of 310 dispensers from 262 ADDOs were trained to stock and perform RDTs. RDT availability in intervention ADDOs increased from 1% (n = 172) to 73% (n = 163) during the study; ACT medicines were available in 75% of 260 pre-intervention and 68% of 254 post-intervention ADDOs. Pre-treatment testing performed within the ADDO increased from 0 to 65% of suspected malaria patients who visited a shop (95% CI 60.8–69.6%) with no difference between intervention districts. Overall parasite-based diagnosis increased from 19 to 74% in intervention districts and from 3 to 18% in the control district. Prior knowledge of RDT availability (aOR = 1.9, p = 0.03) and RDT experience (aOR = 1.9, p = 0.01) were predictors for testing. Adherence data indicated that 75% of malaria positives received ACT, while 3% of negatives received ACT.\n\nABSTRACT.CONCLUSIONS:\nTrained and supervised ADDO dispensers in rural Tanzania performed and sold RDTs under real market conditions to two-thirds of suspected malaria patients during this one-year pilot. These results support the hypothesis that introducing RDTs into regulated private retail sector settings can improve malaria testing and treatment practices without an RDT subsidy. 
\nTrial registration ISRCTN ISRCTN14115509\n\nBODY.BACKGROUND:\nMalaria prevalence among children under 5 years old in Tanzania declined from 18% in 2007 to 9% in 2011, whereas 2-week fever prevalence remained approximately 20% over the same time period, from 19 to 20% [1, 2]. This reduction in the proportion of fevers caused by malaria is likely due at least in part to Tanzania's recent scale-up of prevention and treatment measures [3–5]. Presumptive treatment of fevers with artemisinin-based combination therapy (ACT) thus will increasingly result in incorrect malaria diagnoses, prescription and wastage of inappropriate medications, and subsequent delays in obtaining effective treatment for the true cause of illness [6–9]. The Tanzania National Malaria Control Programme (NMCP) case management guidelines require suspected malaria cases to receive parasite-based diagnosis by microscopy or rapid diagnostic test (RDT) prior to treatment with anti-malarial drugs [10, 11]. In practice, however, up to 54% of those seeking treatment for suspected malaria in Tanzania first visit the private retail sector, where malaria diagnostics are currently prohibited from being sold and administered [12]. As a result, parasite-based testing is not received by many suspected malaria patients. The Tanzanian private retail sector includes both unregistered outlets and a network of more than 6000 registered shops or Accredited Drug Dispensing Outlets (ADDOs) (duka la dawa muhimu or DLDM in Kiswahili) regulated by the Pharmacy Council. Unlike unregistered outlets, ADDOs are permitted to stock and sell both over-the-counter medicines and certain classes of prescription medications, including ACT [13–15]. To obtain a Pharmacy Council permit each year, ADDOs owners must meet certain conditions related to the premises, training and certification of a dispenser who may or may not be the owner, and the products stocked. Like all other private retail outlets, RDTs may not be sold or performed at ADDOs, however. Malaria RDTs have been safely administered by non-medical personnel in several previous settings [16–18]. The introduction of RDTs into ADDOs has the potential to improve fever case management by increasing availability, access, and use of parasite-based diagnosis. It is unclear, however, whether customers will be willing to pay the extra cost for diagnosis, whether ADDO owners would encourage RDT use, whether RDTs might require subsidization to encourage uptake in the private sector, and whether treatment choices would adhere to test results. To investigate these questions, this pilot evaluated the operational feasibility of selling RDTs through ADDOs and measured changes in suspected malaria patient case management that occurred as a result of making RDTs available at two different prices.\n\nBODY.METHODS.STUDY AREA AND POPULATION:\nThis pilot was conceived and designed in partnership with the NMCP and Pharmacy Council, with a primary objective of informing national policymaking on legalizing the stocking and performance of RDTs through ADDOs. The study was conducted in three districts in Morogoro Region: Kilombero, Kilosa, and Mvomero. These three districts were selected due to their high density of ADDOs, moderate Plasmodium falciparum prevalence compared with national data (13% in Morogoro Region [2]), and convenient proximity to Dar es Salaam. The three districts are mostly rural, with a total estimated population of 1.7 million people in 2012 [19]. 
Peak malaria incidence corresponds to the two rainy seasons, one between March and May and the other between September and December. Kilombero and Kilosa were assigned via random number generator as the two intervention areas where ADDO dispensers were trained to stock, sell, and administer RDTs, and Mvomero served as the control area (Fig. 1).Fig. 1Map of the study area\n\n\nBODY.METHODS.INTERVENTION DESIGN:\nAll licensed ADDO dispensers currently working in an ADDO in the two intervention areas were invited to participate in the study through the District Malaria Focal Person, who is responsible for overseeing local malaria-related activities under the leadership of the District Medical Officer. In April–May 2013, six two-day trainings were held in each intervention district, each led by a national-level trainer and the district malaria focal person. Trainings covered signs and symptoms of uncomplicated and severe malaria, stocking, use, and disposal of RDTs, and appropriate case management based on RDT results. Dispensers were instructed to prescribe Tanzania's first-line treatment, artemether + lumefantrine (commonly referred to as ALu in Tanzania), to test-positive customers based on an ALu dosing reference chart provided to the dispensers. Dispensers were trained to refer customers with signs and symptoms of severe illness, suspected malaria patients who tested negative for malaria, and suspected malaria patients whose illness did not improve within 48 h to the nearest public health facility along with the results of their malaria test. Upon passing both practical and written evaluations, certified RDT dispensers were provided with an ID badge granting permission to perform RDTs in their ADDO, a sharps box, an RDT performance and case management job aid, and a weather-proof storefront sign advertising that malaria testing was available. Permission was also granted for the ADDO dispenser to prescribe an ALu to all patients who tested positive for malaria. To mimic real market conditions, existing supply chains were utilized: ADDOs in the two districts were instructed to purchase ParaHIT® Ag Pf RDTs from seven wholesalers. The wholesalers, in turn, were instructed to purchase RDTs from an importer in Dar es Salaam, who purchased the RDTs directly from the manufacturer for a fixed price pre-negotiated by the research team. Importers, wholesalers and ADDOs all agreed to fixed mark ups. ADDOs in Kilosa were asked to sell RDTs for a recommended retail price (RRP) of 1100 Tanzanian Shillings (USD $0.67 in May 2013) each, based on willingness-to-pay responses from pre-intervention exit interviews, analysis of markups in analogous commodities, and price negotiation. In Kilombero, ADDOs were asked to sell RDTs for 50% less or 500 Tanzanian Shillings (USD $0.32 in May 2013) [20]. To enable the ADDOs in Kilombero to charge the lower price, the research team asked the importer to sell RDTs to wholesalers in Kilombero at a 50% discount. On a monthly basis, the importer received financial compensation from the research team equal to the total discount provided to wholesalers located in Kilombero. The research team monitored stock levels at the importer level to prevent stock outs. Gloves were included in the RDT boxes. A unique blue-and-white \"mRDT\" checkmark logo was placed on ParaHIT® boxes, ID badges, job aids, and storefront signs in order to build brand recognition. Trained study staff provided supportive supervision during quarterly monitoring visits. 
At each visit, the certified dispenser at each participating ADDO was observed performing an RDT on a patient and evaluated according to a 17-point checklist adapted from the WHO \"Checklist for direct observation of health workers performing Rapid Diagnostic Tests (RDT) for malaria\" [21]. The supervisors also reported on stocking safety, shop hygiene, and waste disposal practices [22].\n\nBODY.METHODS.STUDY DESIGN:\nThe impact of the intervention was assessed by comparing the change in availability and use of RDTs at ADDOs in the intervention districts with those in the control district, Mvomero, where ADDOs were not given access to the RDTs, training, or supervision.\n\nBODY.METHODS.STUDY DESIGN.OUTLET SURVEYS:\nRDT and ACT availability were measured through two cross-sectional ADDO surveys, one conducted prior to the dispenser training in March 2013 and the second a year after the 2013 training, in May 2014. All ADDOs were eligible for selection for the pre-intervention survey, while only ADDOs with certified dispensers were eligible to participate in the post-intervention survey of the intervention districts. Ninety-one ADDOs per district were selected using random number generation based on sample-size calculations to ensure 80% power. The sample size is assumed to be sufficient to detect a five percentage-point difference in ADDO RDT availability. A Kiswahili or English-version structured questionnaire was used during the face-to-face interviews with ADDO dispensers depending on the language preference of the respondent. The primary outcome was RDT availability, defined as the proportion of ADDOs with RDTs in stock on the day of the survey and a trained, certified dispenser present to administer the test. A secondary outcome was ACT availability, defined as the proportion of ADDOs that reported having ACT medicines in stock for the 30 days prior to the survey.\n\nBODY.METHODS.STUDY DESIGN.CUSTOMER EXIT INTERVIEWS:\nFace-to-face exit interviews were conducted prior to the dispenser training (March 2013) and a year after the intervention began (May 2014) using a structured questionnaire in Kiswahili. Customers eligible for the exit interview were at least 18 years old and either seeking treatment in the ADDO for fever or suspected malaria or attempting to purchase an anti-malarial for themselves or someone else (the \"patient\"). A sample size of 400 eligible customers per district was estimated based on two-tailed sample size calculations designed to detect a 5%-point difference in RDT availability with 80% power. Pre-intervention, 1–3 customers were interviewed at each sampled ADDO. The sample size was halved for the pre-intervention survey based on very low predicted availability of RDTs and a primary goal of comparing similarity between districts. Post-intervention, enumerators attempted to survey all eligible customers during one full day at the ADDO in order to self-weight traffic variation between shops. If the customer and the patient were not the same person, demographic data were collected on both the customer and the patient, and illness data were collected about the patient. 
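To make the sample-size reasoning above concrete, here is a minimal Python sketch (using statsmodels) of a two-proportion power calculation of the kind described. The inputs are assumptions chosen for illustration — a 1% baseline availability (loosely taken from the pre-intervention figure), a 5-percentage-point detectable difference, two-sided alpha of 0.05, and 80% power — and this is not the authors' actual calculation, although with these assumed inputs it lands near the roughly 91 ADDOs per district reported.

```python
# Illustrative two-proportion sample-size sketch (not the study's own calculation).
# Assumed inputs: 1% baseline RDT availability, detectable difference of
# 5 percentage points, two-sided alpha = 0.05, and 80% power.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

p_baseline = 0.01      # assumed pre-intervention availability
p_alternative = 0.06   # baseline + 5 percentage points

# Cohen's h effect size for two proportions
effect_size = proportion_effectsize(p_alternative, p_baseline)

# Solve for the per-group sample size
n_per_group = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,
    power=0.80,
    ratio=1.0,
    alternative="two-sided",
)
print(f"Approximate ADDOs needed per district: {n_per_group:.0f}")
```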
Customers were asked whether patients (either themselves or others) had received diagnostic testing at the ADDO, had been previously tested elsewhere, how much they had paid for RDTs at the ADDO, and whether they had purchased anti-malarial treatment.\n\nBODY.METHODS.STUDY DESIGN.STATISTICAL ANALYSIS:\nChi square tests were used to compare differences before and after intervention and between intervention and control districts in (a) the proportion of patients seeking treatment at ADDOs who received a parasite-based diagnostic test, (b) the median price paid for an RDT in an ADDO, (c) the proportion of patients who received a parasite-based diagnosis, (d) adherence to test results in terms of receiving ACT when testing positive and not receiving one when testing negative, and (e) the proportion of test-negative patients who received an antibiotic. Logistic regression models were then used to identify factors associated with receiving an RDT in an ADDO or receiving a parasite-based diagnosis anywhere, with shop included as a random effect to account for repeated sampling of visits to the same ADDOs. Tested covariates included study district and survey, demographic variables (customer and patient gender, age, completed education, and wealth quintile) as well as RDT knowledge and practice indicators regarding previous experience and attitudes about testing. Wealth quintiles were derived from a wealth index based on a set of household asset questions estimated using principal components analysis using similar methodology to Demographic and Health Surveys [23]. Covariates with p < 0.2 in bivariable models were included in multivariable models. Logistic regression models were also used to compare odds of treatment outcomes controlling for study district, survey, parasite-based diagnosis, and test result.\n\nBODY.RESULTS:\nNinety-five percent of ADDO dispensers passed the 2-day training. In total, 310 dispensers from 262 ADDOs, 164 (147 ADDOs) from Kilombero in the lower-priced or \"subsidized\" district and 146 (115 ADDOs) from Kilosa the higher priced or \"unsubsidized\" district, were certified and allowed to sell and perform RDTs. The number of outlet surveys and exit interviews completed are presented in Table 1.Table 1Total shop and exit interviews conducted and included in analysisDistrictPre-intervention (March 2013)Post-intervention (May 2014)ADDOs surveyedADDOs with ACT in Stocka\nTotal Exit InterviewsInterviews where patient present and ACT in Stockb\nADDOs surveyedADDOs with ACT in Stocka\nTotal Exit InterviewsInterviews where patient present and ACT in Stockb\nSubsidized (Kilombero)875991547652266217Unsubsidized (Kilosa)856682468763330247Control (Mvomero)886986599157359244Total260194259159254172955708\na'ACT in stock' defined as no stock-outs in the past 30 days\nbOnly customer interviews from shops with ACT medicines in stock and the patient present were included in analysis (i.e., the patient had the opportunity to be tested and purchase an ACT medicine)\n\n\nBODY.RESULTS.DIAGNOSTIC AND TREATMENT AVAILABILITY IN ADDOS:\nPre-intervention outlet surveys found that RDTs were not available in any surveyed ADDOs. Post-intervention, 73% of ADDOs in intervention districts had RDTs in stock and an RDT-certified dispenser present. The proportion of ADDOs with a certified dispenser and RDTs available was higher in the \"unsubsidized\" district Kilosa (85%, 95% CI 78–92%) than the \"subsidized\" district Kilombero (64%, 95% CI 55–72%, p < 0.001). 
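As a rough illustration of the regression strategy described in the statistical analysis section (bivariable screening at p < 0.2 followed by a multivariable model), the sketch below uses simulated data and plain logistic regression via statsmodels. All column names, effect sizes, and sample counts are invented, and the study's actual models additionally included the ADDO (shop) as a random intercept, which this simplified sketch omits.

```python
# Hypothetical sketch of the covariate-screening and logistic-regression step.
# Data and variable names are simulated; the study also modelled shop as a
# random effect, which is not shown here.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400  # invented number of exit-interview records

# Simulated binary covariates (names are illustrative, not the survey's variables)
heard_of_rdt = rng.binomial(1, 0.5, n)
taken_rdt_before = rng.binomial(1, 0.4, n)
sought_care_elsewhere = rng.binomial(1, 0.3, n)
knew_testing_available = rng.binomial(1, 0.4, n)

# Simulated outcome: probability of purchasing an RDT, with invented effects
logit_p = (-0.8 + 0.6 * heard_of_rdt + 0.7 * taken_rdt_before
           - 1.0 * sought_care_elsewhere + 0.6 * knew_testing_available)
received_rdt = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

df = pd.DataFrame({
    "received_rdt": received_rdt,
    "heard_of_rdt": heard_of_rdt,
    "taken_rdt_before": taken_rdt_before,
    "sought_care_elsewhere": sought_care_elsewhere,
    "knew_testing_available": knew_testing_available,
})

candidates = ["heard_of_rdt", "taken_rdt_before",
              "sought_care_elsewhere", "knew_testing_available"]

# Step 1: bivariable screening -- retain covariates with p < 0.2
kept = [v for v in candidates
        if smf.logit(f"received_rdt ~ {v}", data=df).fit(disp=False).pvalues[v] < 0.2]

# Step 2: multivariable logistic model on the retained covariates
model = smf.logit("received_rdt ~ " + " + ".join(kept), data=df).fit(disp=False)
print(model.summary())
```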
Across the study districts, the proportion of ADDOs with ACTs available was 75% pre-intervention and 68% post-intervention. ACT availability did not differ significantly between districts or pre- or post-intervention (p > 0.05 for all).\n\nBODY.RESULTS.CUSTOMER AND PATIENT CHARACTERISTICS:\nA total of 1214 customers seeking treatment for suspected malaria, either for themselves or someone else, were interviewed: 259 pre-intervention and 955 post-intervention (Table 1). Demographic characteristics of customers were generally similar between districts during each survey (Table 2). The proportion of exit interviews in which the patient was present at the ADDO was significantly higher in intervention districts post-intervention. ADDOs were the first place treatment was sought for 75% of patient visits across surveys and districts (Table 3).Table 2Characteristics of anti-malarial customersPre-interventionPost-interventionSubsidized (n = 91)Unsubsidized (n = 82)Control (n = 86)\nχ\n2 p valueSubsidized (n = 266)Unsubsidized (n = 330)Control (n = 359)\nχ\n2 p valueCustomer age (median, range)30 (18–67)34 (18–68)33 (18–83)30 (6–79)32 (12–92)30 (14–82)Male (%, 95% CI)44.0 (33.7–54.3)47.6 (36.6–58.5)52.3 (41.7–63.0)0.53647.4 (41.3–53.4)49.4 (44.0–54.8)51.5 (46.3–56.7)0.585Completed > Primary school (%, 95% CI)15.4 (7.9–22.9)24.4 (15.0–33.8)20.9 (12.2–29.6)0.32721.1 (16.1–26.0)13.9 (10.2–17.7)21.2 (16.9–25.4)0.025Wealth quintile (%, 95% CI) Lowest14.3 (7.0–21.5)30.5 (20.4–40.6)17.4 (9.3–25.5)0.33717.7 (13.1–22.3)18.2 (14.1–22.4)24.8 (20.3–29.3)0.248 Lower20.9 (12.4–29.3)18.3 (9.8–26.8)20.9 (12.2–29.6)21.8 (16.8–26.8)21.9 (17.4–26.4)18.1 (14.1–22.1) Middle23.1 (14.3–31.8)13.4 (6.0–20.9)22.1 (13.2–31.0)22.9 (17.9–28.0)18.2 (14.1–22.4)20.6 (16.4–24.8) Higher20.9 (12.4–29.3) 20.7 (11.9–29.6)20.9 (12.2–29.6) 20.3 (15.5–25.1)21.3 (16.8–25.7)17.5 (13.6–21.5) Highest20.9 (12.4–29.3)17.1 (8.8–25.3)18.6 (10.3–26.9)17.3 (12.7–21.9)20.4 (16.0–24.7)18.9 (14.9–23.0)Patient is customer or is with customer (present at ADDO) (%, 95% CI)59.3 (49.1–69.5)59.1 (45.2–67.0)68.6 (58.7–78.5)0.22181.6 (76.9–86.3)74.8 (70.2–79.5)68.0 (63.1–72.8)0.001\nTable 3Characteristics of patients present at the ADDOPre-interventionPost-interventionSubsidized (n = 54)Unsubsidized (n = 46)Control (n = 59)\nχ\n2 p valueSubsidized (n = 217)Unsubsidized (n = 247)Control (n = 244)\nχ\n2 p valuePatient age (median, range)24.5 (1–66)22 (<1–62)27 (<1–83)23 (<1–79)22 (1–92)24 (1–74)Male (%, 95% CI)48.1 (34.6–61.7)47.8 (33.1–62.5)50.8 (37.8–63.8)0.94147.9 (41.2–54.6)53.0 (46.8–59.3)50.8 (44.5–57.1)0.546Days Ill (median, range)3 (<1, 14)3 (<1, 30)3 (<1, 14)2 (<1, 60)3 (<1, 30)2 (<1, 60)ADDO first place treatment sought (%, 95% CI)70.4 (58.0–82.8)65.2 (51.2–79.2)80.0 (69.2–90.1)0.24077.4 (71.8–83.0)74.8 (69.4–80.2)75.4 (70.0–80.8)0.792Malaria test prior to ADDO Visit (%, 95% CI)16.7 (6.6–26.8)21.7 (9.6–33.9)3.4 (0.0–8.1)0.01511.1 (6.8–15.4)13.4 (9.0–17.8)14.9 (10.2–19.7)0.502\n\n\nBODY.RESULTS.DIAGNOSTIC UPTAKE:\nPre-intervention, no patients present in the ADDOs reported receiving a parasite-based test in an ADDO in any district. Post-intervention, 65% of patients present at ADDOs in the intervention districts (95% CI 60.8–69.6%) and 3% in the control district received an RDT (Fig. 2a). There was no difference in the fraction of eligible patients who were tested between the subsidized (66%, 95% CI 59–72%) and unsubsidized districts (67%, 95% CI 58–71%), (χ\n2 < 0.1, p = 0.8).Fig. 
2\na Proportion of patients present at the ADDO who received a parasite-based diagnosis before and after intervention (p < 0.001 for changes in intervention districts; p = 0.159 for the control district). b Proportion of patients present in the ADDO reporting receiving parasite-based diagnostics at any location (p < 0.05 for pre- to post-intervention changes within each district)\n Overall diagnostic uptake (in ADDOs or elsewhere prior to the ADDO visit) improved significantly in the intervention and control groups during the study (Fig. 2b). The proportion of patients present at the ADDO who received a diagnostic test increased from 19% pre-intervention to 74% post-intervention in the intervention districts and from 3 to 18% in the control district (p < 0.01 for both groups). The relative changes in testing were not significantly different between the intervention and control districts (OR = 2.5, p = 0.271). Median prices for RDTs post-intervention reported by customers were 1100 Tanzanian Shillings in Kilosa, the unsubsidized district, and 500 Tanzanian Shillings in Kilombero, the subsidized district. There was no variance in median price from the recommended retail price in either district. No difference in RDT uptake was observed between the unsubsidized and subsidized districts (OR = 1.1, p = 0.868). Patient knowledge and practices were associated with RDT purchasing (Table 4). The odds of purchasing an RDT were significantly higher for patients who had previous experience taking an RDT or were already aware that RDTs were available before coming to the ADDO, while patients who had sought treatment elsewhere before visiting the ADDO were significantly less likely to purchase an RDT.Table 4Factors associated with purchasing an RDT in the ADDO when the patient was presentOutcome: receiving an RDT in an ADDO if present to be tested (n = 464)BivariableMultivariableOR95% CIp valueOR95% CIp valueIntervention No subsidyReference Subsidy1.1(0.6–1.9)0.868Customer gender (if not patient) FemaleReference Male1.2(0.5–2.7)0.654Patient gender FemaleReference Male1.0(0.7–1.4)0.920Customer age (if not patient)1.0(1.0–1.0)0.198Patient age <5 yearsReference 5− <14 years1.5(0.7–3.5)0.332 14+ years0.7(0.3–1.3)0.254Customer education PrimaryReference Secondary and above1.0(0.6–1.6)0.909Wealth index LowestReference Lower1.0(0.5–2.0)0.925 Middle0.9(0.4–1.8)0.723 Higher1.2(0.6–2.5)0.559 Highest1.0(0.5–2.2)0.943Has heard of RDTs3.1(2.0–4.8)<0.0011.7(0.9–3.1)0.088Has taken an RDT before3.1(2.1–4.7)<0.0011.9(1.2–3.0)0.005Sought treatment prior to ADDO visit0.3(0.2–0.5)<0.0010.3(0.2–0.5)<0.001Knew before coming that testing was available in ADDOs3.0(2.0–4.7)<0.0011.9(1.1–3.3)0.032Believes febrile patients should be blood tested before treating NeverReference Sometimes1.1(0.1–8.9)0.925 Always2.0(0.3–13.9)0.474\n\n\nBODY.RESULTS.TREATMENT DECISIONS:\nThe overall proportion of patients who received an ACT increased from 40 to 43% in the control district and decreased from 40 to 32% in the intervention districts, although these changes were not significantly different after controlling for other factors (OR = 0.6, p = 0.294). Forty-one percent of patients who received an RDT tested positive for malaria across the intervention districts (95% CI 35.7–46.8%). Among patients with a positive test, 90% (95% CI 81.9–94.9%) received an anti-malarial, and 75% (95% CI 65.5–83.5%) received ACT. Among customers testing negative, 7% (95% CI 3.6–12.3%) received an anti-malarial (Fig. 3a). 
There was no significant difference in the proportion of those who did not receive an RDT who purchased an anti-malarial (35% in intervention districts, 41% in control) (Fig. 3b).Fig. 3\na Medications purchased by patients tested in ADDOs (intervention districts combined). b Medications purchased by untested ADDO patients. Percentages are not mutually exclusive; patients may have received more than one treatment type\n The proportion of patients who received an antibiotic did not change significantly comparing intervention and control groups from pre- to post-intervention (OR = 0.9, p = 0.837). Post-intervention, the proportion of patients purchasing antibiotics also did not vary significantly comparing those who tested positive with those who tested negative (10% vs 9%, p = 0.73) (Fig. 3a).\n\nBODY.DISCUSSION:\nThe results from this study suggest that ADDOs offer an important opportunity for improving malaria case management in Tanzania towards the national goal of 80% parasite-based diagnosis of suspected malaria patients. Two-thirds of suspected malaria patients who sought treatment at ADDOs with RDT-certified dispensers purchased RDTs. Improved testing rates through ADDOs in this pilot project resulted in improved targeting of ACTs to patients with malaria, with 90% of malaria-positive and 93% of malaria-negative ADDO-tested patients making appropriate treatment decisions according to their test results. These results add to a growing body of evidence that malaria case management can be performed well outside of the formal public health sector. A pilot in Ghana that measured vendor-reported adherence to test results after the introduction of RDTs to 28 Licensed Chemical Sellers (private retail sector shops) found 89% of positives and 3% of negatives received ACT [24]. Vendors of 59 licensed drug shops in Uganda also reported high uptake (98%) and favorable adherence to test results (ALu was prescribed to 98% of positives and 1% of negatives) in a small-scale pilot study [25]. The higher uptake observed in these studies compared with this one may be attributable to self-reporting by drug shop vendors compared with customer interviews, a lower $0.20 price point (Uganda) compared to $0.31–0.67 in Tanzania, and/or variance in length of training and frequency of supervision. Not all studies have found such positive outcomes, however: a different study in Uganda with 92 shops selling RDTs that measured adherence through household surveys found only 32% of positives purchased an ACT medicine (7% of negatives) at a median $0.40 per test [16]. This pilot was successful in improving case management practices for several reasons. First, ADDOs are a highly utilized source for anti-malarials in Tanzania: ADDOs were the first place treatment was sought for 75% of interviewed patients, emphasizing the need for access to parasite-based diagnosis at this point of care. Second, nearly all ADDOs in the study area participated in the trainings and RDT program. ADDOs were kept engaged in the study through quarterly supervision and monitoring visits, which also allowed for frequent quality assessments on safety protocol. Third, proximity of the region to Dar es Salaam and relatively good road infrastructure allowed for existing supply chains to be utilized, creating a more realistic market scenario. Finally, negotiations with RDT suppliers and agreements with wholesalers and ADDOs on mark ups, ensured affordable recommended retail prices and enabled comparison of price points. 
These results provided data to help guide several open questions surrounding an introduction of RDTs to the private retail market. A similar proportion of patients were willing to pay the higher 'unsubsidized' price of 1100 Tanzanian Shillings or USD $0.67 for an RDT compared to the lower subsidized price of 500 Tanzanian Shillings or USD $0.32, indicating that a subsidy may not be necessary to encourage testing uptake. Additionally, similar to the Ghana pilot [24], an increase in the purchase of antibiotics among those receiving a negative test result was not observed in this study. The study was also subject to several limitations. A significant pre-intervention difference in parasite-based diagnostic rates between intervention (19%) and control (3%) districts suggests inherent differences in health facility testing uptake that may confound the effect of the intervention. A second limitation was the unforeseen mobility of ADDO dispensers, who periodically changed jobs to work in different ADDOs during the study period. While dispensers were certified in this study, it was unclear if the certification extended to the ADDO when the dispenser was not present. Dispenser movement complicated follow-up and suggests that future programs consider certifying both dispensers and ADDOs, and requiring ADDOs that sell RDTs to have a certified dispenser present. Additional follow-up challenges arose when ADDOs closed temporarily or permanently, or were not reachable for stock delivery or enumeration due to flooded roads and bridges. Also, although dispensers were trained to refer severe and negative cases to the nearest health facility, this study was not resourced to follow the outcomes of these patients. In practice, it is unlikely that all negative cases will complete the referral pathway, underscoring the importance of including diagnosis and treatment of non-malaria febrile illness and recognition of danger signs in ADDO dispenser training. Lastly, the post-intervention survey sampled from ADDOs that participated in the training in the intervention areas, and while these were the majority of the ADDOs in these districts, they are not representative of all ADDOs in the study area. As Tanzania uses the results of this study to inform policy decisions, it will be important to consider how results might vary in different endemic settings and in more urban areas. Expansion of this program will also require evaluation of how best to ensure long-term routine supervision and training, as well as waste management for sharps and cassettes. Finally, a critical challenge to scale-up will be integration of the private retail sector into the routine surveillance system for capturing malaria case data. This information will be imperative for strategic decision-making as malaria prevalence declines across the country.\n\nBODY.CONCLUSIONS:\nThis pilot study contributes to a growing evidence base that introducing RDTs to the private retail sector in low-resource settings can increase parasite-based diagnostic rates for malaria and adherence to test results when dispensers are trained and supervised. The pilot study also showed that RDTs can be introduced under real market conditions, utilizing existing supply chains, without subsidizing the cost of the RDT. 
While the intervention piloted here was limited to a rural setting in Tanzania's specialized ADDO network, it provides a basis for policy decisions on scaling up RDT access across Tanzania to increase parasite-based diagnosis and rational use of ACT through heavily used existing channels. The results presented here support the goals for Tanzania's National Strategic Plan and may have applications in similar settings.\n\n**Question:** Compared to Before trainings covered signs and symptoms of uncomplicated and severe malaria, stocking, use, and disposal of RDTs, and appropriate case management based on RDT results. what was the result of After trainings covered signs and symptoms of uncomplicated and severe malaria, stocking, use, and disposal of RDTs, and appropriate case management based on RDT results. on Pre-treatment testing performed within the ADDO?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
873
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Potential benefit of bosentan therapy in borderline or less severe pulmonary hypertension secondary to idiopathic pulmonary fibrosis—an interim analysis of results from a prospective, single-center, randomized, parallel-group study\n\n ABSTRACT.BACKGROUND:\nNo drugs have been approved for the treatment of patients with pulmonary hypertension (PH) secondary to idiopathic pulmonary fibrosis (IPF), particularly those with idiopathic honeycomb lung. This study was conducted to investigate the long-term efficacy and safety of bosentan for PH based on changes in prognosis and respiratory failure.\n\nABSTRACT.METHODS:\nIPF patients with borderline or less severe PH and completely organized honeycomb lung were randomized (1:1) to bosentan or no treatment for PH for 2 years and assessed at baseline and every 6 months for respiratory failure, activities of daily living (ADL), lung and heart functions by right cardiac catheterization, and other parameters. An interim analysis was performed, however, following detection of a significant survival benefit favoring bosentan therapy.\n\nABSTRACT.RESULTS:\nSignificant differences were noted for the bosentan-treated (n = 12) vs. untreated (n = 12) groups in hospital-free survival (603.44 ± 50.074 days vs. 358.87 ± 68.65 days; hazard ratio [HR], 0.19; P = 0.017) and overall survival (671 days vs. 433.78 ± 66.98 days; HR, 0.10; P = 0.0082). Again, significant improvements were noted for the bosentan-treated group from baseline to month 6 or 12 in several indices in ADL, pulmonary circulation, and %DLCO. Without requiring O2 inhalation, bosentan was associated with no increase but a trend toward a decrease in adverse events and an improvement in respiratory status.\n\nABSTRACT.CONCLUSIONS:\nBosentan tended to improve prognosis and ADL without worsening respiratory failure in IPF patients with borderline or less severe PH and completely organized honeycomb lung alone.\n\nABSTRACT.TRIAL REGISTRATION:\nThis study was registered on December 18, 2010 with UMIN-CTR Clinical Trial as UMIN000004749 to investigate the long-term influence of bosentan on cardiac function, as well as its cardioprotective efficacy and safety, in patients with pulmonary hypertension secondary to concurrent COPD and IPF, respectively.\n\nABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL:\nThe online version of this article (doi: 10.1186/s12890-017-0523-2) contains supplementary material, which is available to authorized users.\n\nBODY.BACKGROUND:\nIdiopathic pulmonary fibrosis (IPF) is a disorder associated with poor prognosis. Pulmonary arterial hypertension (PAH), which is likely to lead to right heart overload, is also associated with poor prognosis. Patients with IPF are at risk of developing pulmonary hypertension (PH) as the underlying condition worsens or becomes severer, thus further compromising their prognosis [1–8]. 
However, it remains unclear how long it may typically take for patients with IPF to start developing PH or for its associated influence on cardiac function to become manifest [1]. Given that, once elevated, pulmonary arterial pressure (PAP) becomes irreversible because of established vascular remodeling [1, 9], it appears that efficacious treatment should be started before its onset. Furthermore, it is assumed that cardiac overload starts even before the onset of PH. However, to date, no drugs have been approved for the treatment of PH secondary to respiratory diseases, such as IPF [1]. Drugs specific for PAH, such as bosentan (Tracleer Tablets®), have been reported in some studies to effectively improve PH in patients with respiratory diseases, such as COPD and IPF [1, 10–13]. While the results of randomized controlled studies conducted to date, such as BUILD-1 and BUILD-3 studies [11], appear to argue against the use of bosentan in patients with pulmonary hypertension (PH), these studies involved a wide range of patients from those with fibrotic idiopathic interstitial pneumonia (f-IIP) and fibrotic nonspecific interstitial pneumonia (f-NSIP) to those with highly elevated pulmonary arterial pressure (PAP) and decreased cardiac index (CI), where the presence of multiple risk factors for IPF in these patients may have additively or synergistically contributed to the therapeutic outcomes reported in these studies. Again, the benefit of early intervention with bosentan may not have been sufficiently explored in those with mildly elevated PAP in these trials, while bosentan was indeed shown to be efficacious against IP in a subset of patients in the BUILD-1 study, despite the observation that many patients with IPF are associated with rapidly elevated PAP as well as progression of IPF and that elevated PAP is associated with poor prognosis [7]. Against this background, the present bosentan study focused on IPF patients with completely organized honeycomb lung without any pulmonary (including IP) lesions who chiefly complained of symptoms suggestive of progressive respiratory failure, i.e., progressive dyspnea with minimal IPF activity. Thus, the IPF patients with completely organized honeycomb lung alone were enrolled in this study to ensure that the subjects in this study had pathologically stable IPF and required regular hospital visits for treatment. 
An interim analysis was performed, however, in patients with borderline or less severe PH (25 mmHg ≤ mean pulmonary arterial pressure [mPAP] at rest <35 mmHg and/or mPAP on effort [mPAPOE] ≥ 30 mmHg), following detection of a greater-than-expected significant survival benefit in patients with borderline or less severe PH treated with bosentan at an early phase of the trial when the number of patients enrolled was still small.\n\nBODY.METHODS.STUDY DESIGN AND METHODS:\nThis was a prospective, single-center, interventional, parallel, randomized, open-label study.\n\nBODY.METHODS.TARGET PATIENT POPULATION:\nThis study was conducted in patients with IPF (WHO functional class II, III or IV) who showed no signs of hypoxia during 6-min walk test (6MWT) and therefore had enough functional capacity for ADL and who gave written informed consent to participate in the study.\n\nBODY.METHODS.ELIGIBILITY CRITERIA:\nTo be included in this study, patients had to fulfill all of the following inclusion criteria but none of the following exclusion criteria:\n\nBODY.METHODS.INCLUSION CRITERIA:\n\nPatients aged 20 years old or older (both sexes)Patients diagnosed at this hospital as having IPF (WHO functional class II, III or IV) without hypoxia at rest or during 6MWT (to exclude those with decreased ADL and dyspnea in daily living associated with hypoxia and to minimize the influence of hypoxic pulmonary vasoconstriction [HPV] as a potential cause of PH associated with decreased partial pressure of oxygen in arterial blood [PaO2]) (PaO2 < 60 mmHg)*.*Including those whose hypoxia (PaO2 < 60 mmHg) had been corrected with long-term oxygen therapy (LTOT)Patients with stable IPF who had not required any change of treatment within 3 months prior to study entry, i.e., those confirmed to have completely organized honeycomb lung and no active inflammatory lesion, such as grand glass opacity (GGO) (chronic IIP based on high-resolution computed tomography (CT) findings for which no effective therapy exists); and who presented to our hospital for the first time with symptoms of progressive respiratory failure and had not received any medical treatment for IPF within 3 months prior to their visit.*Excluding those whose progressive respiratory failure required no treatment for IPF itself and those who had an increased LTOT dose as a minimum requirement for progressive respiratory failure.Inpatients and outpatientsPatients who provided written informed consent to participate in this study\n\n\nBODY.METHODS.EXCLUSION CRITERIA:\n\nPatients who had received bosentan or any other drug specific for PAH (e.g., phosphodiestetrase type 5 [PDE-5] inhibitors, endothelin receptor antagonists, or prostaglandin analogs) prior to their enrollmentPatients with any disease that could cause right heart overloadPatients with hypoxia during 6MWT (PaO2 < 60 mmHg)*.Excluded were those whose hypoxia (PaO2 < 60 mmHg) had been corrected with LTOT (i.e., those in whom LTOT is in place to ensure PaO2 > 60 mmHg both at rest and during 6MWT, who were deemed equivalent to IPF patients receiving routine therapy in clinical practice to allow them to be monitored for changes in their condition, prognosis and functional capacity for ADL).\nWomen who were pregnant or might have been pregnant, and who were lactatingPatients with moderate or severe liver disorderPatients receiving treatment with cyclosporine, tacrolimus, or glibenclamideOther patients judged by the investigator to be ineligible for this study (e.g., those with any disease or condition other 
than IPF that might affect their ADL, such as arrhythmia, LV failure, pulmonary thromboembolism, connective tissue diseases, intervertebral disc herniation, as they were confirmed by history taking, physical examination, chest x-ray, echocardiography [ECG], lung perfusion scintigraphy, and measurements of various parameters conducted during the run-in period).\n\n\nBODY.METHODS.GROUPING OF PATIENTS:\nIn order to include those with minimal IPF activity alone, of all patients first diagnosed with IPF at our hospital based on the presence of chronic f-IIP as confirmed by high-resolution CT findings, those whose chief complaints suggested progressive respiratory failure and who were suffering from progressive dyspnea were evaluated for PAP by right heart catheterization (RHC) and ECG, as well as for right heart function by ECG. According to the current diagnostic criteria, if mPAP at rest is <25 mmHg, the patient is not diagnosed as having PH even if the mPAPOE is ≥30 mmHg. In our study, however, this state was defined as borderline PH representing a very mild form of PH; and besides, 25 mmHg ≤ mPAP at rest <35 mmHg was defined as less severe PH. Since the aim of this study was to evaluate the efficacy and safety of early therapeutic intervention with bosentan in PH, borderline PH or PH was diagnosed if mPAP at rest was ≥25 mmHg and/or mPAPOE was ≥30 mmHg and mPAWP was ≤15 mmHg. Moreover, borderline PH or less severe PH was defined as mPAP <25 mmHg and mPAPOE ≥30 mmHg or 25 mmHg ≤ mPAP <35 mmHg; severe PH was defined as mPAP ≥35 mmHg; and non-borderline PH or PH was defined as non-PH (mPAP <25 mmHg and mPAPOE <30 mmHg).\n\nBODY.METHODS.GROUPING OF PATIENTS.DRUG-TREATED AND UNTREATED PATIENTS:\nAll patients who met the eligibility criteria and gave informed consent to participate in this study were evaluated for PAP and right heart function and those with mPAP at rest ≥25 mmHg and/or mPAPOE ≥30 mmHg were stratified by mPAP: mPAP at rest <35 mmHg vs. ≥ 35 mmHg. Patients in each subgroup were randomly allocated to either bosentan (drug-treated group) or no treatment (untreated group) by the envelope method. Patients diagnosed with non-PH (mPAP at rest <25 mmHg with mPAPOE <30 mmHg) were assigned to the untreated group. The drug-treated group comprised those who were diagnosed at this hospital as having IPF without hypoxia (PaO2 > 60 mmHg) and who gave informed consent to participate in this study after PAP and right heart function assessments. The untreated group comprised those diagnosed at this hospital as having IPF without hypoxia (PaO2 > 60 mmHg) and who gave their informed consent to participate in this study after PAP and right heart function assessments. This group included those with severe PH (mPAP at rest ≥35 mmHg), borderline or less severe PH (25 mmHg ≤ mPAP at rest <35 mmHg and/or mPAPOE ≥30 mmHg), and non-PH (without borderline PH or PH). (Fig. 1; see also Additional file 1).Fig. 1Patient flowchart\n An interim analysis was performed, however, in patients with borderline or less severe PH (25 mmHg ≤ mPAP at rest <35 mmHg and/or mPAPOE ≥30 mmHg), following detection of a greater-than-expected significant survival benefit in patients with borderline or less severe PH treated with bosentan, at an early phase of the trial when the number of patients enrolled was still small. 
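A minimal code restatement of the PH grouping thresholds defined above may help keep the categories straight. The function name and return labels are mine, not the study's, and the mPAWP ≤ 15 mmHg requirement for a PH diagnosis is omitted for brevity.

```python
def classify_ph(mpap_rest: float, mpap_on_effort: float) -> str:
    """Restate the study's grouping thresholds (all pressures in mmHg).

    Labels are illustrative; the mPAWP <= 15 mmHg criterion used for the
    PH diagnosis itself is not checked here.
    """
    if mpap_rest >= 35:
        return "severe PH"
    if 25 <= mpap_rest < 35 or (mpap_rest < 25 and mpap_on_effort >= 30):
        return "borderline or less severe PH"
    return "non-PH (non-borderline PH or PH)"

# Example: mPAP of 21 mmHg at rest but 42 mmHg on effort falls into the
# "borderline or less severe PH" stratum under these definitions.
print(classify_ph(21, 42))
```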
The study required that IPF patients be randomized to drug-treated and untreated groups to investigate their clinical course in real-world settings, with no change of treatment allowed including bosentan for 2 years or until their death as a rule, except for minimal symptomatic therapy (including oxygen volume adjustments required to ensure similar oxygen conditions among the patients), which met none of the exclusion criteria.\n\nBODY.METHODS.TARGET SAMPLE SIZE:\nSee Additional file 2.\n\nBODY.METHODS.OUTCOME MEASURES:\nECG examinations were carried out during the run-in period* and every 6 months thereafter**. Complete two-dimensional, pulsed-wave, color-flow echocardiography was performed using the Toshiba ultrasound system Xario (TOSHIBA MEDICAL SYSTEMS CORPORATION, Tochigi, Japan) as previously described [14–25]. (See also Additional file 3). Doppler measurements were carried out during the run-in period* and every 6 months**. (See also Additional file 3 and Additional file 4: Fig. S1). RHC was carried out during the run-in period* and every 6 months thereafter**. Hemodynamic parameters (systolic PAP [SPAP]; diastolic PAP [DPAP]; mean PAP [mPAP]; systolic PAWP [SPAWP]; diastolic pulmonary capillary wedge pressure [DPAWP]; mean PAWP [mPAWP]; systolic right ventricular pressure [SRVP]; diastolic RVP [DRVP]; mean RVP [mRVP]; systolic right atrial pressure [SRAP]; diastolic RAP [DRAP]; mean RAP [mRAP]; and cardiac output [CO]) and pulmonary vascular resistance (PVR) were measured, with the patient in the supine position, via the internal jugular vein and using a Swan-Ganz continuous cardiac output (CCO) thermodilution flow-directed pulmonary artery catheter (Edwards Lifesciences LLC, USA). Cardiac output was measured by the thermodilution method using a Vigilance hemodynamic monitor (Edwards Lifesciences LLC, USA). Systolic PAP on effort (SPAPOE), diastolic PAP on effort (DPAPOE) and mean PAP on effort (mPAPOE) were measured while patients were clasping and opening both hands repeatedly by putting a full strain on the body. Furthermore, mixed venous blood gas analysis was performed.\n\nBODY.METHODS.SURVIVAL ANALYSIS:\nHospital-free survival and overall survival were determined by the duration of survival from week 0 (start of assessment), i.e., as the date treatment started for the drug-treated group and 2 weeks after RHC for the untreated group. Even for those unable to undergo the periodic assessments due to change of their attending physician, etc., this survival analysis was continued by contacting the patient's current physician to have his/her survival status confirmed. Patients were censored from hospital-free survival if they could no longer continue ambulatory treatment and were admitted to another hospital or if they could no longer present to our hospital for progression of respiratory failure.\n\nBODY.METHODS.ADVERSE EVENTS:\nAll patients were assessed for adverse events during the run-in period* and every 4 weeks thereafter**, as well as based on their medical records on unscheduled visits to our outpatient clinic. Even for those unable to undergo the periodic assessments due to change of their attending physician, adverse events were assessed by contacting their current physician to have these events confirmed.\n\nBODY.METHODS.OTHER PARAMETERS:\nPulmonary function test (PFT) was carried out during the run-in period* and every 6 months thereafter**. (See also Additional file 3). 
ADL assessments including exercise tolerance test [26–28] were performed during the run-in period* and every 6 months thereafter**. (See also Additional file 3 and Additional file 5: Figure S2). Those in whom LTOT was in place to ensure adequate oxygen inhalation during 6MWT (deemed equivalent to IPF patients receiving routine therapy in clinical practice to allow them to be monitored for changes in their condition, prognosis and functional capacity for ADL) were assessed for treadmill exercise test (TMET) with LTOT in place. Arterial blood gas (ABG), arterial plasma lactate, brain natriuretic peptide (BNP) and N-terminal (NT)-proBNP were determined during the run-in period and every 6 months thereafter. [29] (See also Additional file 3). Hematology, biochemistry and urinalysis were performed during the run-in period* and every 4 weeks thereafter**. *Run-in period: Within 2 weeks after written informed consent was obtained from each patient. **Every 6 months: Consecutive 6 months with a ± 1-week window counting from week 0 (start of assessment) defined as the date drug treatment started for the drug-treated group and 2 weeks after RHC for the untreated group. However, the periodic assessments not conducted in patients as planned based on the attending physician's judgement were deemed acceptable, unless they met any of the criteria for discontinuation of the study (e.g., pneumonia, etc.) (Fig. 1 and Additional file 6: Figure S3).\n\nBODY.METHODS.STUDY DRUG:\nBosentan was administered, as a rule, according to the approved dosage and administration. Bosentan is to be usually initiated in adults at a dose of 62.5 mg twice daily after breakfast and dinner for 4 weeks and increased to a dose of 125 mg twice daily after breakfast and dinner from week 5 of treatment onwards with the dosage adjusted according to the patient's symptoms and tolerability, but not exceeding 250 mg per day. In this study conducted in routine clinical settings, however, it was acceptable to continue treatment at the initial dosage if deemed by the investigator to be appropriate based on the patient's condition (see also Additional file 7).\n\nBODY.METHODS.CONCOMITANT DRUGS AND THERAPIES:\nDrugs allowed for use in the study included drugs intended for the treatment of the underlying disease (IPF) and drugs, other than drugs specific for PAH, for the treatment of PH as required for aggravation of PH. Drugs prohibited for use included cyclosporine, tacrolimus, glibenclamide and other drugs specific for PAH (e.g., PDE-5 inhibitors, endothelin receptor antagonists and prostaglandins) as well as any other investigational drug.\n\nBODY.METHODS.STUDY PERIOD:\nThe study was conducted for 24 months between September 2010 and September 2022 with patient enrollment lasting until January 2020 (see Additional file 8).\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nData are expressed as the mean ± standard deviation (SD). Changes from baseline in individual outcome measures were compared between drug-treated and untreated patients, and analyzed for statistical significance. Analysis on paired data was performed using Mann-Whitney U test. Changes in trend over time were analyzed using the Residual Maximum Likelihood (REML) or least squares method. All statistical analyses were performed using JMP version 11.2.1 (SAS Institute Inc., Cary, NC). 
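For readers who want to reproduce this style of survival comparison, a rough sketch using the Python lifelines package is shown below: Kaplan-Meier curves per group and a Cox proportional-hazards model for the treatment effect. The data frame, durations, and column names are hypothetical, and the study itself performed its analyses in JMP, not with this code.

```python
# Hypothetical sketch of a Kaplan-Meier and proportional-hazards comparison.
# Follow-up times, events, and group labels below are invented.
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

df = pd.DataFrame({
    "days":     [120, 300, 450, 671,  90, 200, 610, 520, 331, 410],
    "event":    [  1,   1,   0,   1,   1,   1,   0,   0,   1,   0],  # 1 = event occurred
    "bosentan": [  0,   0,   0,   1,   0,   0,   1,   1,   0,   1],  # 1 = drug-treated group
})

# Kaplan-Meier curve for each group
km = KaplanMeierFitter()
for label, grp in df.groupby("bosentan"):
    km.fit(grp["days"], event_observed=grp["event"], label=f"bosentan={label}")
    print(label, km.median_survival_time_)

# Cox proportional-hazards model: hazard ratio for drug-treated vs. untreated
cph = CoxPHFitter()
cph.fit(df, duration_col="days", event_col="event")
cph.print_summary()
```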
A two-sided P value of <0.05 was considered to indicate a statistically significant change.\n\nBODY.RESULTS.PATIENTS:\nThis report presents the results of an interim analysis of the IPF patients in this study. A total of 32 IPF patients were enrolled in this study between February 2011 and June 2016, who comprised all the outpatients who had met the study entry criteria. At the time of their initial presentation to our hospital, all patients were confirmed to have chronic fibrotic idiopathic interstitial pneumonia (IIP) based on high-resolution CT findings of completely organized honeycomb lung with basal predominance in bilateral subpleural regions for which no effective therapy exists. Patients chiefly complained of symptoms of progressive respiratory failure. Although all patients confirmed to have no progressive pulmonary fibrosis on CT were given detailed explanations of the potential adverse effects associated with the use of antifibrotics, such as pirfenidone or nintedanib (which had only recently been launched in Japan and are indicated for very few patients), as well as the costs due under current health insurance, none were confirmed to have received any treatment specific for IPF (e.g., pirfenidone or nintedanib) within 3 months prior to their enrollment or wished to receive any antifibrotic drug after the first 3 months or later, and none dropped out because of any treatment given, other than symptomatic treatment, for PH as the underlying disease, such as calcium channel blockers. Of these 32 patients, the following 3 patients were excluded from the study: 1 who was found to have cancer during the study, which had probably existed at the time of enrollment (untreated, borderline or less severe PH group), 1 who developed symptoms of disc herniation during the run-in period (non-PH group), and 1 who died from aspiration pneumonia during the run-in period before the start of bosentan therapy (drug-treated, borderline or less severe PH group). The remaining 29 patients who had completed the study or were still on the study treatment were included in the present analyses. Of these 29 patients, 3 (including 1 female) had no borderline PH or PH (non-borderline PH/PH) and the remaining 26 patients with borderline or less severe PH, or severe PH, were randomized to receive or not to receive bosentan therapy. Of these, 13 were in the drug-treated group and the other 13 were in the untreated group, including 1 in each group confirmed to have mPAP at rest ≥35 mmHg (severe PH), and 12 in the drug-treated group (age range, 56–76 years old) and 12 in the untreated group (age range, 51–80 years old) confirmed to have mPAP at rest <35 mmHg (borderline or less severe PH). Patient demographics and characteristics were similar between the untreated, borderline or less severe PH group and the drug-treated, borderline or less severe PH group (Table 1).Table 1Clinical characteristics of subjects with borderline or less severe PH (mPAP <35 mmHg)Untreated borderline or less severe PHDrug-treated borderline or less severe PH\nP\n*\nNo. (male/female)12(8/4)12(9/3)0.66Age (y.o.)70.50 ± 7.9766.92 ± 6.450.11Height (cm)160.04 ± 10.11160.87 ± 10.070.84Weight (kg)62.067 ± 12.1754.95 ± 12.720.25No. 
of patients with LTOT870.67ADL including exercise tolerance testWHO functional class2.67 ± 0.782.83 ± 0.830.78mMRC score2.42 ± 1.0842.33 ± 1.440.98SGRQ score Symptoms56.10 ± 22.8745.78 ± 28.960.52 Activity61.60 ± 22.3555.18 ± 33.210.98 Impact37.53 ± 23.1134.38 ± 22.180.91 Total49.42 ± 21.2743.93 ± 26.410.77SF36 Physical functioning (PF)45.83 ± 21.4160.42 ± 26.410.11 Role physical (RP)38.58 ± 21.9651.058 ± 39.160.56 Bodily pain (BP)72.17 ± 26.3080.00 ± 25.230.45 General health (GH)40.67 ± 18.3446.75 ± 20.0670.49 Vitality (VT)49.34 ± 20.5558.36 ± 29.230.40 Social functioning (SF)56.25 ± 26.3868.75 ± 33.500.35 Role emotional (RE)62.51 ± 30.04867.36 ± 37.690.52 Mental health (MH)63.75 ± 19.6765. 00 ± 27.880.62Right heart cardiography mPAP (mmHg)20.83 ± 5.7521.17 ± 7.730.93 mPAPOE (mmHg)42.67 ± 12.7842.58 ± 8.870.45 mPAWP (mmHg)6.83 ± 3.796.28 ± 3.570.76 mRVP (mmHg)14.42 ± 3.5814.083 ± 6.570.31 mRAP (mmHg)2.50 ± 1.683.00 ± 2.130.45 CO (L/min)4.80 ± 1.125.10 ± 1.270.82 CI (L/min/m2)2.90 ± 0.563.21 ± 0.630.38 PVR (wood)3.12 ± 1.653.022 ± 2.00310.95 PVRI5.073 ± 2.674.58 ± 2.611.00Mixed venous PHv7.39 ± 0.0297.40 ± 0.0260.45 PvCO2 (mmHg)49.34 ± 6.03648.15 ± 4.220.82 PvO2 (mmHg)36.69 ± 3.8937.55 ± 3.930.60 SVO2 (%)68.77 ± 5.4770.53 ± 5.710.66PFT %VC (%)68.34 ± 16.9269.55 ± 22.620.98 FVC (L)2.0033 ± 0.572.087 ± 0.800.86 %DLCO (%)30.72 ± 16.001927.37 ± 23.760.25TTE ET (msec)299.71 ± 55.95263.83 ± 36.160.18 PAAcT (msec)98.67 ± 32.6594.75 ± 11.650.58 AcT/ET0.33 ± 0.0870.37 ± 0.0600.23 PEP (msec)92.71 ± 12.6587.42 ± 18.620.12 ICT (msec)17.083 ± 19.9621.75 ± 21.730.70 IRT (msec)55.71 ± 45.002352.42 ± 36.490.95 ICT + IRT (msec)87.79 ± 64.5472.82 ± 46.340.69 TEI index0.32 ± 0.270.30 ± 0.250.98 TAPSE(cm)2.32 ± 0.472.27 ± 0.550.75 Diastolic RA area (cm2)8.20 ± 3.2110.64 ± 4.910.29 Diastolic RA major axis (cm)4.35 ± 2.00374.20 ± 2.0180.66 Systolic RA area (cm2)4.74 ± 2.105.46 ± 2.600.60 Systolic RA major axis (cm)2.95 ± 0.992.65 ± 0.550.25 Diastolic RV area (cm2)16.090 ± 6.5715.39 ± 7.960.33 Diastolic RV major axis (cm)6.38 ± 1.286.22 ± 1.0980.49 Systolic RV area (cm2)9.27 ± 3.479.38 ± 4.380.64 Systolic RV major axis (cm)4.98 ± 1.424.95 ± 0.890.60 RVEF (%)58.91 ± 12.4551.98 ± 13.290.13Aortic Blood data at rest pH7.41 ± 0.0277.42 ± 0.0220.25 PO2 (mmHg)76.84 ± 10.09182.46 ± 7.930.11 Aortic oxygen saturation (%)95.02 ± 1.5595.85 ± 1.200.12 BNP (pg/ml)29.42 ± 20.2620.76 ± 13.100.34 NT-proBNP (pg/ml)93.33 ± 60.1569.67 ± 48.210.45 LA (mg/dl)11.82 ± 4.08210.00 ± 3.530.33TMET METS3.55 ± 1.893.96 ± 2.540.81Post-TMET Aortic Blood data Post-TMET pH7.34 ± 0.0617.36 ± 0.0690.45 Post-TMET PCO2 (mmHg)46.94 ± 12.2243.017 ± 6.750.66 Post-TMET PO2 (mmHg)54.075 ± 15.9367.23 ± 14.710.18 Post-TMET oxygen-Sat (%)80.35 ± 18.07790.85 ± 4.420.14 Post-TMET BNP (pg/ml)40.20 ± 34.8835.62 ± 46.660.27 Post-TMETNT-proBNP (pg/ml)102.83 ± 67.48108.67 ± 124.950.64 LA post TMET – LA at rest (mg/dl)24.68 ± 20.01222.82 ± 18.880.98 6MWD246.18 ± 104.27296.63 ± 128.00900.31Post-6 MW Aortic Blood data Post-6MWT pH7.39 ± 0.0217.40 ± 0.0390.15 Post-6 MW-PCO2 (mmHg)41.042 ± 8.3242.64 ± 5.420.53 Post-6 MW-PO2 (mmHg)77.20 ± 30.9872.067 ± 15.790.91 Post-TMET Oxygen-Sat (%)92.58 ± 4.1390.00 ± 8.350.69 Post-6 MW-BNP (pg/ml)34.52 ± 25.6625.080 ± 23.950.33 Post-6 MW-NT-proBNP (pg/ml)98.50 ± 75.1380.67 ± 72.410.47 LA post-6 MW – LA at rest (mg/dl)8.60 ± 8.315.42 ± 8.340.14Data presented as mean ± SD\n*\nP value for Mann-Whitney U test to assess the difference between the untreated and drug-treated patients with borderline or less severe 
PH\n\n\nBODY.RESULTS.ADVERSE EVENTS (TABLE .EXACERBATION OF SUBJECTIVE SYMPTOMS OF DYSPNEA (TABLE :\nOf the 12 untreated patients with borderline or less severe PH, 7 were confirmed to have experienced exacerbation of subjective symptoms of dyspnea based on the data obtained at the cut-off date, with the time to exacerbation of dyspnea being 152.00 ± 89.94 days (mean ± SD). Of 12 the drug-treated patients with borderline or less severe PH, 3 were confirmed to have experienced exacerbation of dyspnea based on the data obtained at the cut-off date, with the time to exacerbation being 259.00 ± 49.87 days (mean ± SD). Proportional hazard analysis showed that the risk ratio of the drug-treated group to the untreated group was 0.32, but with no significant difference.Table 2Adverse events observed in untreated and drug-treated patients with borderline or less severe PHUntreated borderline or less severe PHDrug-treated borderline or less severe PHExacerbation of dyspnea73Time to exacerbation of dyspnea (mean ± SD) (days)152.00 ± 89.94259.00 ± 49.37Increase of the O2 dose52Time to O2 dose increase (mean ± SE) (days)199.00 ± 132.90,335.00 ± 182.43Decrease of the O2 dose01Hospitalization (hospital-free survival)8 (241.50 ± 192.24)2 (239.002 ± 169.00)Death (survival)7 (309.29 ± 195.13)1 (671)Other adverse events3a\n6b\n\na Gastrointestinal hemorrhage (n = 1), pneumonia (n = 1), and ileus (n = 1)\nb Pneumothorax (n = 3), CHF (n = 2), and liver dysfunction (n = 1)\nFig. 2Analysis of survival by adverse event. a Analysis of the time to exacerbation of subjective dyspnea. Among the untreated patients with borderline or less severe PH, the time to exacerbation of dyspnea was 152.00 ± 89.94 days (mean ± SD) in 7 of 12 patients confirmed to have experienced exacerbation of subjective symptoms of dyspnea by the data obtained on the cut-off date. Among the drug-treated patients with borderline or less severe PH, the time to exacerbation of dyspnea was 259.00 ± 49.87 days (mean ± SD) in 3 of 12 patients confirmed to have experienced exacerbation of dyspnea by the data obtained on the cut-off date. Proportional hazard analysis showed that the risk ratio of the drug-treated to untreated groups was 0.58, but with no significant difference noted. The time to exacerbation of dyspnea at the time of analysis was 218.17 ± 35.62 days (mean ± SE) in the untreated group and 290.71 ± 12.036 days in the drug-treated group, but with no significant difference noted. b Analysis of the time to an increase in the dose of O2 (event). Increase of the O2 dose: In the untreated patients with borderline or less severe PH, the time to the dose increase was 199.00 ± 132.90 days (mean ± SD) in 5 of 12 patients confirmed to have required an increase of the dose of O2 by the data obtained on the cutoff date. In the drug-treated patients with borderline or less severe PH, the time to the dose increase was 335.00 ± 182.43 days (mean ± SD) in 3 of 12 patients confirmed to have required an increase of the O2 dose based on the data obtained on the cut-off date. The risk ratio analysis showed that the hazard ratio of the drug-treated to untreated groups was 0.58. The time to O2 dose increase at the time of analysis was 357.71 ± 50.83 days in the untreated group and 438.20 ± 34.61 days in the drug-treated group with no significant difference between the groups, despite the results favoring the drug-treated group. 
In addition, only 1 drug-treated patient with borderline or less severe PH achieved a decrease of the O2 dose on day 243 due to an improvement of respiratory function. c Hospital-free survival. Of the 12 untreated patients with borderline or less severe PH, 8 were confirmed to have been hospitalized (event) by the data obtained on the cut-off date with the time to hospitalization being 241.50 ± 192.24 days (mean ± SD). Of the 12 drug-treated patients with borderline or less severe PH, 2 was confirmed to have been hospitalized by the data obtained on the cut-off date with the time to hospitalization being 239.002 ± 169.00 days. At the time of survival time analysis, hospital-free survival in the untreated group was 358.87 ± 68.65 days (mean ± SE) (median, 331 days), which was shown to be significantly different from that in the drug-treated group (603.44 ± 50.074) by proportional hazard analysis (hazard ratio [HR] of the drug-treated to untreated groups, 0.10; P = 0.017). d Overall survival. Of the 12 untreated patients with borderline or less severe PH, 7 were confirmed dead (event) by the data obtained on the cut-off date with the time to event being 309.29 ± 195.13 days (mean ± SD); of the drug-treated patients with borderline or less severe PH, 1 was confirmed dead by the data on the cut-off date with the time to event being 671 days. At the time of survival analysis, the time to event in the untreated group was 433.78 ± 66.98 days (mean ± SE), which was significantly different from that in the drug-treated group by proportional hazard analysis (HR of the drug-treated to untreated groups, 0.10; P = 0.0082)\n\n\nBODY.RESULTS.ADVERSE EVENTS (TABLE .INCREASE OF O:\nOf the 12 untreated patients with borderline or less severe PH, 5 were confirmed to have required an increase of the O2 dose based on the data obtained on the cut-off date. Of the 12 drug-treated patients with borderline or less severe PH, 3 were confirmed to have required an increase of the O2 dose based on the data obtained on the cutoff date. The risk ratio analysis showed that the hazard ratio of the drug-treated group to the untreated group was 0.58, with the time to O2 dose increase at the time of analysis being 357.71 ± 50.83 days in the untreated group versus 438.20 ± 34.61 days in the drug-treated group, which was not significantly different despite the fact that the results favored the drug-treated group. Only 1 patient with borderline or less severe PH in the drug-treated group achieved a decrease of the O2 dose on day 243 because of improved respiratory function.\n\nBODY.RESULTS.ADVERSE EVENTS (TABLE .HOSPITAL-FREE SURVIVAL (TABLE :\nOf the 12 untreated patients with borderline or less severe PH, 8 were confirmed to have been hospitalized based on the data obtained on the cut-off date. In contrast, of the 12 drug-treated patients with borderline or less severe PH, 2 were confirmed to have been hospitalized based on the data obtained on the cutoff date. 
At the time of survival time analysis, hospital-free survival was 358.87 ± 68.65 days (mean ± SE) (median, 331 days) in the untreated group, which was significantly different from that in the drug-treated group (603.44 ± 50.074 days) as assessed by proportional hazard analysis (hazard ratio of the drug-treated group to the untreated group, 0.19, P = 0.017; log-rank test, P = 0.019; and Wilcoxon test, P = 0.014).\n\nBODY.RESULTS.ADVERSE EVENTS (TABLE .OVERALL SURVIVAL (TABLE :\nOf the 12 untreated patients with borderline or less severe PH, 7 were confirmed dead (event) based on the data obtained on the cut-off date. Of the 12 drug-treated patients with borderline or less severe PH, 1 was confirmed dead with the time to event being 671 days. At the time of survival analysis, the time to event was 433.78 ± 66.98 days (mean ± SE) in the untreated group, which was shown to be significantly different from that in the drug-treated group as assessed by proportional hazard analysis (hazard ratio of the drug-treated group to the untreated group, 0.10, P = 0.0082; log-rank test, P = 0.011; and Wilcoxon test, P = 0.011).\n\nBODY.RESULTS.ADVERSE EVENTS (TABLE .CLINICAL COURSE:\nEight of the 12 untreated patients received LTOT. Of the 12 patients, 1 completed the 2-year treatment period, 2 were still on the study (with one having completed regular examinations up to month 12 and the other up to month 18), and 1 was not available for the periodic assessments from month 18 onwards due to change of the attending physician after change of address. Of the remaining 8 patients, 7 were censored from hospital-free survival analysis due to progression of respiratory failure, including 5 and 1 who were not available for the periodic assessments other than mMRC, 6MWD and TMET from months 6 and 18 onwards, respectively, and were later confirmed dead. One patient was confirmed alive at the time of analysis but was not available for the periodic assessments other than mMRC, 6MWD and TMET from month 12 onwards. The remaining 1 patient developed ileus and was confirmed to have died due to a disease other than lung disease at another hospital and was completely excluded from the periodic assessments from month 6 onwards. As for the periodic assessments with mMRC, TMET and 6MWT, of the 11 patients assessed by mMRC at month 6, 9 each were further assessed at month 12 and 8 were further assessed at month 18, and 7 completed the assessments at month 24, including 1 patient who was assessed at months 18 and 24 by contacting the patient's current physician after change of address. Of the 11 patients assessed by TMET and 6MWT at month 6, 8 were further assessed at month 12, and 7 completed the assessments at months 18 and 24. Seven of the 12 drug-treated patients included in the analysis received LTOT. Four patients completed the study after finishing the assessments at month 48 and 1 of the remaining 8 patients withdrew from the study before month 6 due to hepatic dysfunction. Another patient withdrew from the study due to lung cancer detected on day 518. The last two patients were censored from hospital-free survival analysis due to exacerbation of respiratory failure on days 641 and 303, respectively, with the former confirmed dead on day 671. 
The remaining 4 patients were still on the study treatment (with 1 having completed regular examinations at baseline alone, 1 up to month 6, 2 up to month 12, and 1 up to month 18).\n\nBODY.RESULTS.LUNG FUNCTION AND RHC.DRUG-TREATED PATIENTS WITH BORDERLINE OR LESS SEVERE PH:\nCompared with baseline (Table 1b), significant changes were noted in lung function %DLCO at months 6 and 12 (month 6, +7.011, P = 0.010; month 12, +12.18, P = 0.0025) (See Additional file 9: Figure %DLCO). Compared with baseline (Table 2b), there was a decreasing trend in mPAP at months 6 and 12 although no significant difference was noted (month 6, −2.60, P = 0.098, R = 0.84; month 12, −1.71, P = 0.38, R = 0.83). A similar trend was observed for PVR (month 6, −0.69, P = 0.11, R = 0.88; month 12, −0.41, P = 0.41, R = 0.87). Compared with baseline, there was a significant improvement in mixed venous saturation of oxygen at month 6 (+4.78, P = 0.037, R = 0.45), but no significant change was noted from baseline to month 12. Moreover, significant differences were observed in the drug-treated patients with borderline or less severe PH with regard to changes in mPAP, PVR and PVRI from baseline to month 6 (untreated vs. drug-treated: mPAP, +4.71 vs. -2.60 mmHg, P = 0.0035; PVR, +1.60 vs. -0.69 woods, P = 0.0020) (Fig. 3).Fig. 3Results of RHC. a Comparison of changes in mPAP from baseline to month 6 between untreated and drug-treated patients with borderline or less severe PH. b Comparison of change in PVR from baseline to month 6 between untreated and drug-treated patients with borderline or less severe PH. Significant differences were seen between the untreated and drug-treated patients with borderline or less severe PH with regard to changes in mPAP, and PVR from baseline to month 6 (untreated vs. drug-treated: mean difference in mPAP, 4.71 vs. -2.60 mmHg, P = 0.0035; PVR, +1.60 vs. -0.69 woods, P = 0.0020)\n It will be a long time, however, before comparisons of data can be made between the groups for month 12 onwards, with many untreated patients with borderline or less severe PH having been censored from hospital-free survival analysis with an even smaller number of patients available for the periodic assessments. Results for other assessment parameters (See Additional file 10, Additional file 11: Figure ADL, Additional file 12: Figure TTE, and Additional file 13: Figure Arterial blood analysis). Overall, while drug-treated patients with borderline or less severe PH tended to fare better than untreated patients with borderline or less severe PH, it was difficult to draw any conclusion due to the small number of patients currently available for analysis, especially in untreated patients with borderline or less severe PH. Thus, while the study appears to provide potentially valuable findings at this stage, their relevance and/or validity require to be closely examined when the final data of this trial become available.\n\nBODY.DISCUSSION:\nMany patients with IPF experience a rapid elevation of PAP as well as progression of IPF [7] and elevated PAP is shown to be associated with poor prognosis [7]. Therapies currently available for slowing the progression of fibrosis, such as pirfenidone, cannot be expected to improve IPF [30]. Besides, for any honeycomb lung that has become completely organized, no realistic treatment options are available, other than symptomatic relief with LTOT, to neutralize the progression of respiratory failure. 
In addition, it remains largely unclear how IPF and associated PH may interact to influence each other. Again, in agreement with the ATS/ERS/JRS/ALAT Clinical Practice Guideline 2015 that remains inconclusive with regard to the effect of drugs for PAH on IPH except in a subset of cases, recommending dual ERAs in patients with IPF recommended as \"worthwhile considerations\" and bosentan as \"a conditional recommendation against use\" [31], our study provided no definite clue as to how IPF and associated PH may interact. Furthermore, while our study did not allow PH and lung fibrosis to be examined for any relationship due to its small sample size, it did show no significant correlation between FVC and mPAP, suggesting that how PH and lung fibrosis interact may not readily lend itself to clarification. Against this background, bosentan was shown to be efficacious in a subset of IIP patients in the BULD-1 study, and this is in contrast to the results of a number of randomized controlled trials [10, 32, 33], including the BUILD-3 and ARTEMIS trials, conducted in a wide range of biopsy-proven IPF patients (where the pathology of IPF studied, including the pathologic activity, varied widely, i.e., those with fibrotic IIP, f-NSIP, elevated PAP and decreased CI), which argued against the use of bosentan in patients with PH and f-IIP. Of note, these studies failed to evaluate bosentan in patients perfectly matched for background IPF, suggesting that early intervention with bosentan in those with mildly elevated PAP may not have been sufficiently explored, while the patient background factors may have additively or synergistically contributed to the unfavorable outcomes reported in these studies. With these considerations in mind, we enrolled patients with completely organized honeycomb lung in this study to ensure that all study subjects were as nearly matched for IPF background as possible and that they had pathologically inactive IPF (i.e., they had no active inflammatory lesion, such as GGO) that would require no change of treatment during the study. Since the subjects of this study were required to regularly visit our hospital, patients who presented to our hospital for the first time with symptoms of progressive respiratory failure were enrolled (those without symptoms of progressive respiratory failure were not eligible for treatment at regular intervals at our hospital and were followed up at some other nearby hospital). The present study included IPF patients with completely organized pulmonary fibrosis alone in an attempt to rule out the potential influence of IPF-associated interstitial inflammation and inter-individual differences in IPF status. As a result, the majority of patients enrolled in the study were found to have borderline to less severe PH. While it remains unclear why these patients comprised the majority, in retrospect, ambulatory patient enrollment may have led to those IPF patients who required no emergency or acute intensive care, i.e., those with only honeycomb lesions accompanied by gradually progressive dyspnea, being included in the study, who could therefore represent a selected segment of the entire IPF population. 
Thus, the present bosentan study was conducted in IPF patients with progressive dyspnea despite minimal IPF pulmonary lesion activity, thus ruling out the additive or synergistic influence of IPF lesions on progression of dyspnea to unequivocally demonstrate the impact of therapeutic intervention for PH in progressive dyspnea and to investigate treatment-associated pathophysiological changes in cardiac function using ECG, as well as the long-term efficacy and safety of early therapeutic intervention in PH with bosentan, with borderline PH defined as mean PAP on effort (mPAPOE) ≥ 30 mmHg (including mPAP at rest <25 mmHg). At its early phase involving a small number of patients, this study demonstrated a greater-than-expected significant difference in prognosis between bosentan-treated and untreated patients with progressive respiratory failure who were confirmed to have completely organized honeycomb lung. Based on this finding, we performed an ad hoc interim analysis, which demonstrated that bosentan therapy led to a clearly better prognosis in patients with honeycomb lung suffering from symptoms of progressive respiratory failure than that in untreated patients, which was very poor. At the same time, study results reconfirmed that patients with progressive respiratory failure with completely organized honeycomb lung have a really poor prognosis. Although the possible imbalance in patient characteristics between the two groups may have affected the study results, this study adhered to randomization with the envelope method, thus making such possibility rather unlikely. The main limitation of this study is the small number of patients included in the analysis, but this interim report was prepared following detection of a significant difference in prognosis in patients with borderline or less severe PH treated with bosentan when only half the number of patients targeted had completed the study. Again, with this study still at an exploratory stage, we have no sufficient information to determine the sample size, but an earlier bosentan repeated-dose study (AC-052-111 trial) of patients with PAH (WHO functional class III or above) conducted in Japan provided the rationale for the sample size required (i.e., 11 patients required to conduct a two-sided t-test for AUC with two-sided significance level of 5% and 90% power). Given the current state of clinical trials in this field, the sample size of this study appears to be never too small. It is highly likely that the favorable prognosis seen in the bosentan-treated patients with borderline or less severe PH might have been affected by inclusion of those with IPF experiencing a rapid elevation of PAP (not always seen among the untreated patients with borderline or less severe PH undergoing the periodic assessments). It is also possible, however, that rapid PAP elevation may have led to many patients being excluded from periodic assessments, while slowly progressive PAP elevation may have allowed patients to undergo the periodic assessments, although it is difficult to prove one way or the other. Despite this limitation, however, our study has raised the following possibilities. First, targeting patients with completely organized honeycomb lung and symptoms of progressive respiratory failure might help select patients with rapid PAP elevation. Second, the use of bosentan may be associated with improved prognosis in selected patients similar to those included in this study. 
Moreover, although previous reports have failed to demonstrate a significant difference in prognosis in the entire IPF patient population with any of the drugs tested, as therapeutic options capable of suppressing the pathology of IPF become available in the future, the use of bosentan in combination with any such option might contribute further to improvements in prognosis. In addition, while the study data remain yet to mature at present, the available data demonstrate improvements in exercise tolerance over time in bosentan-treated patients compared with untreated patients thus favoring bosentan therapy, although many untreated patients were censored from hospital-free survival analysis and excluded from the periodic assessments. If these effects on exercise tolerance can be replicated through accumulation of data from continuation of this study, bosentan may have a role to play in protecting against declines in exercise tolerance by working at various levels. Moreover, changes in respiratory conditions among the drug-treated patients suggest that bosentan therapy may have corrected abnormal breath patterns that tended toward hyperventilation. These results suggest a potential role for bosentan in delaying the progression of respiratory failure and that a decrease in respiratory rate from relief of dyspnea may be linked to a trend toward increased PO2, decreased PCO2 and increased PH after exercise tolerance and stress testing, which corresponded to improvements seen in breathing efficiency after stress testing, despite no significant change in 6MWD in which the patients were assessed at the pace each patient felt comfortable. Otherwise, while the study data also suggest improvements in RV function on TTE in patients with borderline or less severe PH treated with bosentan, this finding requires to be examined at the completion of the study (see Additional file 10). Again, while the drug-treated group did not require LTOT or increasing the O2 inhalation dose, the untreated group reached the endpoint of hospital survival in a very short time with progressive dyspnea. Another limitation of the study is that, among the exercises leading to increased PAP during RHC, it included clasping and opening both hands repeatedly with a full strain exerted on the body, which allowed PAP to be monitored but made it rather difficult to continue the exercise during CO measurement thus making PVR, the most critical of all parameters, less amenable to measurement. While the reasons for the observed increases in PAP during exercise in the study can only be surmised, they may include changes in PAP or increases in pulmonary blood flow associated with PVR during exercise [34–36] and intrathoracic pressure associated with straining on the part of the patients being evaluated. A still further limitation of the study is that all patients with lower values of PAP (mPAP <25 or mPAPOE <30) were included in the untreated group, while all patients should have been randomized. Health insurance in Japan made it hardly feasible, however, to design this study as a randomized trial in which patients without PH would also be randomized to bosentan or any other drug specific for PAH. Again, given that patients with IPF might show variable outcomes, e.g., rapid declines, frequent exacerbations, or slow declines, thus representing a heterogeneous population, patients may have had to be enrolled only after their lung function has been shown to be stable for many years. 
However, it was simply unfeasible to enroll patients only after their lung function had been shown to be stable for some time. Instead, we have further refined the definition of eligible patients as those confirmed to have completely organized honeycomb lung and to have no active inflammatory lesion, such as GGO. In addition, the inclusion criteria initially defined the patient age as ranging between 20 and 40 years old to exclude familial pulmonary fibrosis. Given that this could also exclude IPF, however, the patient age may have been better defined as 40 years old or older and may have led to different outcomes. Indeed, a retrospective analysis of all patients enrolled confirmed that they ranged in age between 51 and 80 years old, which has led us to redefine the patient age as 40 years old or older. While all patients confirmed to have no progressive pulmonary fibrosis on CT were given detailed explanations as to the potential adverse effects associated with the use of antifibrotics, such as pirfenidone or nitentanib, which has only recently been launched in Japan and indicated for very few patients, as well as the costs due under current health insurance, none of these patients had previously received any medical treatment for IPF within 3 months prior to their visit and none wished to receive any antifibrotic after the first 3 months or later. Thus, further study is warranted to investigate whether various treatment options, including combination therapy with bosentan and an antifibrotic, may lead to further improvements in prognosis in these patients.\n\nBODY.CONCLUSIONS:\nThis was an interim report of our ongoing long-term study conducted to evaluate the effects of bosentan, as a PAH-specific drug, on IPF-associated PH based on detailed data analysis. Despite its limitations, the study appears to suggest that the bosentan-treated group fared remarkably better than the untreated group, while it was thought likely that those without borderline PH or PH receiving no treatment were associated with poor prognosis, and those with borderline PH or PH receiving bosentan therapy were associated with better prognosis. Again, study findings suggest that there exists a subset of IPF patients who might benefit from bosentan therapy with regard to improvements in IPF and prognosis. The authors plan to prepare a final report after accrual of further patients required to complete the study.\n\nBODY.ADDITIONAL FILES:\n\nAdditional file 1:Supplementary document on subgroup analysis. Supplementary document on patient grouping. (DOCX 12 kb)\nAdditional file 2:Supplementary data on determination of sample size. Data included as a basis for sample size determination in this study. (DOCX 17 kb)\nAdditional file 3:Supplementary document on parameters. Parameters included for evaluation in this study. (DOCX 19 kb)\nAdditional file 4:Figure S1 Methods for and results of Doppler measurements performed in this study. (PPT 154 kb)\nAdditional file 5:Figure S2 TMET (Treadmill exercise test) protocol as part of the supplementary document on parameters. (PPTX 54 kb)\nAdditional file 6:Figure S3 Schedule for evaluation of parameters in this study. (PDF 1240 kb)\nAdditional file 7:Supplementary information on Guidance for Tracleer Tablets® dosage modification. Guidelines for bosentan dose modification as applied in Japan and used in this study. (DOCX 14 kb)\nAdditional file 8:Supplementary data on the criteria for discontinuation of the study in individual patients. 
Criteria for study discontinuation in individual patients used in this study. (DOCX 16 kb)\nAdditional file 9:Assessment of time-course changes in %DLCO in Drug-treated patients with borderline or less severe PH. A summary of results for %DLCO. (PPTX 117 kb)\nAdditional file 10:Supplementary results for other parameters. A summary of results for other parameters in this study. (DOCX 18 kb)\nAdditional file 11:Figure ADL. Comparison of changes in mMRC between drug-treated and untreated patients with borderline or less severe PH; Comparison of changes in TMET between drug-treated and untreated patients with borderline PH or less severe PH. (PPTX 190 kb)\nAdditional file 12:Figure TTE. Change in PA AcT from baseline to month 12 in drug-treated patients with borderline or less severe PH. (PPTX 61 kb)\nAdditional file 13:Figure Arterial blood analysis. a. Time-course change in PaO2 at rest in drug-treated patients with borderline or less severe PH; b. Change in post 6MWT aortic pH in drug-treated patients with borderline PH or less severe PH. (PPTX 225 kb)\nAdditional file 14:Supplementary data on procedures regarding informed consent. Procedures for informed consent. (DOCX 14 kb)\nAdditional file 15:Supplementary information regarding compensation in case of trial-related injury or death. Compensation or indemnity for injury or death. Scheme for compensation or indemnity for injury or death in this study. (DOCX 14 kb)\nAdditional file 16:Supplementary information regarding medical expenses. Medical expenses anticipated in this study including those to be borne by subjects. (DOCX 14 kb)\n\n**Question:** Compared to no treatment what was the result of bosentan on hospital-free survival of patients with pulmonary hypertension (PH) secondary to idiopathic pulmonary fibrosis (IPF)?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
877
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: How to avoid discontinuation of antihypertensive treatment. The experience in São Paulo, Brazil\n\n ABSTRACT.OBJECTIVES:\nTo evaluate the importance of providing guidelines to patients via active telephone calls for blood pressure control and for preventing the discontinuation of treatment among hypertensive patients.\n\nABSTRACT.INTRODUCTION:\nMany reasons exist for non-adherence to medical regimens, and one of the strategies employed to improve treatment compliance is the use of active telephone calls.\n\nABSTRACT.METHODS:\nHypertensive patients (n = 354) who could receive telephone calls to remind them of their medical appointments and receive instruction about hypertension were distributed into two groups: a) \"uncomplicated\" – hypertensive patients with no other concurrent diseases and b) \"complicated\" - severe hypertensive patients (mean diastolic ≥110 mmHg with or without medication) or patients with comorbidities. All patients, except those excluded (n = 44), were open-block randomized to follow two treatment regimens (\"traditional\" or \"current\") and to receive or not receive telephone calls (\"phone calls\" and \"no phone calls\" groups, respectively).\n\nABSTRACT.RESULTS:\nSignificantly fewer patients in the \"phone calls\" group discontinued treatment compared to those in the \"no phone calls\" group (4 vs. 30; p<0.0094). There was no difference in the percentage of patients with controlled blood pressure in the \"phone calls\" group and \"no phone calls\" group or in the \"traditional\" and \"current\" groups. 
The percentage of patients with controlled blood pressure (<140/90 mmHg) was increased at the end of the treatment (74%), reaching 80% in the \"uncomplicated\" group and 67% in the \"complicated\" group (p<0.000001).\n\nABSTRACT.CONCLUSION:\nGuidance to patients via active telephone calls is an efficient strategy for preventing the discontinuation of antihypertensive treatment.\n\nBODY.INTRODUCTION:\nMost patients with hypertension do not benefit from treatment.1 Data from NHANES/NCHS 1999–2004 showed that 71.8% of individuals were aware of their condition, 61.4% were under current treatment and 35.1% had it under control.2 Although the percentage of individuals with controlled blood pressure in Brazil is unknown, we estimate it to be low because the percentage of patients with their high blood pressure under control is around 30-35% at outpatient clinics that specialize in hypertension, and these clinics do not represent the national reality.3 Many reasons exist for non-adherence to medical regimens, including adverse drug effects, poor instructions, poor provider-patient relationship, poor memory, a patient's disagreement with the need for treatment or inability to afford medication.4,5 One of the strategies employed to improve treatment compliance is the use of active telephone calls6 to patients through which they are given guidance and treatment questions are answered. Using this strategy, several studies have shown an increase in treatment compliance, a higher percentage of blood pressure control6-8 and a decrease in mortality.9-11 Another strategy that aims at improving treatment compliance is the employment of two low-dose medications. Evidence suggests that the combination of two antihypertensive agents provides a higher percentage of blood pressure control due to complementary mechanisms of action. In addition, this results in better tolerability and consequently, better treatment compliance.12-14 The ASCOT study15 (Anglo-Scandinavian Cardiac Outcomes Trial) has indicated a significant reduction in all causes of mortality with the use of current medications for treating hypertension: calcium channel antagonists and angiotensin-converting enzyme inhibitors. It has also been demonstrated that a high incidence of adverse events may decrease treatment compliance. However, in the LIFE16 study, angiotensin receptor antagonists proved to be better at reducing the risks related to cardiovascular events and they resulted in a lower rate of treatment discontinuation when compared to a beta-blocker. In view of these data, we evaluated the importance of providing guidelines to patients via active telephone calls for blood pressure management and preventing treatment discontinuation in hypertensive patients. We used two treatment regimens with low-dose medications that were offered for free to avoid the influence of financial factors. We opted for one regimen called \"traditional\" based on diuretics and beta-blockers and another called \"current\" based on angiotensin II antagonists and calcium channel blockers.\n\nBODY.METHODS:\nThe study participants were selected at the Hypertension Unit, University of São Paulo General Hospital, Nephrology Division, University of São Paulo School of Medicine. The patients had essential hypertension and were able to receive telephone calls to be reminded of their medical appointments and be given guidance about hypertension. Patients were of both genders, from diverse ethnic backgrounds, over 18 years of age, and had body mass indices below 40 kg/m2. 
The patients were enrolled in the study after signing a free and informed consent form. The study was approved by the Ethics Committee of the Board of Directors of the University of São Paulo School of Medicine. Patients were excluded on several bases, including blood pressure <140/90 mmHg without antihypertensive medication, secondary hypertension, white-coat hypertension with systolic pressures ≥140 mmHg and/or diastolic pressures ≥90 mmHg at the doctor's office and awake mean systolic pressures <135 mmHg or awake mean diastolic pressures <85 mmHg without antihypertensive medication, malignant hypertension and patients with a previous history of hypersensitivity reaction to the study medications. Other exclusion criteria were as follows: pregnant women or nursing mothers, the presence of liver dysfunction evidenced by the patient's clinical history or by one of the liver function tests (levels twice the normal values for alkaline phosphatase, total bilirubin, aspartate aminotransferase), patients with clinical conditions that might interfere with the total conformity with the study or those who might have an increased risk for participating in the study, patients with a history of alcoholism, drug abuse or mental disorders that might invalidate the free and informed consent or limit the patient's ability to meet the protocol rules and patients who had participated in any other studies involving investigational drugs or drugs already marketed within the previous month before enrollment in this study or concomitantly with this study.\n\nBODY.METHODS.BLOOD PRESSURE MEASUREMENT:\nBlood pressure and heart rate measurements were performed on the right upper limb on sitting patients five times by the nursing staff using an appropriately sized cuff with a validated automatic oscillometric device (Dixtal, DX2710, São Paulo, Brazil).17 The mean of the last two measurements was calculated and recorded if the difference between these measurements was less than 4 mmHg. If after the five measurements the difference between the last two was greater than 4 mmHg, the measurement was repeated until the difference between the two measurements was less than 4 mmHg. During the study, blood pressure measurements were always performed in the afternoon by the same nursing staff using the same device. The diagnosis of hypertension was made when the mean values of the last two measurements were as follows: systolic pressure ≥140 mmHg and/or diastolic pressure ≥90 mmHg with or without medication at the initial visit (Visit 0). Patients who were receiving antihypertensive medication at the initial visit and had a systolic pressure <140 mmHg or a diastolic pressure <90 mmHg were re-evaluated eight weeks after discontinuation of their medication and introduction of placebo; these patients were included in the study when the mean values included a systolic pressure ≥140 mmHg and/or a diastolic pressure ≥90 mmHg. Controlled blood pressure was defined in two levels as (with the patient in the seated position) a systolic/diastolic pressure less than 140/90 or 120/80 mmHg. All of the patients underwent Ambulatory Blood Pressure Monitoring (ABPM), which was performed with a validated oscillometric device (SpaceLabs 90207, SpaceLabs Inc, Richmond, WA, USA),18,19 five weeks after the start of the placebo to eliminate cases of patients with white-coat hypertension and, in patients who did not receive placebo, to identify the white-coat effect. 
According to clinical characteristics, the patients were assigned to two groups: a) \"uncomplicated\"– hypertensive patients without complications and without other concurrent diseases and b) \"complicated\"- patients with severe hypertension (mean diastolic pressure ≥110 mmHg with or without medication) or comorbidities such as diabetes mellitus, renal failure (serum creatinine >1.4 mg/dL), coronary insufficiency, congestive heart failure or prior history of cerebrovascular accident.\n\nBODY.METHODS.PATIENT RANDOMIZATION:\nAll patients, from both the complicated and the uncomplicated groups, were open-block randomized to receive active telephone calls (\"phone calls\" group) or not to receive telephone calls (\"no phone calls\" group) and to follow two treatment regimens, \"traditional\" or \"current\". Thus, after the first visit and randomization, patients from the \"phone calls\" group were invited to enroll by telephone in a program called \"Biosintética Assistance\", supported by Biosintética Laboratory. The patients who subscribed started receiving active telephone calls from appropriately trained operators and also started receiving magazines with health-related information, which were sent periodically by mail. There were six contacts by telephone during the study. During phone calls, the patient was reminded to attend the next visit, and he/she was educated about hypertension and any necessary clarifications about his/her treatment. All patients randomized to the \"phone calls\" group were invited to attend occasional informative lectures with the participation of a multidisciplinary team. At the initial stage of treatment (following eight weeks of treatment with placebo), the \"uncomplicated\" group received one of the following treatment regimens: a) \"traditional\" treatment with 6.25 mg 2x/day hydrochlorothiazide and 25 mg 2x/day atenolol; b) \"current\" treatment with 25 mg 2x/day losartan and 2.5 mg 2x/day amlodipine. If the blood pressure could not be controlled during the visits, the medication doses were doubled or another antihypertensive was added. The \"complicated\" group did not undergo the treatment period with placebo and was randomized to receive either \"traditional\" or \"current\" drug regimens similar to the ones administered to the \"uncomplicated\" group, though the specifics of each condition were carefully considered. The addition of other antihypertensive agents in the \"uncomplicated\" group and the specificities of the patient regimens in the \"complicated\" group were performed according to the guidelines from the V Brazilian Guidelines on Arterial Hypertension.3 All patients were instructed to take their medication every day at 7:00 AM and 7:00 PM, with a variation of up to one hour. A 12-month supply of the necessary medication was supplied to the patients by the physician at the end of the visit at no cost to eliminate the financial factor in this analysis; patients were given enough medication to last between visits. Patients were instructed to bring the remaining pills to their subsequent visits, at which time they were counted by the nursing staff without the patients' knowledge of this procedure. Doctors' visits, preceded by the nursing staff visit, took place every eight weeks for 56 weeks and included measurements of blood pressure, heart rate and weight. The weight was checked with the patient wearing light clothes standing barefoot on a scale (model 2096PP, Toledo do Brasil, São Paulo, Brazil). 
Study withdrawal was characterized by non-attendance to appointments for up to three months after the scheduled date. Patients who returned to the medical clinic within three months after the scheduled date were allowed to continue in the study and be evaluated in an unscheduled visit. The tests performed during the placebo treatment and after 40 weeks of active treatments included the following: fasting glucose, urea, creatinine, total cholesterol, fractions of cholesterol, triglycerides, uric acid, total bilirubin, creatine phosphokinase (CPK), sodium (Na+), potassium (K+), hemoglobin, thyroid-stimulating hormone (TSH), alkaline phosphatase, aspartate aminotransferase (AST), alanine aminotranferase (ALT) and urinary excretion of sodium in 24 h.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nData were analyzed using an analysis of variance (ANOVA) according to the factor of controlled or uncontrolled hypertension. P<0.05 was considered statistically significant.\n\nBODY.RESULTS:\nThe study included 398 patients. Of those, 44 were excluded for the following reasons: a) lack of enrollment in the Biosintética Assistance program (n = 17); b) body mass index >40 kg/m2 (n = 6); c) secondary hypertension (n = 4); d) white-coat hypertension (n = 3); e) white-coat normotension (\"masked hypertension\") (n = 3); f) alcohol use (n = 2); g) classification error (n = 4); h) refusal to do ABPM (n = 2); i) loss of blood pressure measurements from the first visit (n = 1); j) pregnancy (n = 1); k) death (n = 1). A total of 354 patients were evaluated. There were no statistically significant differences in age, gender, skin color, body mass index, blood pressure or heart rate between the groups classified as: \"complicated\" (n = 175) and \"uncomplicated\" (n = 179); \"traditional\" (n = 176) and \"current\" (n = 178); \"phone calls\" (n = 108) and \"no phone calls\" (n = 246) (Table 1). In addition, no differences were seen in marital status, education or occupation.\n\nBODY.RESULTS.A) CONTROL OF BLOOD PRESSURE:\nA marked reduction in blood pressure was seen in patients from both the \"complicated\" and \"uncomplicated\" groups (Table 2). The blood pressure was reduced by 19±10/20±14 mmHg in the \"uncomplicated\" group and by 18±14/22±17 mmHg (systolic/diastolic) in the \"complicated\" group (p>0.05; n = 354). The \"phone calls\" and \"no phone calls\" groups showed significantly lower blood pressures at the end of treatment (p< 0.00001). The \"traditional\" and \"current\" groups showed significantly lower blood pressures at the end of treatment (Figure 1). The percentage of patients with controlled blood pressure (<140/90 mmHg) was also high at the end of treatment (74%). On the other hand, the percentage of patients with blood pressure reduced to <120/80 mmHg was only 29%. There was no difference in the percentage of patients with controlled blood pressure between the \"phone calls\" and \"no phone calls\" groups or in the reduction of blood pressure and the percentage of patients with controlled blood pressure among the \"traditional\" and \"current\" treatments. However, patients in the \"phone calls\" group (80%) had better blood pressure control than those in the \"no phone calls\" group (71%), though the difference was not statistically significant. The \"uncomplicated\" and \"complicated\" groups had statistically significant differences in the percentage of patients with controlled blood pressure (<140/90 mmHg) (80% vs. 67%, respectively; p<0.000001). 
At the next to last study visit (visit 7), 90% and 66% of patients had blood pressure measurements <140/90 mmHg in the \"uncomplicated\" and \"complicated\" groups, respectively. However, there was no difference in the percentage of patients with blood pressure <120/80 mmHg (31% on the final visit in both groups, Table 3). Among patients with a blood pressure <140/90 mmHg at the final visit, only 3% had received only one type of antihypertensive medication; most patients (56%) received 2 (34%) or 3 (22%) types of antihypertensive medication (Table 4).\n\nBODY.RESULTS.B) TREATMENT DISCONTINUATION:\nA significantly lower number of patients in the \"phone calls\" group quit the treatment compared to the \"no phone calls\" group (4 vs. 30, respectively; Figure 2). There was no difference in the percentage discontinuation in the \"complicated\" and \"uncomplicated\" groups or in the \"current\" and \"traditional\" treatment regimens (p>0.05).\n\nBODY.RESULTS.C) TABLET COUNT:\nPatients in the groups: \"complicated\" and \"uncomplicated\"; \"traditional\" and \"current\"; \"phone calls\" and \"no phone calls\" did not show statistically significant differences in medication intake, as verified by counting the tablets returned at the visits. Compliance was over 85% at all visits. However, patients in the current + uncomplicated + phone call group had the highest rate of compliance (93%). The lowest rate of compliance occurred in the traditional + uncomplicated + no phone call group (85%) (p>0.05).\n\nBODY.RESULTS.D) BODY WEIGHT:\nBlood pressure reductions occurred in the presence of increased body weight (baseline visit: 73±14 kg; final visit: 74±14 kg, p = 0.0008, Table 5).\n\nBODY.RESULTS.E) ADVERSE EVENTS AND LABORATORY TESTS:\nMost of the adverse events that occurred during the study were of light or moderate intensity. Patients in the \"traditional\" group showed significantly more depression symptoms (5 vs. 0) and coughing (30 vs. 4) when compared with patients in the \"current\" treatment group. On the other hand, patients in the \"current\" group had a greater number of complaints of dizziness when compared to patients in the \"traditional\" treatment group (63 vs. 42, respectively). Compared to patients in the \"uncomplicated\" group, patients in the \"complicated\" group presented a larger number the following symptoms: abdominal pain (19 vs. 8), sleepiness (32 vs. 10), cough (23 vs. 11), weakness (23 vs. 11) and blurred vision (6 vs. 0). Conversely, compared to patients in the \"complicated\" group, patients in the \"uncomplicated\" group presented a larger number of complaints such as pain (74 vs. 53) and headache (80 vs. 59). Patients in the \"phone calls\" group reported a 34% rate of symptoms, while a 66% rate of symptoms was reported by patients in the \"no phone calls\" group. The most frequently (>10%) found adverse events were as follows: headache (62.1%), unspecific pain (58.2%), dizziness (45.8%), edema (40.1%), fatigue (15.3%), sleepiness (14.4%), cough (13.3%), precordial pain (13.3%), weakness (11.9%), tachycardia (11.3%), insomnia (10.5%) and paresthesia (10.5%). There were 26 (8%) severe adverse events: death (5), unstable angina (4), acute myocardial infarction (1), transient ischemic attack (1), hypertensive encephalopathy (1), breast cancer (3), bronchospastic crisis (1), cholecystopathy (1), diabetic decompensation (1), syncope (1), nephrectomy (2), trauma (2), peritoneal bypass (1), diarrhea (1) and gastrectomy (1). 
The results of the laboratory tests performed in the beginning and end of the study are shown in Table 6. There was an increase in the 24-hour urinary sodium excretion from the beginning to the end of the study (120±46 vs. 129±45 mEq/L, respectively; p = 0.000036)\n\nBODY.DISCUSSION:\nAdherence to a medication regimen is generally defined as the extent to which patients take medications as prescribed by their health care providers.20 This can be classified into three aspects of the individual's behavior regarding his/her health: 1) take the medication correctly, 2) follow the professionals' instructions related to diets and lifestyle changes and 3) attend medical visits.21 The discontinuation of medication is considered the most serious type of noncompliance with treatment. In our study, the \"phone calls\" group showed a significantly lower percentage of patients who discontinued treatment compared with the \"no phone calls\" group. A meta-analysis of randomized studies22 suggested that treatment discontinuation is effectively avoided by contacting the patient using letters, telephone calls or e-mails. Márques Contreras et al.6 evaluated the efficacy of telephone and e-mail intervention for therapeutic compliance among 538 patients with mild to moderate hypertension and verified that the group that received phone calls showed higher compliance rates (96.2%) than the group that was only contacted by e-mail (91.3%) or the group that did not receive any intervention (69.2%). Similar data were found in a controlled study recently conducted by Bosworth et al.7 in which the self-reported medication adherence was shown to increase by 9% in the group that had received behavioral and educational intervention (319 hypertensive subjects) through telephone calls versus 1% in the group that had not received intervention (n = 317). Friedman et al.23, through telephone calls associated with usual medical care for six months, compared hypertensive patients who received their usual medical treatment with those who used a monitoring and counseling system. Participants with compliance below 80% of the prescribed medication prior to enrollment in the study showed better compliance with the treatment when the telephone monitoring system was used than did patients who did not use the system (36% vs. 26%, respectively). On the other hand, participants with a compliance level above 80% prior to enrollment in the study did not show any change in compliance, and compliance was comparable in users and non-users of the telephone monitoring system. In our study, regardless of the group to which they were randomized, patients showed compliance rates above 85% at all visits. However, we do not have data regarding medication compliance before enrollment in the study. It is important to point out that treatment compliance is directly related to mortality. A study conducted by Wu et al.9 randomized 442 non-compliant patients who took five or more medications for chronic illnesses to either receive or not receive telephone counseling. The objective was to investigate the influence of this advice for treatment compliance and patient mortality. The study verified that telephone counseling was associated with a 41% reduction of death risk after two years. These authors used scores to classify the levels of compliance of 1011 patients, and they showed that the lower the level of compliance, the higher the risk of mortality in the long term. 
The influence of the initial selection of antihypertensive medication with regard to compliance with treatment was evaluated by Monane et al.24 The authors verified that the use of newer antihypertensive agents, such as angiotensin converting enzyme inhibitors and calcium channel antagonists, was associated with a compliance ≥80% when compared with thiazide diuretics in patients with cardiac morbidities and multiple visits to physicians. This may have been the result of better tolerability to newer classes of antihypertensive agents. However, in our study, there was no difference in tolerability between the \"traditional\" and \"current\" groups, which could explain the similar compliance rates in both groups. On the other hand, just like in Monane et al.,24 we found that fewer patients in the \"complicated\" group had their blood pressure under control in the final visit (67%), whereas in the next to last study visit, 90% of patients in the \"uncomplicated\" group and only 66% in the \"complicated\" group had blood pressure levels below 140/90 mmHg. This may be due to comorbidities and the use of other concurrent treatments. The groups: \"complicated\" and \"uncomplicated\"; \"traditional\" and \"current\"; \"phone calls\" and \"no phone calls\" showed significant reductions in blood pressure from the randomization visit to the final visit, as well as a high percentage of patients with good blood pressure control (BP <140/90 mmHg) in the total group (74%), regardless of an increase in body weight throughout the study. In our study, the high percentage of patients with controlled blood pressure at the end of treatment can be explained by the simple drug regimen, the simple drug acquisition provided by the physician at no cost at the end of each visit, a fixed team of physicians and nurses throughout the study period and the easy access to team members in cases of unexpected occurrences by means of unscheduled visits. We observed that in the next-to-last study visit, the \"uncomplicated\", \"current\", \"traditional\", \"phone calls\" and \"no phone calls\" groups had more patients with controlled blood pressure when compared to the last visit. It is possible that during the last visit the patients were worried about continuing the treatment and receiving the medications. Therefore, guidance provided to patients by means of active telephone calls, brochures and group workshops with healthcare professionals is an efficient strategy for reducing treatment discontinuation, the most severe type of noncompliance with treatment.\n\nBODY.DISCUSSION.PERSPECTIVES:\nIn an attempt to reduce the determinants of non-adherence to antihypertensive treatment in a developing country, we demonstrated that better blood pressure control was obtained through doctor's visits every two months with the same physician preceded by nursing staff visits with patients receiving all medication necessary for their treatment; this benefit was independent of comorbidities and the type of treatment used. 
Guidance provided to the patients through active telephone calls, brochures, group workshops with healthcare professionals and donation of all the required medications significantly reduced the treatment discontinuation rates.\n\nBODY.DISCUSSION.SOURCES OF FUNDING:\nThis study received support from \"Biosintetica Assistance\" and Biosintética Laboratory, which donated all of the required medication.\n\n**Question:** Compared to the \"no phone calls\" group, what was the result of the \"phone calls\" group on the percentage of patients with controlled blood pressure (<140/90 mmHg)?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
884
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Adjuvant radiation for salivary gland malignancies is associated with improved survival: A National Cancer Database analysis\n\n ABSTRACT.OBJECTIVE:\nThere are no randomized data to support the use of postoperative radiation for salivary gland malignancies. This study uses the National Cancer Database (NCDB) to describe the epidemiology of salivary gland cancer patients and to investigate whether treatment with adjuvant radiation improves overall survival.\n\nABSTRACT.METHODS AND MATERIALS:\nA total of 8243 patients diagnosed with a major salivary gland cancer were identified from the NCDB. All patients received primary surgical resection of their malignancy. Patients were risk-stratified by adverse features, and overall survival rates were determined. Patients were considered high risk if they had extracapsular extension and/or positive margin after resection. Patients were considered intermediate risk if they did not meet the criteria for high risk but had pT3-T4 disease, pN+ disease, lymphovascular space invasion, adenoid cystic histology, or grade 2-3 disease. Patients who did not meet criteria for high or intermediate risk were considered low risk. Overall patient demographics, disease characteristics, treatment factors, and outcomes were summarized with descriptive statistics and analyzed with STATA.\n\nABSTRACT.RESULTS:\nMedian follow-up in this cohort was 42.4 months, with the median age of 58 years. Patients in the high-risk group had greater survival (hazard ratio [HR], 0.76; P = .002; 95% confidence interval [CI], 0.64-0.91) if they received adjuvant radiation therapy. In contrast, patients in the intermediate- (HR, 1.01; P = .904; 95% CI, 0.85-1.20) and low-risk groups (HR, 0.85; P = .427; 95% CI, 0.57-1.26) did not experience a survival benefit with adjuvant radiation therapy.\n\nABSTRACT.CONCLUSIONS:\nThis large analysis compared survival outcomes between observation and adjuvant radiation alone in risk-stratified patients after resection of major salivary glands using a national database. The use of adjuvant radiation for high-risk major salivary gland cancers appears to offer a survival benefit. Although an overall survival benefit was not seen in low- and intermediate-risk salivary gland cancers, this study could not address impact on local control because of the limitations of the NCDB.\n\nBODY:\nSummaryWe aimed to investigate whether adjuvant radiation improves overall survival in major salivary gland cancers. Furthermore, we wanted to identify which subset of patients benefits the most from receiving radiation. A total of 8243 patients with salivary gland cancer were identified from the National Cancer Database. 
Patients in the high-risk group had greater survival (hazard ratio, 0.76; P = .002) if they received adjuvant radiation therapy.\n\nBODY.INTRODUCTION:\nMalignant tumors of the salivary gland are rare cancers that represent less than 5% of all newly diagnosed head and neck malignancies.1 These tumors arise from malignant transformation of the cellular constituents of the secretory acini, their ducts, and supporting myoepithelial cells in both major and minor salivary glands. The World Health Organization classifies 24 subtypes of salivary gland cancers (SGCs), with mucoepidermoid carcinoma, adenoid cystic carcinoma, and adenocarcinoma being the most common histologic subtypes.2 The management of SGCs remains a challenge because of their heterogeneous nature, diverse biological behaviors, and low prevalence. In patients without evidence of distant hematogenous metastases, surgery remains the definitive treatment of choice in those with salivary gland malignancies. Outcomes after surgery in early-stage disease are excellent. A retrospective series of patients treated from 1997 to 2002 for parotid gland carcinomas demonstrated 5-year disease-free survival of 86%, with inferior disease-free survival with advanced disease stage according to the 2002 American Joint Committee on Cancer classification.3 Risk factors for disease recurrence were examined in a retrospective cohort of 565 patients treated for malignant salivary gland tumors in the Netherlands.4 The risk of local recurrence was increased in patients with T3 and T4 tumors, incomplete resection, and bone invasion. Regional recurrence was predicted by facial nerve weakness and positive margins on neck dissection, and the risk of distant metastasis was higher among patients with a T3-4, N2-N3 disease, and/or perineural invasion (PNI). There is no randomized prospective data in the literature supporting the use of postoperative radiation therapy (RT) in this patient population. The largest study completed regarding adjuvant RT for SGCs investigated 2170 patients from the Surveillance, Epidemiology, and End Results (SEER) registry. They found that postoperative adjuvant RT is associated with improved overall survival for high-risk disease (defined as poorly differentiated, undifferentiated, and/or locally advanced T3/4 or N+).5 Another SEER database study, however, investigated 1241 cases of parotid acinic cell carcinoma and found that adjuvant RT did not provide a tangible overall survival benefit in this population.6 Although SEER offers many potential benefits, there are limited data regarding radiation dose and treatment details such as fraction number, target, and pathological information, which may account for conflicting results seen in SGCs. The National Cancer Database (NCDB) is a database that offers more robust information regarding pathological and treatment information for patients and thus can allow for more detailed studies of underlying patient characteristics important for prognosis. 
Most data regarding adjuvant radiation for SGCs are limited to retrospective series that describe improved local control rates compared with surgical resection alone.4, 7, 8 Registry data and single-institution series have consistently demonstrated improving survival among patients with this disease over time, attributed partly to improved surgical techniques and staging techniques from imaging, but largely resulting from the widespread adoption of postoperative RT.5, 9, 10, 11 Despite the absence of compelling supporting prospective data, postoperative RT is considered a standard of care for patients with high-risk features after resection. This study uses one of the largest national SGC databases to investigate whether postoperative RT is associated with an overall survival benefit.\n\nBODY.METHODS.DATA SOURCE:\nThe NCDB is supported by the Commission on Cancer and contains deidentified demographic and clinical patient data on approximately 70% of the cancers diagnosed in the United States annually. We examined patients aged 18 to 90 years with salivary gland malignancy diagnosed from 2004 through 2013.\n\nBODY.METHODS.PATIENT DEMOGRAPHICS:\nPatient demographics (age, race, sex, facility type, and insurance), disease characteristics (primary site, TNM staging), histology, pathologic features, and treatment variables were investigated as predictors of overall survival. Facility type was separated into academic and nonacademic centers (combination of community cancer programs, comprehensive community cancer programs, and other programs). Insurance status was divided into private and government (Medicare and Medicaid) insurance.\n\nBODY.METHODS.INCLUSION AND EXCLUSION CRITERIA:\nWe excluded patients with metastatic disease, in situ disease only, those who did not undergo surgical resection, and those who had unknown overall staging. Clinical staging was used to exclude metastatic patients (coded as cM0 in the NCDB). Furthermore, pathologic stage was required because patients with pathologic metastases would not have been surgical candidates. Patients that received neoadjuvant chemotherapy and/or RT were excluded. Among the patients receiving RT, we excluded those receiving palliative doses (<50 Gy), restricting the doses analyzed from 50 to 70 Gy. We excluded any patients undergoing RT >90 days after resection. We focused our analysis on the most frequently represented histologies only, which included adenocarcinoma, mucoepidermoid carcinoma, adenoid cystic carcinoma, acinar cell carcinoma, epithelial-myoepithelial carcinoma, and carcinoma ex pleomorphic adenoma. After these exclusions, displayed in Figure 1, there were 8243 patients with surgically managed localized salivary cancer of major histologic subtype.Figure 1Consolidated Standards of Reporting Trials flow diagram. NCDB, National Cancer Database; RT, radiation therapy.\n\nBODY.METHODS.RISK STRATIFICATION:\nPatients were stratified into 3 risk groups based on clinical and pathological criteria. Patients were considered high risk if they had extracapsular extension (ECE) and/or positive margin after resection, which is the standard definition of high-risk disease in head and neck cancers. Patients were considered intermediate risk if they did not meet criteria for high risk but had pT3-T4 disease, pN+ disease, lymphovascular space invasion (LVSI), adenoid cystic histology, and/or grade 2-3 disease. 
Patients who did not meet criteria for high or intermediate risk were considered low risk.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nOverall patient demographics, tumor characteristics, treatment factors, and outcomes were summarized by descriptive statistics. The comparisons between the groups were performed with the use of Student t test for continuous measures and the χ2 test for categorical measures. Overall survival was determined with the use of Kaplan-Meier methods. The hazard ratios (HRs) were estimated with the use of Cox proportional-hazard models. The multivariate Cox proportional hazards model is displayed in eTable 1 (available as supplementary material online only at www.advancesradonc.org). Two-sided P values < .05 were considered to indicate statistical significance. All statistical analyses were performed using STATA software, version 14.0 (StataCorp).\n\nBODY.RESULTS.PATIENT CHARACTERISTICS:\nFor the total cohort of 8243 patients, the median follow-up time was 42.4 months (range, 0-119) and median age was 58 years. After stratifying this cohort into risk groups based on disease characteristics, there were 2489 low-risk, 3305 intermediate-risk, and 2449 high-risk patients. Table 1 displays detailed cohort characteristics and demographic information, separated based on risk group. Table 2 further elaborates on the adverse features within the intermediate-risk group. Within the high-risk group (n = 2449), 2303 (94.0%) had positive margin and 243 (9.9%) had extracapsular extension.Table 1Patient demographicsTotal cohortLR (n = 2489)IR (n = 3305)HR (n = 2449)P value (LR vs IR)∗ (LR vs HR)∗ (IR vs HR)∗Age (n = 8243)<.001 <60 y4515 (54.8%)1572 (63.2%)1711 (51.8%)1232 (50.3%)<.001 ≥60 y3728 (45.2%)917 (36.8%)1594 (48.2%)1217 (49.7%).82Sex (n = 8243)<.001 Male3764 (45.7%)962 (38.7%)1628 (49.3%)1174 (47.9%)<.001 Female4479 (54.3%)1527 (61.4%)1677 (50.7%)1275 (52.1%).97Race (n = 8218) White6791 (82.6%)2016 (81.1%)2725 (82.7%)2050 (84.1%).053 Black929 (11.3%)308 (12.4%)375 (11.4%)246 (10.1%) Asian301 (3.7%)87 (3.5%)124 (3.8%)90 (3.7%) Other197 (2.4%)74 (2.4%)70 (2.1)53 (2.2%)Ethnicity (n = 8243) Non-Hispanic7268 (88.2%)2201 (88.4%)2913 (88.1%)2154 (88.0%) Hispanic429 (5.2%)132 (5.3%)179 (5.4%)118 (4.8%).55 Other546 (6.6%)156 (6.3%)213 (6.4%)177 (7.2%)Facility type (n = 8243).08 Academic4765 (57.8%)1447 (58.1%)1826 (55.3%)1492 (60.9%).14 Nonacademic3478 (42.2%)1042 (41.9%)1479 (44.8%)957 (39.1%)<.001Insurance Type (n = 8,243) Private4477 (54.3%)1544 (62.0%)1697 (51.4%)1236 (50.5%)<.001 Medicare/Medicaid3363 (40.8%)831 (33.4%)1429 (43.2%)1103 (45.0%)<.001 Uninsured403 (4.9%)114 (4.6%)179 (5.4%)110 (4.5%).40Primary site (n = 8243) Parotid7440 (90.3%)2389 (96.0%)2890 (87.4%)2161 (88.2%)<.001 Submandibular578 (7.0%)60 (2.4%)293 (8.9%)225 (9.2%)<.001 Sublingual76 (0.9%)10 (0.4%)44 (1.3%)22 (0.9%).37 Other149 (1.8%)30 (1.2%)78 (2.4%)41 (1.7%)HR, high-risk group; IR, intermediate-risk group; LR, low-risk group.∗Bonferroni adjustment for multiple comparison was applied.Table 2Adverse features within intermediate risk groupAll patients (n = 3305)RT (n = 1741)No RT (n = 1561)pT3643 (19.5%)363 (20.8%)280 (17.9%)pT4408 (12.3%)271 (15.6%)137 (8.8%)pN+814 (24.6%)498 (28.6%)316 (20.2%)Lymphovascular space invasion234 (7.1%)153 (8.8%)81 (5.2%)Intermediate grade (2)1218 (36.9%)467 (26.8%)751 (48.1%)High grade (3)780 (23.6%)531 (30.5%)249 (16.0%)Undifferentiated grade227 (6.9%)162 (9.3%)65 (4.2%)Adenoid cystic carcinoma histology664 (20.1%)412 (23.7%)252 (16.1%)\n\nBODY.RESULTS.DISEASE 
CHARACTERISTICS:\nThe majority of patients presented with disease in the parotid gland. A total of 7440 (90.3%) of the total cohort had a primary parotid gland malignancy. The most common disease histology was mucoepidermoid carcinoma, with 3298 (40.0%) cases in the cohort. Remaining disease histology included acinar cell carcinoma (1938 patients, 23.5%), adenoid cystic carcinoma (1188 patients, 14.4%), adenocarcinoma (1044 patients, 12.7%), carcinoma ex pleomorphic adenoma (445 patients, 5.4%), and epithelial-myoepithelial carcinoma (330 patients, 4.0%). Disease primary sites are shown in Table 1. Histology distribution, further TN staging, and disease grades are shown in Table 3. Patients commonly presented with T1 (3182 patients, 39.3%) and N0 disease (6691 patients, 83.5%).Table 3Disease characteristicsTotal cohortLR (n = 2489)IR (n = 3305)HR (n = 2449)P value (LR vs IR)∗ (LR vs HR)∗ (IR vs HR)∗Primary site (n = 8243) Parotid7440 (90.3%)2389 (96.0%)2890 (87.4%)2161 (88.2%)<.001 Submandibular578 (7.0%)60 (2.4%)293 (8.9%)225 (9.2%)<.001 Sublingual76 (0.9%)10 (0.4%)44 (1.3%)22 (0.9%).37 Other149 (1.8%)30 (1.2%)78 (2.4%)41 (1.7%)Histology (n = 8243) Adenocarcinoma1044 (12.7%)115 (4.6%)521 (15.8%)408 (16.7%)<.001 Adenoid cystic1188 (14.4%)0 (0%)664 (20.1%)524 (21.4%)<.001 Mucoepidermoid3298 (40.0%)1043 (41.9%)1412 (42.7%)843 (34.4%)<.001 Acinar cell1938 (23.5%)1058 (42.5%)410 (12.4%)470 (19.2%) Epithelial-myoepithelial carcinoma330 (4.0%)147 (5.9%)105 (3.2%)78 (3.2%) Carcinoma ex pleomorphic adenoma445 (5.4%)126 (5.1%)193 (5.8%)126 (5.1%)Grade (n = 8243) 12117 (25.7%)1382 (55.5%)329 (10.0%)406 (16.6%)NA 21682 (20.4%)0 (0%)1218 (36.9%)464 (19.0%) 31394 (16.9%)0 (0%)780 (23.6%)614 (25.1%) Undifferentiated408 (5.0%)0 (0%)227 (6.9%)181 (7.4%) Unknown2642 (32.1%)1107 (44.5%)751 (22.7%)784 (32.1%)T stage (n = 8009) 13182 (39.3%)1511 (60.7%)1119 (35.0%)552 (22.9%)NA 22227 (27.5%)759 (30.5%)823 (25.7%)645 (26.7%) 31231 (15.2%)0 (0%)643 (20.1%)588 (24.4%) 4872 (10.8%)0 (0%)408 (12.8%)464 (19.2%) X587 (7.3%)219 (8.8%)205 (6.4%)163 (6.8%)N stage (n = 8009) 06691 (83.5%)2489 (100%)2491 (79.5%)1711 (71.7%)NA 1541 (6.8%)0 (0%)305 (9.7%)236 (9.9%) 2435 (18.2%)0 (0%)333 (10.6%)435 (18.2%) 35 (0.21%)0 (0%)4 (0.1%)5 (0.2%)LVSI (n = 3066) Absent2044 (66.7%)698 (80.5%)810 (62.1%)536 (60.0%)NA Present425 (13.9%)0 (0%)234 (17.9%)191 (21.4%) Unknown/indeterminate167 (18.7%)169 (19.5%)167 (18.7%)167 (18.7%)LVSI, lymphovascular space invasion (LVSI coded after 2009 only); NA, not applicable to compare statistically. Other abbreviations as in Table 1.∗Bonferroni adjustment for multiple comparison was applied.\n\nBODY.RESULTS.TREATMENT INFORMATION:\nAfter identifying the initial SGC cohort, patients were separated based on treatment received. In the high-risk group, a higher proportion of patients (71.1%, 1739 patients) received RT after surgical resection compared with the lower risk groups. The intermediate-risk group had 52.7% (1741 patients) receive RT. The low-risk group had 21.5% (534 patients) receive RT. A small proportion of patients received chemotherapy after surgical resection, with an increasing proportion of patients receiving chemotherapy as their disease risk factors increased. A total of 409 (5.0%) in the entire cohort received chemotherapy. The low-risk group had 5 patients (0.2%) receive chemotherapy after surgical resection; in the intermediate-risk group, 160 patients (4.8%) received chemotherapy. Finally, the high-risk group had 244 patients (10.0%) receive chemotherapy after resection. 
Comorbidity distributions were also determined from the Charlson-Deyo score. The majority of patients had no comorbidities and had a score of 0 (84.3%, 6951 patients). More detailed information regarding treatment characteristics based on risk group are shown in Table 4.Table 4Treatment characteristicsTotal cohortLR (n = 2489)IR (n = 3305)HR (n = 2449)P value (LR vs IR)∗ (LR vs HR)∗ (IR vs HR)∗Chemotherapy use (n = 8243)<.001 Yes409 (5.0%)5 (0.2%)160 (4.8%)244 (10.0%)<.001 No7834 (95.0%)2484 (99.8%)3145 (95.2%)2205 (90.0%)<.001RT use (n = 8236)<.001 Yes4014 (48.7%)534 (21.5%)1741 (52.7%)1739 (71.1%)<.001 No4222 (51.3%)1954 (78.5%)1561 (47.3%)707 (28.9%)<.001Charlson-Deyo score (n = 8243) 06951 (84.3%)2118 (85.1%)2788 (84.4%)2045 (83.5%).279 11069 (13.0%)317 (12.7%)424 (12.8%)328 (13.4%) ≥2223 (2.7%)54 (2.2%)93 (2.8%)76 (3.1%)RT, radiation therapy. Other abbreviations as in Table 1.∗Bonferroni adjustment for multiple comparison was applied.\n\nBODY.RESULTS.OUTCOMES:\nIn the high-risk group, after adjusting for sex, race, ethnicity, insurance type, chemotherapy, and Charlson-Deyo score, receipt of postoperative RT was associated with a reduced risk of death (HR, 0.76; 95% confidence interval [CI], 0.64-0.91) compared with not receiving RT (P = .002). In the intermediate-risk group, after adjusting for sex, race, ethnicity, insurance type, chemotherapy, and Charlson-Deyo score, postoperative RT was not found to be associated with a significantly reduced risk of death (HR, 1.01; 95% CI, 0.85-1.20) compared with not receiving RT (P = .904). In the low-risk group, adjusting for sex, race, ethnicity, insurance type, chemotherapy, and Charlson-Deyo score, differences between patients who received postoperative RT and those who did not were not significant (P = .427); (HR, 0.85; 95% CI, 0.57-1.26). The Kaplan-Meier survival curve is shown in Figure 2. Five-year overall survivals were 93% (95% CI, 91-94) for the low-risk group, no RT; 93% (95% CI, 90-95) for the low-risk group with RT; 77% (95% CI, 74-80) for the intermediate-risk group, no RT; 75% (95% CI, 72-77) for the intermediate-risk group with RT; 64% (95% CI, 59-68) for the high-risk group, no RT; 67% (95% CI, 65-70) for the high-risk group with RT.Figure 2Kaplan-Meier survival curves. Five-year overall survivals: low risk, no RT (LR), 93% (95% CI, 91-94); low-risk RT (LR-RT), 93% (95% CI, 90-95); intermediate risk, no RT (IR), 77% (95% CI, 74-80); intermediate-risk RT IR-RT, 75% (95% CI, 72-77); high risk, no RT (HR), 64% (95% CI, 59-68); high-risk RT (HR-RT), 67% (95% CI, 65-70). Dx, diagnosis.\n\nBODY.DISCUSSION:\nRT is a local treatment, and thus its benefit is often thought to be an improvement in local control rather than an overall survival benefit. To our knowledge, this analysis is the largest study reported to date that compares survival outcomes evaluating the role of adjuvant RT in salivary gland cancers based on adverse prognostic features. A recent study investigated 2210 SGC patients from the NCDB and found that postoperative chemoradiation therapy did not confer a survival benefit over RT alone.12 Our study uses the NCDB to ask more fundamental clinical questions, specifically whether adjuvant RT improves overall survival irrespective of chemotherapy, and to identify the specific subset of patients who stand to benefit from it. Our data suggest that only high-risk patients had a statistically significant improvement in overall survival when they received postoperative RT. 
For the high-risk group, the HR was 0.76 when patients received RT after surgical resection of their primary malignancy compared with patients who did not receive postoperative adjuvant RT. The intermediate-risk group had a HR of 1.01. In the low-risk group, the HR was 0.85, but this was not statistically significant. Here, we define our high-risk group based on generally accepted standards used in other head and neck cancers, which includes positive surgical margin and ECE. Individual randomized studies established that these consensus markers of high-risk disease included a variety of risk factors, although positive margin and ECE emerged as the most important indicators.13, 14 Our intermediate-risk group is an amalgam of the lesser but still important risk factors from those trials, including LVSI and pT3-pT4 staging as well as higher-risk features of SGCs in general, including adenoid cystic histology. There is, however, a gray area when it comes to grouping these malignancies, and we acknowledge that factors such as number of positive nodes may affect the perception of high risk or intermediate risk in some clinicians' view. Patients within the intermediate-risk group with more advanced features, such as pT4, pN+, LVSI, high or undifferentiated grade, and adenoid cystic histology, more frequently received RT (Table 2). This may confound the survival data for the intermediate group. Furthermore, it is possible for patients to present with multiple intermediate-risk factors that may increase their overall disease risk. These patients may have been more likely to receive adjuvant radiation, further confounding the survival data for the intermediate-risk group. Conversely, patients in our study who were considered low risk simply based on the absence of these features may have been in a higher risk group in reality given the presence of features, such as PNI or close margins, that are not captured in the NCDB. Finally, RT is used for local control, which is not captured in this study. Despite the lack of overall survival benefit for lower risk groups, low- and intermediate-risk patients may potentially benefit from RT because of improved local disease control. The NCDB confers several benefits over other population databases, including a larger sample size, broader inclusion of ages, detailed pathologic assessment, and the availability of in-depth RT details including dose and fractionation. The NCDB has several limitations, however. Although it provides extensive information regarding patient demographics, extent of disease, treatment regimens, and long-term overall survival, there is no central review of the data. All centers participating in the NCDB are Commission on Cancer–designated centers, which lends credibility to its overall fidelity. The database also does not include local recurrence data; therefore, we are unable to assess local control after treatment. Furthermore, PNI was not reported on the NCDB, which may have allowed for a clearer investigation to the importance of PNI in terms of prognosis. In our study, we adjusted for all known significant variables; however, it is possible that imbalances exist in unknown or uncaptured variables that may alternatively explain the improved survival associated with postoperative RT in the high- and intermediate-risk groups. Finally, the NCDB does not capture data regarding treatment toxicity, which makes it difficult to assess the true cost-benefit ratio of adding RT to patients' treatment regimens. 
Our study demonstrates an overall survival benefit in SGC patients undergoing adjuvant radiation with high-risk adverse features. In the low- and intermediate-risk groups, although a survival benefit was not seen, a benefit in terms of locoregional control could not be assessed. In particular, the intermediate-risk group warrants further study to better understand how various risk factors interact with each other to determine which patients benefit the most from adjuvant radiation. Regarding adjuvant chemoradiation in this setting, the benefit of concurrent cisplatin along with RT is being prospectively studied with Radiation Therapy Oncology Group 1008. As results from this trial are awaited, this study provides further rationale for the consideration of adjuvant RT in patients with SGC and adverse features.\n\n**Question:** Compared to major salivary gland cancer in high risk group without adjuvant radiation therapy what was the result of major salivary gland cancer in high risk group plus adjuvant radiation therapy on survival?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increase
|
885
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Adjuvant radiation for salivary gland malignancies is associated with improved survival: A National Cancer Database analysis\n\n ABSTRACT.OBJECTIVE:\nThere are no randomized data to support the use of postoperative radiation for salivary gland malignancies. This study uses the National Cancer Database (NCDB) to describe the epidemiology of salivary gland cancer patients and to investigate whether treatment with adjuvant radiation improves overall survival.\n\nABSTRACT.METHODS AND MATERIALS:\nA total of 8243 patients diagnosed with a major salivary gland cancer were identified from the NCDB. All patients received primary surgical resection of their malignancy. Patients were risk-stratified by adverse features, and overall survival rates were determined. Patients were considered high risk if they had extracapsular extension and/or positive margin after resection. Patients were considered intermediate risk if they did not meet the criteria for high risk but had pT3-T4 disease, pN+ disease, lymphovascular space invasion, adenoid cystic histology, or grade 2-3 disease. Patients who did not meet criteria for high or intermediate risk were considered low risk. Overall patient demographics, disease characteristics, treatment factors, and outcomes were summarized with descriptive statistics and analyzed with STATA.\n\nABSTRACT.RESULTS:\nMedian follow-up in this cohort was 42.4 months, with the median age of 58 years. Patients in the high-risk group had greater survival (hazard ratio [HR], 0.76; P = .002; 95% confidence interval [CI], 0.64-0.91) if they received adjuvant radiation therapy. In contrast, patients in the intermediate- (HR, 1.01; P = .904; 95% CI, 0.85-1.20) and low-risk groups (HR, 0.85; P = .427; 95% CI, 0.57-1.26) did not experience a survival benefit with adjuvant radiation therapy.\n\nABSTRACT.CONCLUSIONS:\nThis large analysis compared survival outcomes between observation and adjuvant radiation alone in risk-stratified patients after resection of major salivary glands using a national database. The use of adjuvant radiation for high-risk major salivary gland cancers appears to offer a survival benefit. Although an overall survival benefit was not seen in low- and intermediate-risk salivary gland cancers, this study could not address impact on local control because of the limitations of the NCDB.\n\nBODY:\nSummaryWe aimed to investigate whether adjuvant radiation improves overall survival in major salivary gland cancers. Furthermore, we wanted to identify which subset of patients benefits the most from receiving radiation. A total of 8243 patients with salivary gland cancer were identified from the National Cancer Database. 
Patients in the high-risk group had greater survival (hazard ratio, 0.76; P = .002) if they received adjuvant radiation therapy.\n\nBODY.INTRODUCTION:\nMalignant tumors of the salivary gland are rare cancers that represent less than 5% of all newly diagnosed head and neck malignancies.1 These tumors arise from malignant transformation of the cellular constituents of the secretory acini, their ducts, and supporting myoepithelial cells in both major and minor salivary glands. The World Health Organization classifies 24 subtypes of salivary gland cancers (SGCs), with mucoepidermoid carcinoma, adenoid cystic carcinoma, and adenocarcinoma being the most common histologic subtypes.2 The management of SGCs remains a challenge because of their heterogeneous nature, diverse biological behaviors, and low prevalence. In patients without evidence of distant hematogenous metastases, surgery remains the definitive treatment of choice in those with salivary gland malignancies. Outcomes after surgery in early-stage disease are excellent. A retrospective series of patients treated from 1997 to 2002 for parotid gland carcinomas demonstrated 5-year disease-free survival of 86%, with inferior disease-free survival with advanced disease stage according to the 2002 American Joint Committee on Cancer classification.3 Risk factors for disease recurrence were examined in a retrospective cohort of 565 patients treated for malignant salivary gland tumors in the Netherlands.4 The risk of local recurrence was increased in patients with T3 and T4 tumors, incomplete resection, and bone invasion. Regional recurrence was predicted by facial nerve weakness and positive margins on neck dissection, and the risk of distant metastasis was higher among patients with a T3-4, N2-N3 disease, and/or perineural invasion (PNI). There is no randomized prospective data in the literature supporting the use of postoperative radiation therapy (RT) in this patient population. The largest study completed regarding adjuvant RT for SGCs investigated 2170 patients from the Surveillance, Epidemiology, and End Results (SEER) registry. They found that postoperative adjuvant RT is associated with improved overall survival for high-risk disease (defined as poorly differentiated, undifferentiated, and/or locally advanced T3/4 or N+).5 Another SEER database study, however, investigated 1241 cases of parotid acinic cell carcinoma and found that adjuvant RT did not provide a tangible overall survival benefit in this population.6 Although SEER offers many potential benefits, there are limited data regarding radiation dose and treatment details such as fraction number, target, and pathological information, which may account for conflicting results seen in SGCs. The National Cancer Database (NCDB) is a database that offers more robust information regarding pathological and treatment information for patients and thus can allow for more detailed studies of underlying patient characteristics important for prognosis. 
Most data regarding adjuvant radiation for SGCs are limited to retrospective series that describe improved local control rates compared with surgical resection alone.4, 7, 8 Registry data and single-institution series have consistently demonstrated improving survival among patients with this disease over time, attributed partly to improved surgical techniques and staging techniques from imaging, but largely resulting from the widespread adoption of postoperative RT.5, 9, 10, 11 Despite the absence of compelling supporting prospective data, postoperative RT is considered a standard of care for patients with high-risk features after resection. This study uses one of the largest national SGC databases to investigate whether postoperative RT is associated with an overall survival benefit.\n\nBODY.METHODS.DATA SOURCE:\nThe NCDB is supported by the Commission on Cancer and contains deidentified demographic and clinical patient data on approximately 70% of the cancers diagnosed in the United States annually. We examined patients aged 18 to 90 years with salivary gland malignancy diagnosed from 2004 through 2013.\n\nBODY.METHODS.PATIENT DEMOGRAPHICS:\nPatient demographics (age, race, sex, facility type, and insurance), disease characteristics (primary site, TNM staging), histology, pathologic features, and treatment variables were investigated as predictors of overall survival. Facility type was separated into academic and nonacademic centers (combination of community cancer programs, comprehensive community cancer programs, and other programs). Insurance status was divided into private and government (Medicare and Medicaid) insurance.\n\nBODY.METHODS.INCLUSION AND EXCLUSION CRITERIA:\nWe excluded patients with metastatic disease, in situ disease only, those who did not undergo surgical resection, and those who had unknown overall staging. Clinical staging was used to exclude metastatic patients (coded as cM0 in the NCDB). Furthermore, pathologic stage was required because patients with pathologic metastases would not have been surgical candidates. Patients that received neoadjuvant chemotherapy and/or RT were excluded. Among the patients receiving RT, we excluded those receiving palliative doses (<50 Gy), restricting the doses analyzed from 50 to 70 Gy. We excluded any patients undergoing RT >90 days after resection. We focused our analysis on the most frequently represented histologies only, which included adenocarcinoma, mucoepidermoid carcinoma, adenoid cystic carcinoma, acinar cell carcinoma, epithelial-myoepithelial carcinoma, and carcinoma ex pleomorphic adenoma. After these exclusions, displayed in Figure 1, there were 8243 patients with surgically managed localized salivary cancer of major histologic subtype.Figure 1Consolidated Standards of Reporting Trials flow diagram. NCDB, National Cancer Database; RT, radiation therapy.\n\nBODY.METHODS.RISK STRATIFICATION:\nPatients were stratified into 3 risk groups based on clinical and pathological criteria. Patients were considered high risk if they had extracapsular extension (ECE) and/or positive margin after resection, which is the standard definition of high-risk disease in head and neck cancers. Patients were considered intermediate risk if they did not meet criteria for high risk but had pT3-T4 disease, pN+ disease, lymphovascular space invasion (LVSI), adenoid cystic histology, and/or grade 2-3 disease. 
Patients who did not meet criteria for high or intermediate risk were considered low risk.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nOverall patient demographics, tumor characteristics, treatment factors, and outcomes were summarized by descriptive statistics. The comparisons between the groups were performed with the use of Student t test for continuous measures and the χ2 test for categorical measures. Overall survival was determined with the use of Kaplan-Meier methods. The hazard ratios (HRs) were estimated with the use of Cox proportional-hazard models. The multivariate Cox proportional hazards model is displayed in eTable 1 (available as supplementary material online only at www.advancesradonc.org). Two-sided P values < .05 were considered to indicate statistical significance. All statistical analyses were performed using STATA software, version 14.0 (StataCorp).\n\nBODY.RESULTS.PATIENT CHARACTERISTICS:\nFor the total cohort of 8243 patients, the median follow-up time was 42.4 months (range, 0-119) and median age was 58 years. After stratifying this cohort into risk groups based on disease characteristics, there were 2489 low-risk, 3305 intermediate-risk, and 2449 high-risk patients. Table 1 displays detailed cohort characteristics and demographic information, separated based on risk group. Table 2 further elaborates on the adverse features within the intermediate-risk group. Within the high-risk group (n = 2449), 2303 (94.0%) had positive margin and 243 (9.9%) had extracapsular extension.Table 1Patient demographicsTotal cohortLR (n = 2489)IR (n = 3305)HR (n = 2449)P value (LR vs IR)∗ (LR vs HR)∗ (IR vs HR)∗Age (n = 8243)<.001 <60 y4515 (54.8%)1572 (63.2%)1711 (51.8%)1232 (50.3%)<.001 ≥60 y3728 (45.2%)917 (36.8%)1594 (48.2%)1217 (49.7%).82Sex (n = 8243)<.001 Male3764 (45.7%)962 (38.7%)1628 (49.3%)1174 (47.9%)<.001 Female4479 (54.3%)1527 (61.4%)1677 (50.7%)1275 (52.1%).97Race (n = 8218) White6791 (82.6%)2016 (81.1%)2725 (82.7%)2050 (84.1%).053 Black929 (11.3%)308 (12.4%)375 (11.4%)246 (10.1%) Asian301 (3.7%)87 (3.5%)124 (3.8%)90 (3.7%) Other197 (2.4%)74 (2.4%)70 (2.1)53 (2.2%)Ethnicity (n = 8243) Non-Hispanic7268 (88.2%)2201 (88.4%)2913 (88.1%)2154 (88.0%) Hispanic429 (5.2%)132 (5.3%)179 (5.4%)118 (4.8%).55 Other546 (6.6%)156 (6.3%)213 (6.4%)177 (7.2%)Facility type (n = 8243).08 Academic4765 (57.8%)1447 (58.1%)1826 (55.3%)1492 (60.9%).14 Nonacademic3478 (42.2%)1042 (41.9%)1479 (44.8%)957 (39.1%)<.001Insurance Type (n = 8,243) Private4477 (54.3%)1544 (62.0%)1697 (51.4%)1236 (50.5%)<.001 Medicare/Medicaid3363 (40.8%)831 (33.4%)1429 (43.2%)1103 (45.0%)<.001 Uninsured403 (4.9%)114 (4.6%)179 (5.4%)110 (4.5%).40Primary site (n = 8243) Parotid7440 (90.3%)2389 (96.0%)2890 (87.4%)2161 (88.2%)<.001 Submandibular578 (7.0%)60 (2.4%)293 (8.9%)225 (9.2%)<.001 Sublingual76 (0.9%)10 (0.4%)44 (1.3%)22 (0.9%).37 Other149 (1.8%)30 (1.2%)78 (2.4%)41 (1.7%)HR, high-risk group; IR, intermediate-risk group; LR, low-risk group.∗Bonferroni adjustment for multiple comparison was applied.Table 2Adverse features within intermediate risk groupAll patients (n = 3305)RT (n = 1741)No RT (n = 1561)pT3643 (19.5%)363 (20.8%)280 (17.9%)pT4408 (12.3%)271 (15.6%)137 (8.8%)pN+814 (24.6%)498 (28.6%)316 (20.2%)Lymphovascular space invasion234 (7.1%)153 (8.8%)81 (5.2%)Intermediate grade (2)1218 (36.9%)467 (26.8%)751 (48.1%)High grade (3)780 (23.6%)531 (30.5%)249 (16.0%)Undifferentiated grade227 (6.9%)162 (9.3%)65 (4.2%)Adenoid cystic carcinoma histology664 (20.1%)412 (23.7%)252 (16.1%)\n\nBODY.RESULTS.DISEASE 
CHARACTERISTICS:\nThe majority of patients presented with disease in the parotid gland. A total of 7440 (90.3%) of the total cohort had a primary parotid gland malignancy. The most common disease histology was mucoepidermoid carcinoma, with 3298 (40.0%) cases in the cohort. Remaining disease histology included acinar cell carcinoma (1938 patients, 23.5%), adenoid cystic carcinoma (1188 patients, 14.4%), adenocarcinoma (1044 patients, 12.7%), carcinoma ex pleomorphic adenoma (445 patients, 5.4%), and epithelial-myoepithelial carcinoma (330 patients, 4.0%). Disease primary sites are shown in Table 1. Histology distribution, further TN staging, and disease grades are shown in Table 3. Patients commonly presented with T1 (3182 patients, 39.3%) and N0 disease (6691 patients, 83.5%).Table 3Disease characteristicsTotal cohortLR (n = 2489)IR (n = 3305)HR (n = 2449)P value (LR vs IR)∗ (LR vs HR)∗ (IR vs HR)∗Primary site (n = 8243) Parotid7440 (90.3%)2389 (96.0%)2890 (87.4%)2161 (88.2%)<.001 Submandibular578 (7.0%)60 (2.4%)293 (8.9%)225 (9.2%)<.001 Sublingual76 (0.9%)10 (0.4%)44 (1.3%)22 (0.9%).37 Other149 (1.8%)30 (1.2%)78 (2.4%)41 (1.7%)Histology (n = 8243) Adenocarcinoma1044 (12.7%)115 (4.6%)521 (15.8%)408 (16.7%)<.001 Adenoid cystic1188 (14.4%)0 (0%)664 (20.1%)524 (21.4%)<.001 Mucoepidermoid3298 (40.0%)1043 (41.9%)1412 (42.7%)843 (34.4%)<.001 Acinar cell1938 (23.5%)1058 (42.5%)410 (12.4%)470 (19.2%) Epithelial-myoepithelial carcinoma330 (4.0%)147 (5.9%)105 (3.2%)78 (3.2%) Carcinoma ex pleomorphic adenoma445 (5.4%)126 (5.1%)193 (5.8%)126 (5.1%)Grade (n = 8243) 12117 (25.7%)1382 (55.5%)329 (10.0%)406 (16.6%)NA 21682 (20.4%)0 (0%)1218 (36.9%)464 (19.0%) 31394 (16.9%)0 (0%)780 (23.6%)614 (25.1%) Undifferentiated408 (5.0%)0 (0%)227 (6.9%)181 (7.4%) Unknown2642 (32.1%)1107 (44.5%)751 (22.7%)784 (32.1%)T stage (n = 8009) 13182 (39.3%)1511 (60.7%)1119 (35.0%)552 (22.9%)NA 22227 (27.5%)759 (30.5%)823 (25.7%)645 (26.7%) 31231 (15.2%)0 (0%)643 (20.1%)588 (24.4%) 4872 (10.8%)0 (0%)408 (12.8%)464 (19.2%) X587 (7.3%)219 (8.8%)205 (6.4%)163 (6.8%)N stage (n = 8009) 06691 (83.5%)2489 (100%)2491 (79.5%)1711 (71.7%)NA 1541 (6.8%)0 (0%)305 (9.7%)236 (9.9%) 2435 (18.2%)0 (0%)333 (10.6%)435 (18.2%) 35 (0.21%)0 (0%)4 (0.1%)5 (0.2%)LVSI (n = 3066) Absent2044 (66.7%)698 (80.5%)810 (62.1%)536 (60.0%)NA Present425 (13.9%)0 (0%)234 (17.9%)191 (21.4%) Unknown/indeterminate167 (18.7%)169 (19.5%)167 (18.7%)167 (18.7%)LVSI, lymphovascular space invasion (LVSI coded after 2009 only); NA, not applicable to compare statistically. Other abbreviations as in Table 1.∗Bonferroni adjustment for multiple comparison was applied.\n\nBODY.RESULTS.TREATMENT INFORMATION:\nAfter identifying the initial SGC cohort, patients were separated based on treatment received. In the high-risk group, a higher proportion of patients (71.1%, 1739 patients) received RT after surgical resection compared with the lower risk groups. The intermediate-risk group had 52.7% (1741 patients) receive RT. The low-risk group had 21.5% (534 patients) receive RT. A small proportion of patients received chemotherapy after surgical resection, with an increasing proportion of patients receiving chemotherapy as their disease risk factors increased. A total of 409 (5.0%) in the entire cohort received chemotherapy. The low-risk group had 5 patients (0.2%) receive chemotherapy after surgical resection; in the intermediate-risk group, 160 patients (4.8%) received chemotherapy. Finally, the high-risk group had 244 patients (10.0%) receive chemotherapy after resection. 
Comorbidity distributions were also determined from the Charlson-Deyo score. The majority of patients had no comorbidities and had a score of 0 (84.3%, 6951 patients). More detailed information regarding treatment characteristics based on risk group are shown in Table 4.Table 4Treatment characteristicsTotal cohortLR (n = 2489)IR (n = 3305)HR (n = 2449)P value (LR vs IR)∗ (LR vs HR)∗ (IR vs HR)∗Chemotherapy use (n = 8243)<.001 Yes409 (5.0%)5 (0.2%)160 (4.8%)244 (10.0%)<.001 No7834 (95.0%)2484 (99.8%)3145 (95.2%)2205 (90.0%)<.001RT use (n = 8236)<.001 Yes4014 (48.7%)534 (21.5%)1741 (52.7%)1739 (71.1%)<.001 No4222 (51.3%)1954 (78.5%)1561 (47.3%)707 (28.9%)<.001Charlson-Deyo score (n = 8243) 06951 (84.3%)2118 (85.1%)2788 (84.4%)2045 (83.5%).279 11069 (13.0%)317 (12.7%)424 (12.8%)328 (13.4%) ≥2223 (2.7%)54 (2.2%)93 (2.8%)76 (3.1%)RT, radiation therapy. Other abbreviations as in Table 1.∗Bonferroni adjustment for multiple comparison was applied.\n\nBODY.RESULTS.OUTCOMES:\nIn the high-risk group, after adjusting for sex, race, ethnicity, insurance type, chemotherapy, and Charlson-Deyo score, receipt of postoperative RT was associated with a reduced risk of death (HR, 0.76; 95% confidence interval [CI], 0.64-0.91) compared with not receiving RT (P = .002). In the intermediate-risk group, after adjusting for sex, race, ethnicity, insurance type, chemotherapy, and Charlson-Deyo score, postoperative RT was not found to be associated with a significantly reduced risk of death (HR, 1.01; 95% CI, 0.85-1.20) compared with not receiving RT (P = .904). In the low-risk group, adjusting for sex, race, ethnicity, insurance type, chemotherapy, and Charlson-Deyo score, differences between patients who received postoperative RT and those who did not were not significant (P = .427); (HR, 0.85; 95% CI, 0.57-1.26). The Kaplan-Meier survival curve is shown in Figure 2. Five-year overall survivals were 93% (95% CI, 91-94) for the low-risk group, no RT; 93% (95% CI, 90-95) for the low-risk group with RT; 77% (95% CI, 74-80) for the intermediate-risk group, no RT; 75% (95% CI, 72-77) for the intermediate-risk group with RT; 64% (95% CI, 59-68) for the high-risk group, no RT; 67% (95% CI, 65-70) for the high-risk group with RT.Figure 2Kaplan-Meier survival curves. Five-year overall survivals: low risk, no RT (LR), 93% (95% CI, 91-94); low-risk RT (LR-RT), 93% (95% CI, 90-95); intermediate risk, no RT (IR), 77% (95% CI, 74-80); intermediate-risk RT IR-RT, 75% (95% CI, 72-77); high risk, no RT (HR), 64% (95% CI, 59-68); high-risk RT (HR-RT), 67% (95% CI, 65-70). Dx, diagnosis.\n\nBODY.DISCUSSION:\nRT is a local treatment, and thus its benefit is often thought to be an improvement in local control rather than an overall survival benefit. To our knowledge, this analysis is the largest study reported to date that compares survival outcomes evaluating the role of adjuvant RT in salivary gland cancers based on adverse prognostic features. A recent study investigated 2210 SGC patients from the NCDB and found that postoperative chemoradiation therapy did not confer a survival benefit over RT alone.12 Our study uses the NCDB to ask more fundamental clinical questions, specifically whether adjuvant RT improves overall survival irrespective of chemotherapy, and to identify the specific subset of patients who stand to benefit from it. Our data suggest that only high-risk patients had a statistically significant improvement in overall survival when they received postoperative RT. 
For the high-risk group, the HR was 0.76 when patients received RT after surgical resection of their primary malignancy compared with patients who did not receive postoperative adjuvant RT. The intermediate-risk group had a HR of 1.01. In the low-risk group, the HR was 0.85, but this was not statistically significant. Here, we define our high-risk group based on generally accepted standards used in other head and neck cancers, which includes positive surgical margin and ECE. Individual randomized studies established that these consensus markers of high-risk disease included a variety of risk factors, although positive margin and ECE emerged as the most important indicators.13, 14 Our intermediate-risk group is an amalgam of the lesser but still important risk factors from those trials, including LVSI and pT3-pT4 staging as well as higher-risk features of SGCs in general, including adenoid cystic histology. There is, however, a gray area when it comes to grouping these malignancies, and we acknowledge that factors such as number of positive nodes may affect the perception of high risk or intermediate risk in some clinicians' view. Patients within the intermediate-risk group with more advanced features, such as pT4, pN+, LVSI, high or undifferentiated grade, and adenoid cystic histology, more frequently received RT (Table 2). This may confound the survival data for the intermediate group. Furthermore, it is possible for patients to present with multiple intermediate-risk factors that may increase their overall disease risk. These patients may have been more likely to receive adjuvant radiation, further confounding the survival data for the intermediate-risk group. Conversely, patients in our study who were considered low risk simply based on the absence of these features may have been in a higher risk group in reality given the presence of features, such as PNI or close margins, that are not captured in the NCDB. Finally, RT is used for local control, which is not captured in this study. Despite the lack of overall survival benefit for lower risk groups, low- and intermediate-risk patients may potentially benefit from RT because of improved local disease control. The NCDB confers several benefits over other population databases, including a larger sample size, broader inclusion of ages, detailed pathologic assessment, and the availability of in-depth RT details including dose and fractionation. The NCDB has several limitations, however. Although it provides extensive information regarding patient demographics, extent of disease, treatment regimens, and long-term overall survival, there is no central review of the data. All centers participating in the NCDB are Commission on Cancer–designated centers, which lends credibility to its overall fidelity. The database also does not include local recurrence data; therefore, we are unable to assess local control after treatment. Furthermore, PNI was not reported on the NCDB, which may have allowed for a clearer investigation to the importance of PNI in terms of prognosis. In our study, we adjusted for all known significant variables; however, it is possible that imbalances exist in unknown or uncaptured variables that may alternatively explain the improved survival associated with postoperative RT in the high- and intermediate-risk groups. Finally, the NCDB does not capture data regarding treatment toxicity, which makes it difficult to assess the true cost-benefit ratio of adding RT to patients' treatment regimens. 
Our study demonstrates an overall survival benefit in SGC patients undergoing adjuvant radiation with high-risk adverse features. In the low- and intermediate-risk groups, although a survival benefit was not seen, a benefit in terms of locoregional control could not be assessed. In particular, the intermediate-risk group warrants further study to better understand how various risk factors interact with each other to determine which patients benefit the most from adjuvant radiation. Regarding adjuvant chemoradiation in this setting, the benefit of concurrent cisplatin along with RT is being prospectively studied with Radiation Therapy Oncology Group 1008. As results from this trial are awaited, this study provides further rationale for the consideration of adjuvant RT in patients with SGC and adverse features.\n\n**Question:** Compared to major salivary gland cancer in low/intermediate risk group without adjuvant radiation therapy what was the result of major salivary gland cancer in low/intermediate risk group plus adjuvant radiation therapy on survival?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
887
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: The Effects of Vitamin D Supplementation on Signaling Pathway of Inflammation and Oxidative Stress in Diabetic Hemodialysis: A Randomized, Double-Blind, Placebo-Controlled Trial\n\n ABSTRACT:\nObjective: This study was carried out to determine the effects of vitamin D supplementation on signaling pathway of inflammation and oxidative stress in diabetic hemodialysis (HD) patients. Methods: This randomized double-blind placebo-controlled clinical trial was conducted among 60 diabetic HD patients. Subjects were randomly allocated into two groups to intake either vitamin D supplements at a dosage of 50,000 IU (n = 30) or placebo (n = 30) every 2 weeks for 12 weeks. Gene expression of inflammatory cytokines and biomarkers of oxidative stress were assessed in peripheral blood mononuclear cells (PBMCs) of diabetic HD patients with RT-PCR method. Results: Results of RT-PCR indicated that after the 12-week intervention, compared to the placebo, vitamin D supplementation downregulated gene expression of interleukin (IL)-1β (P = 0.02), tumor necrosis factor alpha (TNF-α) (P = 0.02) and interferon gamma (IFN-γ) (P = 0.03) in PBMCs of diabetic HD patients. Additionally, vitamin D supplementation, compared to the placebo, downregulated gene expression of transforming growth factor beta (TGF-β) (P = 0.04), protein kinase C (PKC) (P = 0.001), and mitogen-activated protein kinases 1 (MAPK1) (P = 0.02) in PBMCs of diabetic HD patients. Although not significant, vitamin D supplementation let to a reduction of nuclear factor kappa B (NF-kB) (p = 0.75) expression in PBMCs isolated from diabetic patients compared to the placebo group. There was no statistically significant change following supplementation with vitamin D on gene expression of interleukin (IL)-4, IL-6, and vascular endothelial growth factor (VEGF) in PBMCs of diabetic HD patients. Conclusions: Overall, we found that vitamin D supplementation for 12 weeks among diabetic HD patients had beneficial effects on few gene expression related to inflammation and oxidative stress. Clinical trial registration: IRCT201701035623N101. Registered on January 8, 2017.\n\nBODY.INTRODUCTION:\nIncreased inflammatory cytokines as a necessary part of chronic kidney disease (CKD) has been identified and associated with cardiovascular disease (CVD), protein-energy malnutrition, all-cause mortality, and decreased glomerular filtration rate (GFR) (Stenvinkel et al., 1999; Akchurin and Kaskel, 2015). Increased biomarkers of inflammation, such as interleukin-1beta (IL-1β) and tumor necrosis factor alpha (TNF-α) were negatively linked to the measures of kidney action and directly with albuminuria (Gupta et al., 2012). 
Furthermore, various factors, including increased oxygen metabolism, uremic toxicity, increased inflammatory factors, lack of antioxidant vitamins and microelements, and dialysis procedure in patients with CKD would result in oxidative stress (Stepniewska et al., 2015). Prior studies have documented that hypovitaminosis D is very prevalent in individuals with CKD (de Boer, 2008; Cupisti et al., 2015). In a study by Bhan et al. (2010) hypovitaminosis D was reported 50–90% in dialysis subjects. On the other hand, beneficial effects of vitamin D on biochemical variables have reported in hemodialysis (HD) patients (Ibrahim et al., 2015), subjects with type 2 diabetes mellitus (T2DM) (Calvo-Romero and Ramiro-Lozano, 2016) and polycystic ovary syndrome (Maktabi et al., 2017). Few studies have evaluated the effects of vitamin D on gene expression related to inflammation and oxidative stress. Choi et al. (2013) shown significant reductions in gene expression of interleukin 6 (IL-6) and tumor necrosis factor alpha (TNF-α) in vitamin D-treated rats. In addition, in vitro studies have shown that the release of TNF-α can be inhibited by calcitriol in a dose-dependent fashion (Müller et al., 1992). Another study indicated that vitamin D through vitamin D receptor (VDR) suppressed TNF-α-induced nuclear factor kappa B (NF-κB) activation and IL-6 up-regulation in epithelial cell isolated from transgenic mice and colitis animal model (Liu et al., 2013). However, another study supported no significant changes in TNF-α and IL-6 levels in high-dose vitamin D administration in healthy overweight individuals following a 12-week progressive resistance exercise training program (Carrillo et al., 2012). The molecular mechanism of vitamin D in modulating inflammatory cytokines and oxidative stress are still unclear. Mitogen-activated protein kinases (MAPKs) participate in signaling mechanisms in responses to pro-inflammatory factors as well as NF-κB are thought to play an important function in the regulation of pro-inflammatory cytokines in cellular responses (Baldassare et al., 1999). To our knowledge, data on the effects of vitamin D supplementation on signaling pathway of inflammation and oxidative stress in diabetic HD patients are limited and controversial. The purpose of this study was, therefore, conducted to evaluate the effects of vitamin D supplementation on signaling pathway of inflammation and oxidative stress in diabetic HD patients.\n\nBODY.METHODS.TRIAL DESIGN AND PARTICIPANTS:\nThis randomized double-blind placebo-controlled clinical trial, registered in the Iranian website for registration of clinical trials as http://www.irct.ir:IRCT201701035623N101, was carried out among 60 diabetic HD subjects aged 18-80 years who were referred to the Akhavan Clinic in Kashan, Iran, from January 2017 to April 2017. This research was carried out in accordance with the Declaration of Helsinki and the Research Ethics Committee of Kashan University of Medical Sciences (KAUMS) approved the study protocol, and informed consent was taken from all subjects. The main exclusion criteria from the study were as follows: taking vitamin D, antioxidant and/or anti-inflammatory supplements such as vitamins E and C, omega-3 fatty acids, and taking immunosuppressive medications within 3 months prior to the enrollment in the study.\n\nBODY.METHODS.STUDY DESIGN:\nAt first, all patients were randomized into two groups to receive either 50,000 IU of vitamin D or placebo (n = 30 in each group) every 2 weeks for 12 weeks. 
Vitamin D supplements and placebos were produced by Zahravi Pharmaceutical Company, Tabriz, Iran, and Barij Essence Pharmaceutical Company, Kashan, Iran, respectively. In the current study, we used the above-mentioned dose of vitamin D based on the tolerable upper intake level (4,000 IU/day) of vitamin D (Ross et al., 2011), as well as dosage recommended in previous studies among HD patients (Armas et al., 2013; Mose et al., 2014). Compliance to the vitamin D consumption was evaluated through determination of serum 25 (OH) vitamin D values and asking subjects to return the medication containers. Both dietary 3-day food records [analyzed by nutritionist IV software (First Databank, San Bruno, CA)] and physical activity records were taken at baseline, weeks 3, 6, 9, and 12 of the intervention.\n\nBODY.METHODS.ASSESSMENT OF ANTHROPOMETRIC MEASURES:\nBody weight and height were quantified in a fasting status using a digital scale (Seca, Hamburg, Germany) at baseline and after the 12-week intervention. Body mass index (BMI) was calculated by weight and height measurements [weight (kg)/height (m2)].\n\nBODY.METHODS.ASSESSMENT OF OUTCOMES:\nPrimary outcomes were inflammatory cytokines expression and secondary outcomes were biomarkers of oxidative stress expression. We selected the following inflammatory cytokines and proteins [IL-1β, IL-4, IL-6,TNF-α, interferon gamma (IFN-γ), transforming growth factor beta (TGF-β), protein kinase C (PKC), MAPK1, vascular endothelial growth factor (VEGF) and NF-κB] as they play a very important role in signaling pathways of oxidative stress (Geraldes and King, 2010; Baker et al., 2011; Tanti et al., 2012; Chen et al., 2015).\n\nBODY.METHODS.ISOLATION OF PERIPHERAL BLOOD MONONUCLEAR CELLS (PBMCS):\nAt baseline and endpoint of the intervention, 15 mL samples of fasting blood were taken at reference laboratory of KAUMS, Kashan, Iran. PBMCs were isolated from blood by centrifugation through a Ficoll-Hypaque (Denholm and Wolber, 1991) followed by Percoll. In summary, 10 mL of blood with an equal volume of phosphate-buffered saline (PBS) was mixed (Dagur and McCoy, 2015). Layer diluted blood over Ficoll-Hypaque solution, using 3 parts diluted blood to 2 parts Ficoll-Hypaque and centrifuged 30 min at 500 × g, 25°C. Then, using a sterile Pasteur pipet, carefully collected PBMCs, and located at the interface between the plasma (upper layer) and Ficoll-Hypaque solution (Dagur and McCoy, 2015). Cells to a 50 mL centrifuge tube were transferred, then after adding 10 mL PBS, were centrifuged 10 min at 400 × g, 4°C. The supernatant was discarded and was resuspended cells in 4 mL PBS. Following previous step, 1.65 mL of 10× Hanks balanced salt solution with 10 mL Percoll was mixed. Eight mL Percoll solution with 4 mL mononuclear cells in a silanized 10 × 1.5 cm round-bottom polypropylene tube was mixed (Dagur and McCoy, 2015). Mixing thoroughly by inverting tube three or four times was done. Then, 25 min at 370 × g, 25°C was centrifuged. PBMCs were aspirated by gentle pipetting into a clean test tube. After centrifugation, monocytes appear as a cloudy layer in the top 5 mm of the gradient (Dagur and McCoy, 2015).\n\nBODY.METHODS.RNA EXTRACTION AND REAL-TIME PCR:\nTo RNA extraction, we used the RNX-plus kit (Cinnacolon, Tehran, Iran). RNA were extracted just after lymphocyte extraction from fresh 5 mL peripheral blood and less than 1 million cells was used for RNA extraction with the trizol reagent (Invitrogen, USA). 
Following extraction of the total RNAs from each sample, RNA quantification were performed by UV spectrophotometer. Each samples OD 260/280 ratio between 1.7 and 2.1 was intended showing no contamination with both protein and DNA (Dunkley et al., 2008). The isolated RNA was reverse transcribed to cDNA library using moloney murine leukemia virus reverse transcriptase. Gene expression of IL-1β, IL-4, IL-6,TNF-α, IFN-γ, TGF-β, PKC, MAPK1, VEGF, and NF-κB were evaluated by quantitative RT-PCR, using the LightCycler technology (Roche Diagnostics, Rotkreuz, Switzerland) with SYBR green detection and Amplicon Kit (Table 1). Glyceraldehyde-3-phosphate dehydrogenase primers were used as housekeeping gene. To design primers, Primer Express Software (Applied Biosystems, Foster City) and Beacon designer software (Takaposizt, Tehran, Iran) were used. Relative transcription levels were calculated by the method of Pffafi or 2−ΔΔCT. In the current study, we quantified gene expression levels related to inflammation and oxidative stress in PBMCs from diabetic HD patients. PBMCs from venous blood samples are the most available tissue for analysis of gene expression (Mizuarai et al., 2010). In addition, gene expression levels related to inflammation and oxidative stress in PBMCs are more accurate and may provide more valuable information than plasma concentrations (Mizuarai et al., 2010). Few studies have previously assessed gene expression levels of inflammation and oxidative stress in PBMCs from diabetic patients (Xavier et al., 2015; Mazloom et al., 2016). Table 1 Specific primers used for real-time quantitative PCR. Gene Primer Product size (bp) Annealing temperature (C) GAPDH F: GAGTCAACGGATTTGGTCGT 223 58 R: TTGATTTTGGAGGGATCTCG IL1-β F: GATGGCTTATTACAGTGGCAATG 137 59 R: AGTGGTGGTCGGAGATTCG IL-4 F: CTCACAGAGCAGAAGAACAC 221 60 R: TGGTTGGCTTCCTTCACAG IL-6 F: GTAGTGAGGAACAAGCCAGAG 286 60 R: TGACCAGAAGAAGGAATGCC TNF-α F: GAGCCAGCTCCCTCTATTTATG 187 60 R: CTACATGGGAACAGCCTATTGT IFN-γ F: GGTTCTCTTGGCTGTTACTG 250 59.5 R: TGTCTTCCTTGATGGTCTCC VEGF F: TGCAGATTATGCGGATCAAACC 150 58.5 R:TGCATTCACATTTGTTGTGCTGTAG NFK-β F: GCTGAGTCCTGCTCCTTC 202 58 R: GTCTATTTGCTGCCTTGTGG TGF-β F: ACTACTACGCCAAGGAGGTC 250 59 R: CGGTTGCTGAGGTATCGC PKC F: CGTCCTCATTGTCCTCGTAAG 249 59 R: TCATTCCTGCTGGTCAAATCC MAPK1 F: GGAACAGCACCTCCACTATTT 226 60 R: GCCACAATGTCTGCGTATCT GAPDH, glyceraldehyde-3-Phosphate dehydrogenase; IL-1β, interleukin-1β; IFN-γ, interferon gamma; MAPK1, mitogen-activated protein kinases 1; NF-κB, nuclear factor kappa B; PBMCs, peripheral blood mononuclear cells; PKC, protein kinase C; TNF-α, tumor necrosis factor alpha; TGF-β, transforming growth factor beta; VEGF, vascular endothelial growth factor . \n\nBODY.METHODS.RANDOMIZATION:\nRandomization assignment was carried out using computer-generated random numbers. Randomization and allocation concealment were conducted by the researchers and participants and were carried out by a trained staff at the clinic.\n\nBODY.METHODS.STATISTICAL METHODS:\nThe Kolmogorov-Smirnov test was applied to control the normal distribution of variables. Independent sample t-test was used to establish changes in anthropometric measures and dietary intakes between the two groups. To determine the effects of vitamin D administration on inflammation and oxidative stress expression, we used independent sample t-test. P < 0.05 were considered statistically significant. 
All statistical analyses conducted using the Statistical Package for Social Science version 18 (SPSS Inc., Chicago, Illinois, USA).\n\nBODY.RESULTS:\nSixty diabetic HD patients [vitamin D and placebo (n = 30 each group)] completed the trial (Figure 1). On average, the rate of compliance in our study was high, such that more than 100% of capsules were taken throughout the study in both groups. No side effects were reported following supplementation of vitamin D in diabetic HD patients throughout the study. Potential side effects of vitamin D supplementation monitored by evaluating the serum levels of calcium every month, as well as, monitoring clinical symptoms among participants including, anorexia, nausea, and vomiting. Figure 1Summary of patient flow diagram. Distribution of gender, participants' mean age, height, baseline weight and BMI as well as their change, and years of dialysis of study participants were not statistically different between the two groups (Table 2). Table 2 General characteristics of study participants. Placebo group ( n = 30) Vitamin D group ( n = 30) P a GENDER (%) Male 20 (66.7) 19 (63.4) 0.78 † Female 10 (33.3) 11 (36.6) TYPE OF DIABETES (%) Type 1 2 (6.7) 2 (6.7) 1.00 † Type 2 28 (93.3) 28 (93.3) Smoking (%) 3 (10.0) 3 (10.0) 1.00 † Duration of DM (year) 18.9 ± 6.0 19.1 ± 5.4 0.87 Insulin therapy (%) 30 (100.0) 30 (100.0) 1.00 † Age (y) 60.7 ± 14.3 57.1 ± 13.4 0.31 Height (cm) 162.5 ± 8.0 163.5 ± 8.1 0.65 Weight at study baseline (kg) 71.2 ± 15.7 69.5 ± 11.8 0.65 Weight at end-of-trial (kg) 71.2 ± 15.7 70.1 ± 11.9 0.75 Weight change (kg) 0.008 ± 1.3 0.5 ± 1.3 0.12 BMI at study baseline (kg/m 2 ) 27.0 ± 6.0 26.1 ± 4.5 0.51 BMI at end-of-trial (kg/m 2 ) 27.0 ± 5.9 26.3 ± 4.5 0.61 BMI change (kg/m 2 ) 0.002 ± 0.5 0.2 ± 0.5 0.11 Years on dialysis 3.7 ± 1.0 3.9 ± 1.2 0.56 Cancer (%) 2 (6.7) 2 (6.7) 1.00 † CAD (%) 22 (73.3) 21 (70.0) 0.77 † CVD (%) 5 (16.7) 6 (20.0) 0.73 † Hypertension (%) 28 (93.3) 29 (96.7) 0.55 † Data are means ± SDs . a Obtained from independent t-test . † Obtained from Pearson Chi-square test . CAD, coronary artery disease; CVD, cerebrovascular disease; DM, diabetes mellitus . Considering the 3-day dietary records obtained during the intervention, there was no significant difference in terms of dietary macro- and micro-nutrient intakes between vitamin D and placebo groups (Data not shown). After 12 weeks of intervention, compared to the placebo, vitamin D supplementation significantly increased serum 25-OH-vitamin D (+11.1 ± 4.2 vs. +0.5 ± 3.9 ng/mL, P < 0.001). Results of RT-PCR indicated that compared to the placebo, vitamin D supplementation downregulated gene expression of IL-1β (P = 0.02), TNF-α (P = 0.02), and IFN-γ (P = 0.03) in PBMCs of diabetic HD patients (Figure 2). There was no statistically significant change following supplementation with vitamin D on gene expression of IL-4 and IL-6 in PBMCs of diabetic HD patients. Figure 2Effect of 12-week supplementation with vitamin D or placebo on expression ratio of TNF-α, IFN-γ, IL-1β, IL-4, and IL-6 genes in PBMCs of diabetic HD subjects. P-value was obtained from independent sample t-test. Data are means ± standard deviation. Additionally, vitamin D supplementation, compared to the placebo, downregulated gene expression of TGF-β (P = 0.04), PKC (P = 0.001), and MAPK1 (P = 0.02) in PBMCs of diabetic HD patients (Figure 3). 
Although not statistically significant, vitamin D supplementation led to a reduction of NF-κB (p = 0.75) expression in PBMCs isolated from diabetic patients compared to the placebo group. We did not observe any significant change following supplementation with vitamin D on gene expression of VEGF in PBMCs of diabetic HD patients. Figure 3Effect of 12-week supplementation with vitamin D or placebo on expression ratio of PKC, MAPK1, TGF-β, NF-κB, and VEGF genes in PBMCs of diabetic HD subjects. P-value was obtained from independent sample t-test. Data are means ± standard deviation.\n\nBODY.DISCUSSION:\nWe found that vitamin D supplementation for 12 weeks among diabetic HD patients had beneficial effects on the expression of a few genes related to inflammation and oxidative stress. To our knowledge, this is the first report of the effects of vitamin D supplementation on gene expression related to inflammation and oxidative stress in diabetic HD patients. HD subjects are susceptible to several metabolic complications, including increased inflammation and oxidative stress (Kotur-Stevuljevic et al., 2012). Our data demonstrated that vitamin D supplementation for 12 weeks in diabetic HD subjects downregulated gene expression of IL-1β, TNF-α, and IFN-γ compared to the placebo, but did not affect gene expression of IL-4 and IL-6. A study in healthy endurance-trained runners demonstrated an inverse relationship between vitamin D and TNF-α levels (Willis et al., 2012). Furthermore, in line with our findings, 50 μg/day of vitamin D supplementation for 9 months was able to suppress the production of TNF-α in subjects with congestive heart failure (Schleithoff et al., 2006). 1,25-(OH)2D3 also dose-dependently suppressed the production of IL-α, IL-6, and TNF-α by Escherichia coli lipopolysaccharide-stimulated monocytes (Müller et al., 1992). However, one study documented no significant changes in TNF-α and IL-6 concentrations after 12 weeks of high-dose vitamin D supplementation in healthy overweight populations (Carrillo et al., 2012). Chronic inflammation in CKD is not only associated with macro- and micro-cardiovascular outcomes, such as atherosclerosis, but is also one of the main players in the progression of malnutrition/protein-energy wasting, which in turn has led to the description of the malnutrition-inflammation-cachexia syndrome in CKD/ESRD patients (Kalantar-Zadeh, 2005). Hypoalbuminemia resulting from chronic inflammation and malnutrition is strongly related to mortality in dialysis populations (Kalantar-Zadeh et al., 2005; de Mutsert et al., 2009). Pro-inflammatory factors may directly lead to anorexia via effects on the brain. Furthermore, inflammatory cytokines, particularly IL-6, may be related to depression in CKD/ESRD patients, which is itself a predictor of morbidity and mortality (Taraz et al., 2015) and may result in decreased nutrient intake. Therefore, vitamin D, through its beneficial effects on inflammatory cytokines, may help decrease clinical and metabolic symptoms in diabetic HD patients. This study demonstrated that vitamin D supplementation for 12 weeks downregulated gene expression of TGF-β, PKC, and MAPK1 in PBMCs of diabetic HD patients, but did not influence gene expression of VEGF and NF-κB. Supporting our study, vitamin D administration at a dosage of 50,000 IU/week for 8 weeks in vitamin D-deficient subjects with PCOS significantly decreased the bioavailability of TGF-β1 (Irani et al., 2015).
In addition, 1,25-(OH)2D3 decreased gene expression of MAPK phosphorylation in macrophages (Xu et al., 2016). In another study, vitamin D protected human endothelial cells from ionizing radiation induced/oxidative stress by modulating the MAPKs/SirT1 axis (Marampon et al., 2016). Doxercalciferol also significantly decreased PKCα levels suggesting that PKC-mediated cardiac hypertrophy may be related to vitamin D deficiency (Choi et al., 2011). Excessive evidence exists suggesting that MAPK signaling pathways are critical for the synthesis and amplification of inflammatory factors (Carter et al., 1999). Especially, MAPK activation was closely associated with cytokine transcription and translation, which in turn make the target of anti-inflammatory treatment (Kaminska, 2005). However, calcitriol treatment had no significant effect on gene expression of TGF-β in microcardiovascular endothelial cells (Gonzalez-Curiel et al., 2016). In addition, both the low (1 nM) and the high (100 nM) doses of vitamin D3 used in vitro for 48 h did not able to restore the decreased gene expression of PKC isoenzymes in the T cells of systemic lupus erythematosus subjects (Czifra et al., 2014). Oxidative stress in patients with CKD may generate by uremic toxicity, persistent inflammatory state, deficiency of vitamins and microelements, and dialysis procedure itself (Stepniewska et al., 2015). Increased biomarkers of oxidative stress and free radicals in HD subjects are correlated with increased CVD risk and a large number of diseases related to uremia (Taccone-Gallucci et al., 2010).\n\nBODY.CONCLUSIONS:\nOverall, we found that vitamin D supplementation for 12 weeks among diabetic HD patients had beneficial effects on some gene expression related to inflammation and oxidative stress. This suggests that vitamin D supplementation may confer advantageous therapeutic potential for HD patients. Further research is needed in other participants and for longer periods to determine the efficacy of vitamin D supplementation.\n\nBODY.LIMITATIONS:\nThe limitations of our findings include short duration. Long-term interventions might result in better effects in other gene expression related to inflammation and oxidative stress. In addition, due to funding limitations in developing countries, we did not evaluate some gene expression related to insulin and lipid, and reactive oxygen species levels. Further investigations are required to evaluate this outcome. We did not know baseline levels of vitamin D among our participants, therefore, we were not able to comment on their commended dosage of vitamin D by the Kidney Disease Outcomes Quality Initiative (K/DOQI) guidelines. 
This limitation should be considered in the interpretation of our findings.\n\nBODY.AVAILABILITY OF DATA AND MATERIALS:\nThe primary data for this study are available from the authors on direct request.\n\nBODY.AUTHOR CONTRIBUTIONS:\nZA contributed to conception, data collection and manuscript drafting; HH, ES, HN, AS, MS, FK, and MT contributed to conception, data collection and manuscript drafting. All authors read and approved the final version of the paper.\n\nBODY.AUTHOR CONTRIBUTIONS.CONFLICT OF INTEREST STATEMENT:\nThe authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.\n\n**Question:** Compared to placebo (n = 30) every 2 weeks for 12 weeks, what was the result of vitamin D supplements at a dosage of 50,000 IU (n = 30) every 2 weeks for 12 weeks on gene expression of interleukin (IL), tumor necrosis factor alpha (TNF-α), and interferon gamma (IFN-γ)?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decreased
|
890
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Randomized Trial Comparing Esomeprazole and Rabeprazole in First-line Eradication Therapy for \n\n ABSTRACT.OBJECTIVE :\nCYP2C19 metabolic activity influences the efficacy of Helicobacter pylori eradication therapies comprising PPIs. Rabeprazole (RPZ) and esomeprazole (EPZ) are PPIs not extensively metabolized by CYP2C19. The aim of this study was to elucidate whether or not first-line triple therapies using RPZ or EPZ are equally effective in Japanese patients with different CYP2C19 genotypes. \n\nABSTRACT.METHODS :\nTwo-hundred patients infected with H. pylori were randomized to receive one of the following regimens: amoxicillin (750 mg), clarithromycin (200 mg), and either esomeprazole (20 mg) (EAC group) or rabeprazole (10 mg) (RAC group), twice a day for one week. The CYP2C19 polymorphisms were determined by polymerase chain reaction and the serum level of pepsinogens was measured. \n\nABSTRACT.RESULTS :\nThe eradication rates of the EAC and RAC regimens were 79.8% (95% confidential interval: 71.7-89.0%) and 74.7% (66.0-83.4%), respectively, in a per protocol (PP) analysis (p=0.488). The eradication rates of the EAC and RAC regimens were not significantly different between patients with the homo EM genotype (p=0.999) or hetero IM or PM genotypes (p=0.286). A lower PG I/II ratio was associated with lower eradication rates (p=0.025). \n\nABSTRACT.CONCLUSION :\nAlthough the eradication rate was less than 80%, the EAC and RAC regimens were equally effective in each CYP2C19 genotype group. The PG I/II ratio was associated with the results of EAC and RAC therapy in this series of patients. \n\nBODY.INTRODUCTION:\n\nHelicobacter pylori infection has been associated with gastro-duodenal and systematic diseases, and the eradication of H. pylori infection has been shown to reduce the risk of gastric cancer (1, 2). In 2014, the International Agency for Research on Cancer suggested that the eradication of H. pylori should be considered as a strategy for preventing gastric cancer (3). The Japanese health insurance system approved efforts to eradicate H. pylori for all infected patients to eliminate gastric cancer in 2013 (4), and the number of patients receiving eradication therapy is on the rise. Since the approval of H. pylori eradication in November 2000, first-line eradication therapy has consisted of one week of triple therapy with a proton pump inhibitor (PPI), amoxicillin (AMOX) and clarithromycin (CLA) (5). In this therapy, the role of the PPI is to achieve a high intragastric pH for acid-labile antibiotics. PPIs are known to stabilize AMOX in the gastric mucus and the eradication rates of regimens containing AMOX are affected by the dose of PPI (6). The effect of PPIs such as omeprazole (OPZ) or lansoprazole (LPZ) are affected by the metabolic activity of the hepatic enzyme, cytochrome P450 (CYP) 2C19 (7). 
Indeed, some previous reports have shown lower eradication rates of triple therapy with OPZ or LPZ in patients with wild-type CYP2C19 compared to that in patients with a mutated genotype (8, 9). In contrast, later PPIs such as rabeprazole (RPZ) and esomeprazole (EPZ) have less CYP2C19-mediated metabolism than OPZ or LPZ (10, 11), and a meta-analysis showed that EPZ and RPZ had higher eradication rates than LPZ or OPZ in the treatment of H. pylori infection (12). However, in Japan, the efficacy of triple therapy with RPZ or EPZ in association with CYP2C19 genetic polymorphism has never been compared in a prospective, randomized trial. In the present study, we conducted a prospective, open, randomized multi-center trial to elucidate whether or not triple therapies using RPZ or EPZ are equally effective in patients with different CYP2C19 genotypes. We also examined the association between several factors, including serum level of pepsinogens, and the results of triple therapy using EPZ or RPZ.\n\nBODY.MATERIALS AND METHODS.PATIENTS:\nPatients who received upper GI endoscopy at any of eight institutes in Aomori Prefecture were enrolled in the study from May through September 2014. The presence of H. pylori was confirmed with a rapid urease test, 13C-urea breath test (UBT), stool antigen test or serum or urine antibody. Patients who met any of the following criteria were excluded from the study: [1] age under 16 years or over 70 years; [2] received antibiotic and/or PPI treatment(s) within 2 weeks of study commencement; [3] active peptic ulcer(s); [4] a history of gastric surgery; [5] a history of H. pylori eradication therapy; or [6] suspected of having malignant disease(s). A total of 200 patients were registered and 1 patient in the RPZ group was excluded due to severe heart disease (Fig. 1). The characteristics of the patients who completed the study are shown in Table 1. All patients gave their written informed consent before receiving eradication therapy. Peripheral blood was obtained and leukocytes were used for the CYP2C19 genotype analysis. Genotyping procedures for identifying the CYP2C19 wild-type gene and two mutated alleles, CYP2C19m1 and CYP2C19m2, were performed by a PCR-restriction fragment length polymorphism (PCR-RFLP) method with allele-specific primers (13). Serum samples were stored at -20°C and the titer of IgG antibody to H. pylori was determined by an enzyme immunoassay (E-plate; Eiken, Tokyo, Japan) and the levels of pepsinogen (PG) I and PG II by a radioimmunoassay. Figure 1.Assignment of patients and subsequent study flow. The study enrolled 200 patients at 8 medical institutes in Aomori Prefecture. The patients were randomly assigned to either the EAC group (n=99) or the RAC group (n=100). Table 1. Characteristics of the Patients who Completed the Regimen. 
EAC RAC Total p value Number 94 95 189 Sex Men 43 (45.7) 47 (49.5) 90 (47.6) 0.663 Women 51 (54.3) 48 (50.5) 99 (52.4) Age (years) <40 8 ( 8.5) 7 ( 7.4) 15 ( 7.9) 0.800 ≥40 86 (91.5) 88 (92.6) 174 (92.1) Smoking Non 73 (77.7) 75 (78.9) 148 (78.3) 0.861 Current 21 (22.3) 20 (21.1) 41 (21.7) Drinking Non 43 (45.7) 46 (48.4) 89 (47.1) 0.771 Current 51 (54.3) 49 (51.6) 100 (52.9) PG I (ng/mL) <49.0 23 (24.2) 26 (27.7) 49 (25.9) 0.753 49.0≤<69.0 24 (25.3) 25 (26.6) 49 (25.9) 69.0≤<89.0 17 (17.9) 19 (20.2) 36 (19) 89.0< 31 (32.6) 24 (25.5) 55 (29.2) PG II (ng/mL) <15.0 17 (17.9) 26 (27.7) 43 (22.8) 0.270 15.0≤<23.0 29 (30.5) 30 (31.9) 59 (31.2) 23.0≤<31.0 19 (20.0) 18 (19.1) 37 (19.6) 31.0< 30 (31.6) 20 (21.3) 50 (26.4) PG I/II ratio <2.2 25 (26.3) 19 (20.2) 44 (23.3) 0.370 2.2≤<2.9 25 (26.3) 19 (20.2) 44 (23.3) 2.9≤<3.6 16 (16.8) 23 (24.5) 39 (20.6) 3.6< 29 (30.6) 33 (35.1) 62 (32.8) CYP2C19 homo EM 30 (31.9) 30 (31.6) 60 (31.7) hetero IM/PM 64 (68.1) 65 (68.4) 129 (68.3) 0.999 anti-Hp antibody <24 34 (36.2) 28 (29.5) 62 (32.8) 0.514 (U/mL) 24≤<46 29 (30.9) 36 (37.9) 65 (34.4) ≥46 31 (33.0) 31 (32.6) 62 (32.8) N, (%) \n\nBODY.MATERIALS AND METHODS.ERADICATION THERAPY:\nPatients were randomly assigned to one of the following eradication therapy groups: [1] EAC group: EPZ (20 mg) twice a day (bid), AMPC (750 mg) bid, and CLA (200 mg) bid for one week; or [2] RAC group: RPZ (10 mg) bid, AMPC (750 mg) bid, and CLA (200 mg) bid for one week. After completing the treatment, any PPIs and other anti-ulcer agents that might affect the viability of H. pylori or urease activity were not used. Eradication assessment using a UBT (14) or stool antigen test (15) was performed 6 (±2) weeks after the completion of eradication therapy. One patient in the RAC group could not complete the regimen due to severe itching. During the treatment, eruption was observed in three patients, loose stool was recorded in two patients, and one patient complained of a metallic taste. However, these patients completed their regimens. Five patients in the EAC group and four patients in the RAC group did not visit the hospital and dropped out.\n\nBODY.MATERIALS AND METHODS.ETHICS ASSIGNMENT:\nThe unit of randomization was the individual, and the allocation sequence was generated by an unaffiliated person at the Hirosaki University Hospital who was blinded to the details of the study. This study was approved by Hirosaki University Medical Ethics Committee and registered to UMIN Clinical Trials Registry (UMIN000014366).\n\nBODY.MATERIALS AND METHODS.STATISTICAL ANALYSIS:\nDifferences between the groups were compared using the χ2 test. A p value of <0.05 was considered significant.\n\nBODY.RESULTS:\nThe eradication rates of the EAC and RAC regimen groups were 79.8% [95% confidential interval (CI): 71.7-89.0%] and 74.7% (66.0-83.4%) for the per protocol (PP) analysis and 75.8% (67.4-84.2%) and 71.0% (62.1-79.9%) for the intention-to-treat (ITT) analysis, respectively (Fig. 2A). No significant differences were observed between the two regimens. Regarding the CYP2C19 genetic polymorphism status, 62 patients were assigned to the homozygous extensive metabolizer (homo EM) group, 102 patients with 1 mutation were assigned to the heterozygous intermediate metabolizer group (hetero IM), and 35 patients with 2 mutations were assigned to the poor metabolizer (PM) group. Fig. 2B shows the PP-based eradication rates in relation to the CYP2C19 polymorphism. 
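To illustrate how per-protocol eradication rates, their 95% CIs, and the between-group comparison reported above can be computed, here is a minimal Python sketch; the success counts are back-calculated approximations of the reported percentages, and the Wald-type CI and chi-square test are assumptions that may differ from the authors' exact methods.

```python
# Minimal sketch (assumption: normal-approximation CIs and a 2x2 chi-square
# test; the counts below only approximate the reported per-protocol results).
import numpy as np
from scipy.stats import chi2_contingency, norm

def rate_with_ci(successes, n, alpha=0.05):
    """Eradication rate with a normal-approximation 95% CI."""
    p = successes / n
    z = norm.ppf(1 - alpha / 2)
    half = z * np.sqrt(p * (1 - p) / n)
    return p, p - half, p + half

# Illustrative per-protocol counts (back-calculated, not stated in the paper)
eac_success, eac_n = 71, 89
rac_success, rac_n = 68, 91

for label, s, n in [("EAC", eac_success, eac_n), ("RAC", rac_success, rac_n)]:
    p, lo, hi = rate_with_ci(s, n)
    print(f"{label}: {p:.1%} (95% CI {lo:.1%} to {hi:.1%})")

# Chi-square test comparing the two regimens
table = np.array([[eac_success, eac_n - eac_success],
                  [rac_success, rac_n - rac_success]])
chi2, p_value, dof, _ = chi2_contingency(table)
print(f"chi-square p-value: {p_value:.3f}")
```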
The eradication rate of the EAC and RAC regimens in the homo EM patients was 73.3% (CI: 57.5-89.1%) and 76.7% (CI: 61.6-91.8%), respectively. In the hetero IM and PM patients, the eradication rate of the EAC regimen was 82.8% (73.6-92.0%), and that of the RAC regimen was 73.8% (63.1-84.5%). The eradication rates of the EAC and RAC regimen groups were not significantly different between the patients with homo EM genotype and those with hetero IM or PM genotype. The eradication rates of homo EM patients were not markedly different from those of the hetero IM or PM patients in the EAC (p=0.287) or RAC (p=0.999) regimen groups. Figure 2.(A) The eradication rates of EAC and RAC therapy by a per-protocol (PP) analysis or intention-to-treat analysis. (B) The eradication rates of EAC and RAC therapy by a PP analysis in patients with homo EM versus those with hetero IM or PM. Table 2 shows the association between the patients' characteristics and the results of eradication therapy in 189 patients who completed the protocol. No factors were associated with the results of eradication therapy. Table 3 shows the association between the serum pepsinogen levels and the results of eradication therapy. Although the levels of PG I and II were not significantly associated with the treatment results, the eradication rates tended to increase with an increasing PG I/II ratio (p=0.025). The eradication rate was 68.2% in the 44 patients with a PG I/II ratio lower than 2.2 and 84.2% in those with a PG I/II ratio higher than 2.9. Table 2. Patients’ Characteristics and the Results of Eradication Therapy. Eradication p value failure success Sex Men 19 (21.1) 71 (78.9) 0.729 Women 24 (24.2) 75 (75.8) Age (years) <40 4 (26.7) 11 (73.3) 0.749 ≥40 39 (22.4) 135 (77.6) Smoking Non 34 (23) 114 (77) 0.999 Current 9 (22) 32 (78) Drinking Non 17 (19.1) 72 (80.9) 0.299 Current 26 (26) 74 (74) CYP2C19 homo EM 15 (25) 45 (75) 0.710 Hetero IM+PM 28 (21.7) 101 (78.3) anti-Hp antibody <24 14 (22.6) 48 (77.4) 0.997 (U/mL) 24≤<46 15 (23.1) 50 (76.9) ≥46 14 (22.6) 48 (77.4) N, (%) Table 3. Level of Pepsinogens and the Results of Eradication Therapy. eradication p value success (N) failure (N) rate (%) PG I (ng/mL) <49.0 35 14 71.4 0.177 49.0≤<69.0 37 12 75.5 69.0≤<89.0 29 7 80.6 89.0< 45 10 81.8 PG II (ng/mL) <15.0 35 8 81.4 0.573 15.0≤<23.0 46 13 78.0 23.0≤<31.0 26 11 70.3 31.0< 39 11 78.0 PG I/II ratio <2.2 30 14 68.2 0.025 2.2≤<2.9 31 13 70.5 2.9≤<3.6 33 6 84.6 3.6< 52 10 83.9 \n\nBODY.DISCUSSION:\nEPZ has been the most popular PPI worldwide for more than 10 years. However, in Japan, the use of EPZ was only just approved in September 2011, and thus relatively few studies have examined the efficacy of EPZ in H. pylori eradication therapy compared with other PPIs in a Japanese population. The present study is the first prospective randomized study to compare the efficacy of EPZ and RPZ in first-line eradication therapy for H. pylori infection among Japanese patients with different CYP2C19 genotypes. Suppression of gastric acid is important for the eradication therapy of H. pylori infection (6, 16). Better eradication rates have been observed in PM patients undergoing PPI-based triple therapies using OPZ or LPZ than in EM patients (7, 8). Similarly, a recent meta-analysis showed that CYP2C19 loss-of-function variants are associated with increased H. pylori eradication rates in patients receiving these therapies (17). 
In this study, the eradication rates in EM patients were not significantly different from those in IM and PM patients receiving the EAC and RAC regimens. Therefore, the genotyping of CYP2C19 is not necessary for first-line eradication therapy when EPZ or RPZ are used with AMOX and CLA. However, even the highest eradication rate, which was observed in the IM and PM patients treated with EAC regimen, remained 82.8%. Generally, regimens with >90% eradication rate are recognized as effective (18). The results of this study indicate that 7-days of EAC and RAC regimens both had an insufficient effect without testing susceptibility to antibiotics. Indeed, the European guideline suggests that EAC and RAC regimens should last 10-14 days and PPIs should be administered at high dose (19). Furthermore, EAC and RAC regimens containing higher doses of EPZ or RPZ had higher eradication rates (20). To use EAC and RAC regimens as the first-line therapy, a longer duration and higher dose of EPZ or RPZ as well as testing for CLA resistance should be approved by the Japanese health insurance system. The most common cause of failure of first-line therapy is bacterial resistance to CLA (3). Determining CLA resistance is important for predicting the results of triple therapy. However, bacterial culture is not always available. Therefore, it would be useful to determine other factors associated with the results of first-line therapy using RPZ or EPZ. Recently, ABC classification for the risk of gastric cancer has been introduced by many health surveys (21). In this classification, patients are tested serum antibody to H. pylori with the serum levels of PGs. Therefore, information about the serum level of PGs is available for these patients before starting eradication therapy. The results of the present study showed that a lower PG I/II ratio was associated with failure of EAC or RAC therapy. Generally, low levels of PG I reflect mucosal atrophy of the gastric corpus, while higher levels of PG II reflect more severe gastric mucosal inflammation. The PG I/II ratio increases after successful eradication (22) and is usually high in patients not infected by H. pylori (23). Therefore, the present results suggest that failure of H. pylori eradication by EAC or RAC therapy might be more frequent in stomachs with advanced mucosal atrophy and persistent inflammation. However, a previous study showed that a low pepsinogen I/II ratio was associated with successful eradication in a different eradication regimen (24). Further studies are required to determine whether or not the serum level of PGs is associated with the results of eradication therapy and to elucidate whether or not alternative therapy is required for patients who have a lower level of PG I/II. The limitation of this study is the lack of information about bacterial susceptibility to antibiotics. As already mentioned, resistance to CLA is the most common cause for the failure of first-line therapy (5). However, bacterial culture is not always successful, and it takes more than 10 days to obtain the results of susceptibility to antibiotics in most medical institutions. Furthermore, at present, the tests for neither the susceptibility of H. pylori to antibiotics nor the detection of the point mutation of bacterial 23S rRNA gene are covered by the Japanese health insurance system. For many patients, the extra cost and longer duration before starting treatment are not acceptable. In addition, AMOX-resistant H. pylori is very rare in Japan (5). 
We therefore did not examine bacterial susceptibility to antibiotics in this study. Since this study is a prospective randomized trial, the influence of bacterial susceptibility on the differential eradication rates between EAC and RAC therapies would be minimal. However, we cannot exclude the possibility that a higher CLA resistance rate caused the lower eradication rate in patients with a lower PG I/II ratio. In conclusion, both the EAC and RAC regimens were safe and similarly effective as first-line eradication therapies for H. pylori infection, regardless of CYP2C19 genotype. However, the eradication rates of both therapies were insufficient under the current approval of the Japanese health insurance system. The PG I/II ratio seems to be associated with the results of EAC or RAC therapy.\n\nBODY:\nThe authors state that they have no Conflict of Interest (COI).\n\n**Question:** Compared to amoxicillin (750 mg), clarithromycin (200 mg), and rabeprazole (10 mg) (RAC group) twice a day for one week, what was the result of amoxicillin (750 mg), clarithromycin (200 mg), and esomeprazole (20 mg) (EAC group) twice a day for one week on H. pylori eradication?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
1026
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: A randomised controlled trial to prevent smoking relapse among recently quit smokers enrolled in employer and health plan sponsored quitlines\n\n ABSTRACT.OBJECTIVE:\nTo test adding an interactive voice response (IVR)-supported protocol to standard quitline treatment to prevent relapse among recently quit smokers.\n\nABSTRACT.DESIGN:\nParallel randomised controlled trial with three arms: standard quitline, standard plus technology enhanced quitline with 10 risk assessments (TEQ-10), standard plus 20 TEQ assessments (TEQ-20).\n\nABSTRACT.SETTING:\nQuit For Life (QFL) programme.\n\nABSTRACT.PARTICIPANTS:\n1785 QFL enrolees through 19 employers or health plans who were 24+ h quit.\n\nABSTRACT.INTERVENTIONS:\nQFL is a 5-call telephone-based cessation programme including medications and web-based support. TEQ interventions included 10 or 20 IVR-delivered relapse risk assessments over 8 weeks with automated transfer to counselling for those at risk.\n\nABSTRACT.MAIN OUTCOME MEASURES:\nSelf-reported 7-day and 30-day abstinence assessed at 6-month and 12-month post-enrolment (response rates: 61% and 59%, respectively). Missing data were imputed.\n\nABSTRACT.RESULTS:\n1785 were randomised (standard n=592, TEQ-10 n=602, TEQ-20 n=591). Multiple imputation-derived, intent-to-treat 30-day quit rates (95% CI) at 6 months were 59.4% (53.7% to 63.8%) for standard, 62.3% (57.7% to 66.9%) for TEQ-10, 59.4% (53.7% to 65.1%) for TEQ-20 and 30-day quit rates at 12 months were 61.2% (55.6% to 66.8%) for standard, 60.6% (56.0% to 65.2%) for TEQ-10, 54.9% (49.0% to 60.9%) for TEQ-20. There were no significant differences in quit rates. 73.3% of TEQ participants were identified as at-risk by IVR assessments; on average, participants completed 0.41 IVR-transferred counselling calls. Positive risk assessments identified participants less likely (OR=0.56, 95% CI 0.42 to 0.76) to be abstinent at 6 months.\n\nABSTRACT.CONCLUSIONS:\nStandard treatment was highly effective, with 61% remaining abstinent at 12 months using multiple imputation intent-to-treat (intent-to-treat missing=smoking quit rate: 38%). TEQ assessments identified quitters at risk for relapse. However, adding IVR-transferred counselling did not yield higher quit rates. 
Research is needed to determine if alternative designs can improve outcomes.\n\nABSTRACT.TRIAL REGISTRATION NUMBER:\nNCT00888992.\n\nBODY:\nStrengths and limitations of this studyFirst large randomised controlled trial to examine the use of interactive voice response (IVR) technology for prevention of smoking relapse.Large sample of participants recruited from employer and health plan quitline programmes serving tobacco users across the USA.Protocol used an innovative and potentially cost-effective technology to connect those at risk for relapse to immediate counselling.We compared two risk assessment intensities over the first 8 weeks following successful abstinence.Although participants rated the IVR system positively, fewer accepted immediate risk-focused counselling than anticipated; thus, the increase in treatment exposure was small. Smoking continues to be the leading preventable cause of premature death and disease in the USA and worldwide.1\n2 Quitting tobacco is one of the most important steps a tobacco user can take to improve their health.3 Free publicly (state and government) and privately (eg, employer and health plan) funded telephone-based tobacco cessation quitlines are available throughout the USA and increasingly around the world to help tobacco users quit. With the advent of the Affordable Care Act in the USA, telephone-based tobacco cessation services offer an important resource to ensure that tobacco users have access to the empirically supported tobacco treatments that are required components of coverage. Although extensive evidence exists regarding the efficacy and cost-effectiveness of tobacco cessation quitlines,4–6 as with other behavioural and pharmacological tobacco treatment options, little is known about how best to intervene with callers at the time of greatest risk for relapse. A recent Cochrane Review concluded that previously tested behavioural relapse prevention interventions, which primarily included skills training or extended therapeutic contact, did not provide additional benefit over no treatment or intervention without a relapse prevention component.7 Survival studies have identified that the highest relapse risk, on average, is in the first 2 weeks after a quit attempt; however, there is considerable between-individual and within-individual variability in risk-related symptom patterns.8 Relapse prevention interventions may be most effective if provided at the time of relapse risk given the multifaceted and individualised nature of withdrawal and environmentally induced relapse risks over time.8 Interactive voice response (IVR) technology offers a resource-efficient solution for connecting tobacco users with relapse prevention interventions at the time of need, especially those already enrolled in phone-based tobacco cessation treatment. IVR is increasingly incorporated into some tobacco treatment protocols,9\n10 and has been used in chronic disease management, posthospital discharge management, medication compliance monitoring, and in a small number of addiction and depression self-help interventions.11 IVR has been piloted as a smoking relapse prevention tool following a single group counselling call and 12 weeks of varenicline,12 but was insufficiently powered to draw meaningful conclusions about the effectiveness of the nine-call and nurse call-back intervention. 
The purpose of this present study is to examine the effectiveness of augmenting standard multisession tobacco quitline treatment with repeated IVR assessments to monitor recent quitters' relapse risk, and transfer those at risk to a Quit Coach for a brief, focused, intervention targeting the identified relapse trigger. We hypothesised that smokers receiving IVR risk assessments during the first 2 months after quitting would be more likely to stay quit than those receiving standard quitline treatment, and that more frequent assessments (20 vs 10) would be more beneficial.\n\nBODY.METHODS.STUDY DESIGN:\nIn this parallel randomised controlled trial, tobacco users enrolled in the Quit For Life (QFL) programme who achieved abstinence for at least 24 h after their quit date were randomised into one of three study groups: (1) standard treatment, (2) Technology Enhanced Quitline-10 (TEQ-10): standard treatment plus 10 IVR-delivered relapse risk assessments which triggered a transfer to a Quit Coach for participants exceeding risk thresholds and (3) Technology Enhanced Quitline-20 (TEQ-20): same as the TEQ-10 condition except with 20 relapse risk assessments rather than 10. Participants were randomised using blocked randomisation to ensure similar group sizes throughout the study; every nine participants within each employer or health plan contract were randomly assigned with an allocation ratio for the three groups of 3:3:3. These random allocation tables were generated by the study management team and integrated into the QFL computer system where they were not accessible to coaches or participants. Once study eligibility was confirmed, the Quit Coach clicked a 'Randomise' button and the system automatically assigned the participant to the group listed in the next available cell of the allocation table. The planned and final sample size was 1785 and was determined a priori to achieve at least 80% power to detect 30-day point prevalence quit rate differences of 8% between any pair of three arms for intent-to-treat (ITT) analyses, including the traditional penalised imputation approach in which non-responders to data collection are considered relapsed13 and the state-of-the-art multiple imputation method14 (assuming quit rates of 29.6%, 37.6% and 45.6% at 6 months and 26.6%, 34.6% and 42.6% at 12 months for standard care, TEQ-10 and TEQ-20, respectively; n=595) and of 10% for a responder analysis between standard care and either intervention at 6 or 12 months (assuming quit rates of 52.1% standard care, 62.1% TEQ-10, 62.1% TEQ-20; assuming 25% attrition, with 446 responders per arm). The primary analysis using ITT multiple imputation has slightly greater than 80% power because, like ITT using penalised imputation (missing=smoking), it uses all persons with baseline data. However, compared with penalised imputation, or responder-only analysis, multiple imputation ITT uses more information and produces results with greater power, less bias, and greater accuracy.13\n14 Participants gave informed consent prior to study enrolment and received a US$25 gift card for completed 6-month and 12-month follow-up surveys (potential for US$50 total).\n\nBODY.METHODS.PARTICIPANTS:\nTobacco users were eligible for the study if they had enrolled in the QFL programme between April 2010 and October 2012 through 1 of 19 employers or health plans who offered QFL as part of their benefits package. 
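To make the sample-size justification above concrete, here is a minimal Python sketch of an a priori power check for an 8-percentage-point difference in quit rates between two arms of roughly 595 participants each; the two-sided alpha of 0.05 and the particular pair of assumed quit rates are illustrative assumptions, so the result only approximates the authors' calculation.

```python
# Minimal sketch of an a priori power check for detecting an 8-percentage-point
# difference in 30-day quit rates between two arms of ~595 participants each.
# Two-sided alpha = 0.05 is an assumption; the paper does not state it here.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

p_low, p_high = 0.376, 0.456  # one illustrative pair of assumed 6-month rates
effect = proportion_effectsize(p_high, p_low)            # Cohen's h
power = NormalIndPower().power(effect_size=effect, nobs1=595, ratio=1.0, alpha=0.05)
print(f"approximate power: {power:.2f}")
```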
Because the purpose of this study was to examine the potential benefit of IVR assessments, and their triggering of counselling at the point of need to reduce relapse, tobacco users were screened for study criteria and invited to participate after reporting they had been quit for 24 h or more after their quit date. This typically occurred during their second or third call (their quit date call or the call after) of the five-call programme. Additional eligibility criteria included being English-speaking, 18 years of age or older, and having access to a touch-tone phone. Tobacco users were excluded if they exclusively used smokeless tobacco, were actively participating in another tobacco cessation programme, had previously enrolled in the QFL programme during the past 6 months, or had limited phone access (eg, incarceration). Of the 3723 tobacco users who received the study offer, 1840 (49.4%) agreed to participate and 1785 completed the baseline survey and were randomised to one of the three study groups (figure 1). Figure 1CONSORT Diagram. *36 participants did not receive all of their relapse risk assessments due to a programming error in the interactive voice response (IVR) code.\n\nBODY.METHODS.THE QFL PROGRAMME:\nQFL, operated by Alere Wellbeing (formerly Free & Clear), has been commercially available for nearly 30 years, and its effectiveness and cost-effectiveness has been demonstrated in various studies.5\n15–19 The QFL programme includes five counselling calls (an initial assessment and planning call plus four additional proactive outbound calls from an expert Quit Coach). Quit Coaches are required to have a bachelor's degree in a related field, complete more than 200 h of training, and receive ongoing supervision and feedback. The call schedule and content is tailored to each participant's needs. The default call schedule includes the initial planning and assessment call, a quit date call (QDC), a quit date follow-up call 7 days after the QDC, and two additional follow-up calls that take place approximately every 3 weeks after the previous call. Participants are also encouraged to call in for support as needed (ie, participant-initiated inbound calls). Calls vary in length: the planning and assessment call is 25–30 min, while other calls are 15–20 min. Calls focus on creating a quit plan, developing problem-solving and coping skills, identifying social support, and using cessation medications to achieve and maintain abstinence. Participants may receive over-the-counter nicotine replacement products directly via mail (or other prescription cessation medications depending upon their cessation benefit with their employer or health plan). They also receive a printed quit guide, and access to Web Coach, an online interactive web-based programme designed to complement the phone programme.\n\nBODY.METHODS.IVR INTERVENTION:\nIn the two TEQ groups, participants were contacted for relapse risk assessments through automated IVR calls over their first 8 weeks postquit. Two intensities of IVR monitoring were examined. TEQ-10 participants were contacted twice weekly for the first 2 weeks, then weekly for 6 weeks. TEQ-20 participants were contacted daily for the first 2 weeks, then weekly for 6 weeks. An IVR service contractor programmed and delivered the risk assessments (approximately 5 min), which included questions to identify relapse risk on five factors: lapses, cravings, negative affect, self-efficacy and motivation to remain quit. 
An algorithm was used to flag participants as 'at risk' if they answered any of the screening questions over an established threshold, based on previous research.20–22 Participants who exceeded the threshold were then transferred directly to a Quit Coach for a brief intervention (approximately 15 min) specifically addressing the risk factor(s) that triggered their transfer. Intervention fidelity was monitored by study staff and investigators. Randomly sampled recordings of IVR-triggered telephone counselling sessions were rated for compliance with the study protocol (the primary criterion being whether the counsellor focused the call on identified relapse risk(s)) by two independent reviewers (a study staff member and a Quit Coach supervisor). Reviews revealed some Quit Coaches did not receive information about the risk factor that triggered a participant's transfer due to technological issues. These programming errors were corrected by the IVR vendor. In these circumstances, participants still received empirically based tobacco cessation support at the time of need. As part of quality monitoring, any protocol deviations were addressed with individual feedback from Quit Coach supervisors and ongoing training for all Quit Coaches.\n\nBODY.MEASURES.BASELINE MEASURES:\nDuring registration and the first Quit Coach call, participants completed standard assessment questions regarding demographic characteristics (age, gender, health comorbidities) and tobacco use (tobacco type, time to first use, cigarettes per day, number of previous quit attempts, social contact with smokers). After study recruitment, participants completed an additional baseline measure assessing age, race, ethnicity, education, depressive symptoms (PHQ-2; 2-item measure21), and stress (Perceived Stress Scale; 4-item measure23).\n\nBODY.MEASURES.IVR-POSITIVE SCREENS AND PROGRAM CALL COMPLETION:\nThe IVR system stored the number of relapse risk assessments participants answered, participants' responses to the relapse risk questions, and the number of IVR assessments on which a participant had a positive risk screen. Counselling calls completed as a result of a positive IVR screen and successful transfer to a Quit Coach (ie, IVR-transferred counselling calls) were tracked in addition to standard programme calls (5 planned calls plus participant-initiated inbound calls).\n\nBODY.MEASURES.OUTCOMES MEASURED AT 6 AND 12 MONTHS:\nAt 6 and 12 months after randomisation, participants completed telephone interviewsi administered by outcomes evaluators from the Indiana University Center for Survey Research who were blind to study condition to measure abstinence. All participants rated their satisfaction with the QFL service, and those in the TEQ groups were asked their opinions of the IVR system during the 6-month survey. When participants could not be reached after repeated phone attempts, we also mailed a brief paper survey with only the smoking abstinence questions. Survey response rates were 61% and 59% at 6 and 12 months, respectively. The primary study outcomes were quit rates at 6 and 12 months, defined as 7-day and 30-day self-reported point prevalence abstinence measured by the question, 'In the last [7/30] days, have you smoked a cigarette, even a puff?'. 
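The "at risk" flagging rule described above (transfer to a Quit Coach when any screening item crosses an established threshold) can be pictured with a small sketch; the item names, scales, and cut-off values below are hypothetical, since the paper does not publish the exact thresholds.

```python
# Minimal sketch of the IVR 'at risk' flagging rule: a participant is flagged
# (and would be offered transfer to a Quit Coach) if ANY item crosses its
# threshold. Item scales and cut-offs are hypothetical placeholders.
from typing import Dict

THRESHOLDS = {          # higher response = more risk
    "lapses": 1,        # any cigarettes since the last assessment
    "craving": 7,       # craving intensity rating
    "negative_affect": 7,
}
REVERSE_THRESHOLDS = {  # lower response = more risk
    "self_efficacy": 4, # confidence to stay quit
    "motivation": 4,    # motivation to remain quit
}

def at_risk(responses: Dict[str, int]) -> bool:
    """Return True if any IVR item exceeds (or falls below) its threshold."""
    high_risk = any(responses.get(k, 0) >= v for k, v in THRESHOLDS.items())
    low_protective = any(responses.get(k, 10) <= v for k, v in REVERSE_THRESHOLDS.items())
    return high_risk or low_protective

# Example assessment: a strong craving alone triggers a transfer
print(at_risk({"lapses": 0, "craving": 8, "negative_affect": 3,
               "self_efficacy": 8, "motivation": 9}))   # -> True
```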
Time to relapse was examined as a secondary outcome and was identified using three questions: 'Since [end of two-week grace period], have you ever smoked at least a part of a cigarette on 7 days in a row?'; 'Since [end of two-week grace period], have you ever smoked any in each of two consecutive weeks (at least 1 day for 2 weeks in a row)?', and [if yes to either of the 2 previous questions] 'What was the first day of that 7-day or two-week period that you smoked?'.16\n\nBODY.MEASURES.STATISTICAL ANALYSES:\nMissing values on all 6-month and 12-month outcomes were imputed using the state-of-the-art imputation method called 'multiple imputation'24 with sequential regression.25 Multiple imputation allows all cases (N=1785) to be included in the analysis. Two imputation models were performed: one for 7-day and one for 30-day point prevalence abstinence. With this approach, variables with no missing values served as independent variables. The variable with the least amount of missing data was imputed first, and used as the dependent variable. Once imputation was carried out, the next variable with the least amount of missing data was imputed using all variables with non-missing values as independent variables including the variable imputed on the previous step. This cycle continued until all missing values were imputed resulting in a series of regressions, hence the name sequential regression. Variables used to provide information for imputation (and to be imputed themselves if missing) included intervention group (no data missing), baseline characteristics listed in table 1 (with the exception of years of tobacco use and stress, due to high correlations with age and depression, respectively), and number of planned calls plus participant-initiated inbound counselling calls completed. These variables had a minimal amount of missing data with, at most, 3.5% missing for the motivation variable. Abstinence outcomes measured at both 6 and 12 months were included in each imputation model and had the greatest amount of missing data. Ten multiple-imputed data sets were generated and analysed. 
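The sequential-regression multiple imputation described above can be approximated with a chained-equations imputer; the sketch below uses scikit-learn's IterativeImputer with toy variables as a stand-in for the authors' Mplus/SAS workflow, so it is a conceptual analogy rather than their implementation.

```python
# Minimal sketch of chained-equations ("sequential regression") multiple
# imputation: generate several completed data sets, analyse each, then combine.
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
# Toy data: group, number of calls, and a partially missing 0/1 abstinence outcome
df = pd.DataFrame({
    "teq_group": rng.integers(0, 3, 300),
    "calls": rng.poisson(4, 300).astype(float),
    "abstinent_6m": rng.integers(0, 2, 300).astype(float),
})
df.loc[rng.choice(300, 120, replace=False), "abstinent_6m"] = np.nan  # ~40% missing

imputed_quit_rates = []
for m in range(10):  # ten imputed data sets, as in the trial
    imputer = IterativeImputer(sample_posterior=True, random_state=m)
    completed = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)
    # Round the continuous imputations back to a 0/1 outcome (a simplification)
    completed["abstinent_6m"] = completed["abstinent_6m"].clip(0, 1).round()
    imputed_quit_rates.append(completed["abstinent_6m"].mean())

print(f"pooled quit-rate estimate: {np.mean(imputed_quit_rates):.1%}")
```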
Table 1 Baseline characteristics (N=1785) Variable Overall (N=1785) Standard (N=592) Standard plus TEQ-10 (N=602) Standard plus TEQ-20 (N=591) Demographics Age, mean (SD) 43.4 (11.9) 43.3 (12.2) 44.0 (11.5) 43.0 (12.0) Gender, N (%) Female 967 (54.2) 321 (54.2) 326 (54.1) 320 (54.2) Male 818 (45.8) 271 (45.8) 276 (45.9) 271 (45.8) Race, N (%) Caucasian 1489 (83.4) 497 (84.0) 501 (83.2) 491 (83.1) African-American 204 (11.4) 63 (10.6) 71 (11.8) 70 (11.8) Other 51 (2.9) 16 (2.7) 19 (3.2) 16 (2.7) Missing 43 (2.3) 16 (2.7) 11 (1.8) 14 (2.4) Education, N (%) High school or less 667 (37.4) 229 (38.7) 225 (37.4) 213 (36.0) More than high school 1117 (62.6) 363 (61.3) 376 (62.6) 378 (64.0) Comorbidities, N (%) Asthma 176 (9.9) 62 (10.5) 61 (10.1) 53 (9.0) COPD 108 (6.0) 39 (6.6) 32 (5.3) 37 (6.3) CAD 75 (4.2) 27 (4.6) 25 (4.2) 23 (3.9) Diabetes 138 (7.7) 53 (8.9) 46 (7.6) 39 (6.6) Any chronic comorbidity* 395 (22.1) 136 (23.0) 136 (22.6) 123 (20.8) Tobacco use Time to first cigarette, N (%), min 5 539 (30.2) 165 (27.9) 197 (32.7) 177 (30.0) 6–30 618 (34.6) 213 (36.0) 201 (33.4) 204 (34.5) 31–60 285 (16.0) 89 (15.0) 101 (16.8) 95 (16.1) >60 289 (16.2) 105 (17.7) 88 (14.6) 96 (16.2) Missing 54 (3.0) 20 (3.4) 15 (2.5) 19 (3.2) Cigarettes per day, mean (SD) 16.8 (9.0) 16.8 (9.1) 17.2 (8.9) 16.4 (9.1) Number of quit attempts, N (%) 0–1 377 (21.1) 116 (19.6) 127 (21.1) 134 (22.7) 2–5 1033 (57.9) 371 (62.7) 335 (55.7) 327 (55.3) 6+ 347 (19.4) 97 (16.4) 127 (21.3) 122 (20.6) Missing 28 (1.6) 8 (1.4) 12 (2.0) 8 (1.4) Tobacco use ≥20 years, N (%) Yes 1098 (61.5) 361 (61.0) 383 (63.6) 354 (59.9) No 671 (37.6) 227 (38.3) 211 (35.1) 233 (39.4) Missing 16 (0.9) 4 (0.7) 8 (1.3) 4 (0.7) Social contact with smokers, N (%) No 470 (26.3) 146 (24.7) 179 (29.7) 145 (24.5) Work 572 (32.0) 192 (32.4) 188 (31.2) 192 (32.5) Home 342 (19.2) 122 (20.6) 104 (17.3) 116 (19.6) Both work and home 344 (19.3) 112 (18.9) 111 (18.4) 121 (20.5) Missing 57 (3.2) 20 (3.4) 20 (3.3) 17 (2.9) Scale scores Motivation to quit, mean (SD) 9.1 (1.3) 9.0 (1.3) 9.0 (1.2) 9.1 (1.2) Depression, mean (SD) 1.0 (1.3) 1.0 (1.3) 1.0 (1.3) 1.0 (1.3) Perceived stress, mean (SD) 4.3 (2.8) 4.3 (2.9) 4.2 (2.8) 4.5 (2.8) *Defined as having asthma or COPD or CAD or diabetes. CAD, coronary artery disease; COPD, chronic obstructive pulmonary disease; TEQ-10, Technology Enhanced Quitline-10; TEQ-20, Technology Enhanced Quitline-20. Logistic regression models were used to model intervention efficacy on the 6-month and 12-month outcomes of abstinence, while adjusting for theoretically important potentially confounding baseline covariates.26 There were four main outcomes (7-day and 30-day point prevalence quit rates assessed at 6 and 12 months) and each was analysed with a separate logistic regression model. The multiple imputation quit rate results were our primary outcome metric. However, outcomes were reported using three methods: (1) for all survey respondents (responder analysis), (2) using penalised imputation reporting (assumes non-respondents are treatment failures) which is the traditional ITT analysis used in the smoking literature and (3) with results imputed for participants with missing data using the state-of-the-art multiple imputation method. 
Although ITT outcomes using penalised imputation are commonly reported in cessation trials, methodological research, including simulation studies, have shown that multiple imputation ITT uses more information and produces greater power, less bias, and more accuracy compared with simpler ad hoc methods for handling missing data such as traditional penalised imputation ITT, or responder-only analysis.13\n14 Missing values for relapse date were imputed using the procedures described above. Using this relapse date and the baseline date, proportional hazards regression was used to examine factors associated with time to relapse. The variables representing number of calls (table 2) were generally skewed to the right with a large number of zeroes. These data were complete and did not require imputation. To compare study groups on these count variables, a generalised linear model with log link function and negative binomial distribution was used. Six-month satisfaction data were also analysed (without imputation) to determine levels of satisfaction with the QFL programme and IVR protocol. The satisfaction variable was not imputed since it was not a primary outcome. Satisfaction comparisons between intervention groups were made using a χ2 test. Mplus software for Windows (V.7.2) was used to generate imputed data and PROC MIANALYZE from SAS/STAT software (V.9.3) was used to pool results and analyse the imputed data sets. All other statistical comparisons were performed using SAS/BASE and SAS/STAT software for Windows (V.9.3). Table 2 Descriptives for number of phone calls (N=1785) Variable Overall (N=1785) Standard (N=592) Standard plus TEQ-10 (N=602) Standard plus TEQ-20 (N=591) p Value* (1) Planned programme counselling calls (1–5), mean (SD); range 3.29 (1.09); 1–5 3.29 (1.09); 1–5 3.32 (1.09); 1–5 3.25 (1.08); 1–5 0.8085 (2) Participant-initiated inbound calls, mean (SD); range 0.38 (1.05); 0–14 0.37 (0.92); 0–8 0.40 (1.13); 0–14 0.37 (1.08); 0–12 0.9082 ≥1 vs 0, N (%) 284 (15.9) 99 (16.7) 96 (16.0) 89 (15.1) 0.7361† (1+2) Sum of planned and participant-initiated inbound calls, mean (SD); range 3.67 (1.53); 2–17 3.66 (1.48); 2–13 3.72 (1.62); 2–17 3.62 (1.48); 2–13 0.6851 (3) IVR-transferred counselling calls, mean (SD); range 0.41 (0.75); 0–6 N/A 0.39 (0.72); 0–6 0.43 (0.79); 0–6 0.2791‡ ≥1 vs 0, N (%) 361 (30.3) N/A 175 (29.1) 186 (31.5) 0.3665†‡ (1+2+3) Sum of all calls, mean (SD); range 3.94 (1.71); 2–19 3.66 (1.48); 2–13 4.11 (1.82); 2–19 4.06 (1.77); 2–17 <0.0001 *p Value obtained from generalised linear model for a count outcome except where indicated. †p Value obtained from χ 2 test. ‡p Value is for comparison between TEQ-10 and TEQ-20 groups. IVR, interactive voice response; TEQ-10, Technology Enhanced Quitline-10. \n\nBODY.RESULTS.PARTICIPANTS:\nBaseline characteristics across the three intervention groups are shown in table 1. Overall, the average age of study participants was 43 years (range 18–85). A little over half (54%) were women. A majority of the sample was Caucasian (83%) and reported having a high school education or more (62%). Just under a quarter of the sample reported having ≥1 chronic comorbidity (22%). The average number of reported cigarettes used per day was 17 (range 0–60) with about two-thirds of the sample reporting having had a cigarette within 30 min from the time they wake in the morning. About 20% of the sample reported 0 or one quit attempts, with a similar percentage reporting six or more quit attempts. 
A little over 60% reported using tobacco for 20 or more years, and about three-quarters reported having contact with smokers at work, home, or both. Overall, participants reported feeling motivated to quit smoking, with an average rating of 9 on a 1–10 scale (1=low, 10=high). The average PHQ-2 depression score was 1.0 (possible range 0–6, with higher scores indicating higher levels of depression),21 and the average perceived stress score was 4.3 (possible range 0–16, with higher scores indicating higher perceived stress).23 At the time of study offer, 56.5% of participants had been quit for 1–6 days, 40% had been quit for 7–29 days, and 3.5% had been quit for 30 days or more.\n\nBODY.RESULTS.PROGRAMME ENGAGEMENT AND POSITIVE RISK SCREENS:\nProgramme engagement was measured as the number of counselling calls completed by participants: (1) planned programme counselling calls, (2) participant-initiated inbound calls, or (3) IVR-transferred counselling calls (table 2). Overall, the average number of total calls completed was 3.9 (SD=1.7; range 2–19). There was a statistically significant difference between groups; the TEQ-10 and TEQ-20 groups completed slightly more calls than the standard study group due to the additional IVR-transferred counselling calls they received after a positive screen (p<0.001). There were no statistically significant differences in participant-initiated (p=0.91) or planned programme calls completed (p=0.81). Also, there was no statistically significant difference in IVR-transferred counselling calls between the TEQ-10 and TEQ-20 groups (p=0.28). There was a statistically significant difference in the number of positive screens between the two intervention groups, with TEQ-20 having slightly more positive screens than TEQ-10 (on average, 1.52 (SD=1.66) for TEQ-10 and 1.82 (SD=1.99) for TEQ-20, p=0.003). The percentage of participants with one or more positive screens was also larger in the TEQ-20 group (76% vs 70%; p=0.03).\n\nBODY.RESULTS.SIX-MONTH AND 12-MONTH OUTCOMES:\nSeven-day and 30-day point prevalence abstinence rates at 6-month and 12-month follow-up are shown in table 3. As aforementioned, outcomes are reported using three methods: (1) for all survey respondents (responder analysis), (2) using a crude ITT analysis based on penalised imputation reporting (assumes non-respondents are treatment failures) and (3) using state-of-the-art ITT based on multiple imputation for participants with missing data. Since we had missing data, we used the multiple imputation quit rate as our primary outcome.
Table 3 Estimates of abstinence at 6 and 12 months, N=1785 Intervention group 6-Month survey outcomes 12-Month survey outcomes N Did not smoke in last 7 days N Did not smoke in the last 30 days N Did not smoke in last 7 days N Did not smoke in the last 30 days Responders* Standard 368 66.0 (61.2 to 70.9) 368 60.6 (55.6 to 65.6) 363 65.3 (60.4 to 70.2) 362 61.6 (56.6 to 66.6) Standard plus TEQ-10 368 69.6 (64.9 to 74.3) 368 65.2 (60.4 to 70.1) 370 67.0 (62.2 to 71.8) 371 63.1 (58.2 to 68.0) Difference compared to standard 3.5 (−3.2 to 10.3) p=0.3051 4.6 (−2.4 to 11.6) p=0.1946 1.7 (−5.1 to 8.6) p=0.6191 1.5 (−5.5 to 8.5) p=0.6812 Standard plus TEQ-20 352 67.3 (62.4 to 72.2) 352 61.1 (56.0 to 66.2) 310 62.6 (57.2 to 68.0) 311 56.6 (51.1 to 62.1) Difference compared to standard 1.3 (−5.6 to 8.2) p=0.7121 0.5 (−6.6 to 7.6) p=0.8947 −2.7 (−10.0 to 4.6) p=0.4655 −5.0 (−12.5 to 2.4) p=0.1871 Intent-to-treat penalised imputation (missing=smoking)† Standard 592 41.1 (37.1 to 45.0) 592 37.6 (33.8 to 41.6) 592 40.0 (36.1 to 44.0) 592 37.7 (33.8 to 41.6) Standard plus TEQ-10 602 42.5 (38.6 to 46.5) 602 39.9 (36.0 to 43.8) 602 41.2 (37.3 to 45.1) 602 38.9 (35.0 to 42.8) Difference compared to standard 1.5 (−4.1 to 7.1) p=0.6047 2.2 (−3.3 to 7.7) p=0.4357 1.2 (−4.4 to 6.7) p=0.6827 1.2 (−4.3 to 6.7) p=0.6693 Standard plus TEQ-20 591 40.1 (36.2 to 44.1) 591 36.4 (32.5 to 40.3) 591 32.8 (29.0 to 36.6) 591 29.8 (26.1 to 33.5) Difference compared to standard −1.0 (−6.5 to 4.7) p=0.7405 −1.3 (−6.8 to 4.2) p=0.6459 −7.2 (−12.7 to −1.7) p=0.0100 −7.9 (−13.3 to −2.5) p=0.0041 Multiple imputation‡ Standard 592 64.2 (59.5 to 68.9) 592 59.4 (53.7 to 63.8) 592 63.3 (58.4 to 68.4) 592 61.2 (55.6 to 66.8) Standard plus TEQ-10 602 67.8 (63.0 to 72.7) 602 62.3 (57.7 to 66.9) 602 65.1 (60.3 to 69.9) 602 60.6 (56.0 to 65.2) Difference compared to standard 3.6 (−3.3 to 10.5) p=0.2980 3.6 (−3.2 to 10.4) p=0.2969 1.7 (−4.4 to 7.7) p=0.5831 −0.6 (−7.6 to 6.4) p=0.8647 Standard plus TEQ-20 591 64.7 (59.2 to 70.2) 591 59.4 (53.7 to 65.1) 591 60.9 (56.4 to 65.5) 591 54.9 (49.0 to 60.9) Difference compared to standard 0.5 (−7.0 to 8.0) p=0.8982 0.7 (−6.4 to 7.8) p=0.8468 −2.5 (−9.4 to 4.5) p=0.4808 −6.3 (−14.9 to 2.3) p=0.1471 Percentage of participants reporting abstinence is shown in table above with 95% CIs in parentheses. p Values obtained from χ 2 test for difference between proportions. *Estimates were obtained among participants having completed a 6-month or 12-month survey. †Estimates were obtained assuming participants missing a 6-month or 12-month survey were not abstinent. ‡Estimates were obtained by pooling results across 10 multiply imputed data sets. TEQ-10, Technology Enhanced Quitline-10. ITT-penalised imputation quit rates at 12 months were significantly lower for the TEQ-20 group compared with the standard and TEQ-10 groups. However, there were no significant differences in responder or imputed quit rates at 6 or 12 months, or rates of relapse. The quit rates based on multiple imputation at 6 and 12 months ranged from 55% to 68% depending on intervention group, follow-up time point, and outcome measure. Controlling for baseline characteristics, logistic regression models further illustrated no statistical difference between intervention groups for either primary outcome and either time point (table 4). A few baseline characteristics were statistically associated with the outcomes. 
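The adjusted estimates in table 4 come from fitting the same logistic model to each of the 10 imputed data sets and combining the results with PROC MIANALYZE, which applies Rubin's rules; the sketch below shows that combination step in Python with hypothetical per-imputation log-odds ratios and standard errors, not values from the trial.

```python
# Minimal sketch of Rubin's rules: pool a log-odds-ratio estimated on each of
# m imputed data sets into one OR with a 95% CI. The per-imputation estimates
# below are hypothetical placeholders.
import numpy as np
from scipy import stats

log_or = np.array([0.12, 0.18, 0.10, 0.15, 0.09, 0.14, 0.17, 0.11, 0.13, 0.16])
se     = np.array([0.16, 0.15, 0.17, 0.16, 0.16, 0.15, 0.17, 0.16, 0.16, 0.15])
m = len(log_or)

q_bar = log_or.mean()                     # pooled point estimate
u_bar = (se ** 2).mean()                  # within-imputation variance
b = log_or.var(ddof=1)                    # between-imputation variance
t_var = u_bar + (1 + 1 / m) * b           # total variance (Rubin, 1987)

# Degrees of freedom and 95% CI on the log-odds scale
nu = (m - 1) * (1 + u_bar / ((1 + 1 / m) * b)) ** 2
crit = stats.t.ppf(0.975, nu)
lo, hi = q_bar - crit * np.sqrt(t_var), q_bar + crit * np.sqrt(t_var)

print(f"pooled OR: {np.exp(q_bar):.2f} (95% CI {np.exp(lo):.2f} to {np.exp(hi):.2f})")
```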
Across the four models shown in table 4, a higher number of counselling calls, and being motivated to quit, were consistently associated with higher odds of being abstinent (all p<0.05). Depression also played a role in three of the models, with higher depression scores being associated with lower odds of being abstinent (all p<0.05). Older age resulted in higher odds of abstinence for 6-month outcomes only (both p<0.01). Having at least one comorbid chronic condition was associated with lower odds of abstinence in three of the models (all p<0.05). Table 4 ORs from adjusted logistic models of multiple imputation quit rates at 6 and 12 months, N=1785 6-Month survey outcomes 12-Month survey outcomes Variable Did not smoke in the last 7 days Did not smoke in the last 30 days Did not smoke in the last 7 days Did not smoke in the last 30 days Randomisation groups Standard plus TEQ-10 vs standard 1.15 (0.83 to 1.59) 1.14 (0.85 to 1.54) 1.07 (0.82 to 1.41) 0.96 (0.71 to 1.30) Standard plus TEQ-20 vs standard 1.00 (0.71 to 1.42) 1.01 (0.74 to 1.37) 0.87 (0.64 to 1.19) 0.75 (0.52 to 1.09) Number of programme counselling calls Planned+participant−initiated 1.10 (1.02 to 1.18)* 1.10 (1.02 to 1.19)* 1.12 (1.02 to 1.22)* 1.10 (1.02 to 1.19)* Demographics Gender Female vs male 0.92 (0.70 to 1.21) 1.00 (0.80 to 1.25) 1.17 (0.91 to 1.49) 1.16 (0.92 to 1.46) Age 1.02 (1.01 to 1.03)*** 1.02 (1.00 to 1.03)** 1.01 (0.99 to 1.02) 1.01 (0.99 to 1.02) Race African–American vs Caucasian 1.32 (0.89 to 1.96) 1.41 (0.91 to 2.21) 1.16 (0.75 to 1.80) 1.12 (0.72 to 1.76) Other vs Caucasian 0.73 (0.34 to 1.56) 0.72 (0.32 to 1.66) 1.09 (0.55 to 2.16) 0.98 (0.53 to 1.81) Education High school or less vs more than high school 1.12 (0.84 to 1.50) 1.05 (0.77 to 1.43) 0.86 (0.65 to 1.13) 1.01 (0.76 to 1.33) Comorbidities Any chronic comorbidity 0.69 (0.51 to 0.94)* 0.65 (0.46 to 0.92)* 0.72 (0.53 to 0.98)* 0.76 (0.56 to 1.03) Tobacco use Cigarettes per day 0.98 (0.97 to 0.99)* 0.99 (0.97 to 1.00) 0.98 (0.97 to 1.00) 0.99 (0.98 to 1.00) Time to first cigarette 5 min vs all others 0.92 (0.68 to 1.24) 1.10 (0.82 to 1.48) 0.88 (0.64 to 1.20) 0.85 (0.65 to 1.12) Number of quit attempts 2–5 vs 0–1 0.96 (0.69 to 1.34) 0.98 (0.75 to 1.29) 0.80 (0.58 to 1.10) 0.93 (0.69 to 1.24) 6+ vs 0–1 0.88 (0.57 to 1.35) 0.85 (0.56 to 1.28) 0.68 (0.47 to 0.99)* 0.79 (0.55 to 1.13) Social contact with smokers Work or home vs all others 0.74 (0.56 to 0.97)* 0.95 (0.73 to 1.24) 0.98 (0.74 to 1.30) 0.89 (0.68 to 1.17) Scale scores Motivation to quit 1.24 (1.12 to 1.36)*** 1.24 (1.11 to 1.39)*** 1.22 (1.11 to 1.34)*** 1.23 (1.12 to 1.35)*** Depression 0.87 (0.78 to 0.96)** 0.88 (0.80 to 0.97)** 0.94 (0.83 to 1.06) 0.89 (0.80 to 0.99)* All variables shown were included in each model. Estimates for each model were obtained by combining results across 10 multiply imputed data sets; 95% CIs are shown in parentheses. *p<0.05; **p<0.01; ***p<0.001. TEQ-10, Technology Enhanced Quitline-10. \n\nBODY.RESULTS.TIME TO RELAPSE:\nThere was no significant association between time to relapse and randomisation group (table 5). Other factors related to time to relapse included number of counselling calls, having comorbid health condition, number of quit attempts, motivation to quit and depression. Higher number of counselling calls and higher levels of motivation to quit were associated with being less likely to relapse sooner (HR=0.87, p<0.001; HR=0.84, p<0.001, respectively). 
By contrast, participants reporting higher levels of depression were more likely to relapse sooner (HR=1.07, p=0.02). Compared to those with no chronic comorbidities, participants reporting one or more chronic comorbidities were more likely to relapse sooner (HR=1.29, p=0.04). Those reporting six or more quit attempts were also more likely to relapse sooner compared to those with 0–1 attempts (HR=1.32, p=0.04). Table 5 HRs from adjusted proportional hazard models of time to relapse, N=1785 Variable HR (95% CI) p Value Randomisation groups Standard plus TEQ-10 vs standard 0.85 (0.69 to 1.04) 0.1171 Standard plus TEQ-20 vs standard 1.05 (0.85 to 1.31) 0.6282 Number of programme counselling calls Planned+participant−initiated 0.87 (0.82 to 0.92)*** <0.0001 Demographics Gender Female vs male 0.90 (0.75 to 1.09) 0.2838 Age 1.00 (0.99 to 1.01) 0.5278 Race African–American vs Caucasian 0.97 (0.75 to 1.26) 0.8400 Other vs White 1.11 (0.67 to 1.84) 0.6762 Education High school or less vs more than high school 1.06 (0.87 to 1.30) 0.5368 Comorbidities Any chronic comorbidity 1.29 (1.01 to 1.64)* 0.0430 Tobacco use Cigarettes per day 1.01 (0.99 to 1.02) 0.0639 Time to first cigarette 5 min vs all others 1.08 (0.88 to 1.34) 0.4430 Number of quit attempts 2–5 vs 0–1 1.14 (0.90 to 1.43) 0.2657 6+ vs 0–1 1.32 (1.02 to 1.72)* 0.0345 Social contact with smokers Work or home vs all others 0.99 (0.80 to 1.24) 0.9713 Scale scores Motivation to quit 0.84 (0.79 to 0.90)*** <0.0001 Depression 1.07 (1.01 to 1.13)* 0.0198 All variables shown were included in each model. Estimates for each model were obtained by combining results across 10 multiply imputed data sets; 95% CIs are shown in parentheses. *p<0.05; **p<0.01; ***p<0.001. TEQ-10, Technology Enhanced Quitline-10. \n\nBODY.RESULTS.USING IVR TO IDENTIFY RISK:\nFigure 2 presents quit outcomes for TEQ-10 and TEQ-20 groups (separately in panel A and combined in panel B) according to their positive screen status (≥1 positive screen vs no positive screen); 70.4% in TEQ-10 and 76.1% in TEQ-20 had ≥1 positive screen during the study. Compared with participants who screened positive in either TEQ group, participants who did not screen positive were more likely to be abstinent at 6 (OR=1.69, p=0.01 for 7-day point prevalence; OR=1.77, p≤0.001 for 30-day point prevalence) and 12 months (OR=1.47, p=0.03 for 7-day point prevalence; OR=1.50, p=0.01 for 30-day point prevalence). Figure 2Abstinence rates for intervention and positive screen groups, N=1785. Estimates for each model were obtained by combining results across 10 multiple-imputed data sets. Error bars indicate 95% CIs. (A) Displays results for the TEQ-10 and TEQ-20 groups separately. (B) Displays data for the two TEQ-groups combined. '1+ positive screen' is defined as exceeding the predetermined risk threshold on any of the IVR assessment questions (topics: lapses, cravings, negative effect, self-efficacy, and motivation to remain quit) on any IVR assessment call. IVR, interactive voice response; TEQ-10, Technology Enhanced Quitline-10; TEQ-20, Technology Enhanced Quitline-20.\n\nBODY.RESULTS.SATISFACTION WITH QFL AND THE IVR PROTOCOL:\nSatisfaction was measured at 6 months after enrolment. Participants completing the 6-month survey (n=1091, 61.1%) were generally satisfied with the overall QFL programme and the counselling they received (96% and 98%, respectively, were somewhat satisfied, satisfied, or very satisfied at 6 months). A similar percentage (98%) would recommend the programme to others. 
There were no differences in QFL programme satisfaction between the three study groups. Compared to TEQ-10 participants, those receiving TEQ-20 were more likely to report the number of relapse risk assessments were too many (36% vs 23%, p≤0.001) and also were marginally more likely to agree that the risk assessments took too much time (28% vs 23%, p=0.09). The majority of participants reported it was easy to answer questions through the IVR system (95%), that the system was helpful (87%), and that they would recommend the IVR system to others trying to stay quit (86%), with no significant differences between the TEQ-10 and TEQ-20 groups.\n\nBODY.DISCUSSION:\nIn this randomised controlled trial investigating the use of an automated, IVR-supported protocol to identify risk and prevent smoking relapse among recently quit tobacco users enrolled in the QFL programme, there were no differences in tobacco abstinence rates between the control (standard QFL) and technology-enhanced (IVR-delivered risk assessments and transfer for counselling) groups at 6-month and 12-month follow-up. Although the risk assessment and transfer protocols did not yield better abstinence rates, the IVR-delivered screenings did successfully identify participants who were less likely to be quit at follow-up. Participants who did not screen positive during the relapse risk assessments were 77% more likely to be abstinent at 6-month follow-up. We tested two risk screening and transfer protocols of varying intensities. The more intensive screening protocol with 20 IVR-delivered assessments did not demonstrate any benefit over the 10-assessment group, and the TEQ-20 respondents were more likely to indicate there were too many IVR assessments. In addition, we explored whether the TEQ intervention was more effective for certain populations in this study with regard to quit outcomes, by testing interaction effects, but did not identify any significant moderating relationships. There are several possible explanations for why the intervention did not reduce relapse rates. The TEQ interventions were designed to immediately transfer participants who met risk thresholds for tobacco counselling to target the identified relapse risk. About three-quarters (73.3%) of the TEQ participants were identified as being at risk at least one time over the 8-week intervention period. Having a positive screen was predictive of relapse; however, it is possible that other risk factors are more predictive of risk than those included in the assessment. Also, participants identified as at risk by the IVR assessments and automatically transferred to a coach failed to remain on the line to receive cessation counselling at the anticipated rate. That is, less than one third of those in the TEQ arms completed an IVR-transferred counselling call and 41.3% of those with a positive risk screen completed one or more IVR-transferred counselling calls, resulting in participants in the TEQ conditions completing only 0.41 calls, on average, more than the planned programme calls from the standard programme. Thus, it is possible that we failed to observe an effect because we did not increase the actual treatment exposure much above the base rate of calls. It remains unclear why some callers did not accept these additional calls when the satisfaction data showed positive perceptions of the IVR automated assessment system (87% felt it was helpful when they need to talk to a Quit Coach; 86% would recommend). 
Most callers were still receiving planned standard programme calls at the time of their risk assessments; perhaps callers who had recently spoken to a coach did not feel they needed another call at that time. Future studies should assess reasons for lack of counselling uptake and consider an automated system that would allow participants to schedule a call in the near future or would initiate outbound calls to participants identified as at risk who do not immediately transfer to a counsellor. The study had high quit outcomes, in spite of the fact that the TEQ intervention did not significantly improve outcomes. The 61% respondent and 38% ITT penalised imputation (missing=smoking) 30-day point prevalence quit rates at 6 months in the standard treatment group indicated that tobacco users who receive a multiple-call phone counselling programme, have access to cessation medication benefits, and successfully achieve at least a 24 h abstinence period early in treatment do quite well. These quit rates are high compared to studies focusing on all commercial quitline enrolees (eg, 34.3–41.1% responder quit rate at 6 months27\n28). The relatively strong performance and engagement in standard treatment for this study may have made achieving improvements with a low intensity relapse prevention intervention even more challenging. This finding also aligns with previous recommendations that increasing quit attempts may be one of the most important factors for increasing cessation at the population level,29 and demonstrates the importance of achieving at least 24 h of abstinence. More counselling calls completed, higher motivation to quit, lower depression scores, and not having chronic health condition were consistently related to greater likelihood of being abstinent at follow-up. These variables were also associated with time to relapse; additionally, callers with a greater number of previous quit attempts (6+) were more likely to relapse sooner. These findings are consistent with previous quitline studies which have demonstrated the importance of programme engagement (ie, number of calls completed15\n30\n31). Callers in this study had a high motivation to quit (9 out of 10, on average); a previous study has linked high motivation to quit with making a quit attempt but not with likelihood of remaining quit.32 Moreover, previous work has identified that callers with depressive symptoms33 and chronic health conditions31 may have more difficulty quitting. Dependence level (time to first use and cigarettes per day) was not consistently related to outcomes in this study, which differs from previous literature that has often linked baseline dependence and outcomes.31\n32 Compared with other behavioural relapse prevention research, strengths of this study included a relatively large sample size, use of an innovative and potentially cost-effective technology that could be disseminated to large populations if shown to be effective, and attempting to connect those at risk to immediate counselling. Most previous behavioural relapse prevention studies have focused on skills training and have not yielded significant effects.7 As noted by Hajek et al,7 future relapse prevention research efforts may be best applied in examining new and alternative approaches to those previously studied. 
One recent small pilot study examined extended IVR assessments following varenicline treatment, and also did not find significant effects for the extended IVR group.12 That article did not report on whether those who screened positive during IVR assessments were more likely to have returned to smoking at follow-up, and likely had too small of sample size to meaningfully do so. Additionally, the McNaughton et al's12 study included an intensive IVR contact schedule with 80 contacts over 40 weeks; low IVR assessment completion rates (37% of IVR calls and 16% of assessments were completed) in that study align with our findings that more contact is not necessarily better. Identifying the most effective and cost-effective number of contacts should continue to be a focus in future studies using IVR technology. The response rates to our 6-month and 12-month outcome surveys were 61% and 59%, respectively, which may limit the validity of our findings. Multiple imputation, a preferred method for ITT analysis, was used to reduce potential non-response bias, and the findings from these analyses agreed closely with results from a sensitivity analysis using respondent-only data. Another potential limitation to our study design is the lack of biochemical verification of abstinence, particularly in light of the high reported abstinence rates. However, our study procedures were consistent with recommendations of the SRNT Subcommittee on Biochemical Verification,34 and the results should be considered with respect to this. Additionally, we have no reason to believe there was a systematic bias in reporting abstinence across groups. Third, the protocol aimed for IVR-triggered counselling calls to focus on the specific risk factor(s) identified by the IVR assessment; all participants received empirically based support at the time of need, but not all calls were focused on the identified risk factor(s). Finally, as with other research on prevention of smoking relapse, the time frame for intervention and data collection poses limitations. The IVR-delivered risk assessments in this study were limited to the first 8 weeks following successful abstinence. Although previous research has identified the first weeks as being the greatest risk for relapse, callers also relapse at later dates. Ongoing plans for prevention of relapse, or risk intervention, may be needed; future research may be able to identify those in need of ongoing relapse risk monitoring and intervention. Augmenting tobacco quitline treatment with IVR-delivered relapse risk screening assessments was acceptable to participants, and successfully identified recently quit tobacco users who may be in need of more intensive or different intervention. More research is needed to identify how to engage at-risk participants in additional treatment. Using a technology-based screening for relapse risk to connect recent tobacco users to cessation counselling at the time of risk could offer a scalable, cost-effective approach for delivering tailored services to those in need, if future research can identify the appropriate services to deliver at that time of need.\n\n**Question:** Compared to Technology Enhanced Quitline-20 what was the result of Technology Enhanced Quitline-10 on Number of positive screens?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decreased
|
1029
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Tranexamic acid for patients with traumatic brain injury: a randomized, double-blinded, placebo-controlled trial\n\n ABSTRACT.BACKGROUND:\nTraumatic brain injury (TBI) is commonly accompanied by intracranial bleeding which can worsen after hospital admission. Tranexamic acid (TXA) has been shown to reduce bleeding in elective surgery and there is evidence that short courses of TXA can reduce rebleeding in spontaneous intracranial haemorrhage. We aimed to determine the effectiveness and safety of TXA in preventing progressive intracranial haemorrhage in TBI.\n\nABSTRACT.METHODS:\nThis is a double blinded, placebo controlled randomized trial. We enrolled 238 patients older than 16 years with moderate to severe TBI (post-resuscitation Glasgow Coma Scale (GCS) 4 to 12) who had a computerized tomography (CT) brain scan within eight hours of injury and in whom there was no immediate indication for surgery. We excluded patients if they had a coagulopathy or a serum creatinine over than 2.0 milligrams%. The treatment was a single dose of 2 grams of TXA in addition to other standard treatments. The primary outcome was progressive intracranial haemorrhage (PIH) which was defined as an intracranial haemorrhage seen on the second CT scan that was not seen on the first CT scan, or an intracranial haemorrhage seen on the first scan that had expanded by 25% or more on any dimension (height, length, or width) on the second scan.\n\nABSTRACT.RESULTS:\nProgressive intracranial haemorrhage was present in 21 (18%) of 120 patients allocated to TXA and in 32 (27%) of 118 patients allocated to placebo. The difference was not statistically significant [RR = 0.65 (95% CI 0.40 to 1.05)]. There were no significant difference in the risk of death from all causes in patients allocated to TXA compared with placebo [RR = 0.69 (95% CI 0.35 to 1.39)] and the risk of unfavourable outcome on the Glasgow Outcome Scale [RR = 0.76 (95% CI 0.46 to 1.27)]. There was no evidence of increased risk of thromboembolic events in those patients allocated to TXA.\n\nABSTRACT.CONCLUSIONS:\nTXA may reduce PIH in patients with TBI; however, the difference was not statistically significant in this trial. Large clinical trials are needed to confirm and to assess the effect of TXA on death or disability after TBI.\n\nBODY.BACKGROUND:\nEach year, world-wide more than 1.5 million people die and about 10 million people are hospitalized following traumatic brain injury (TBI) [1]. Many survivors experienced long term disability [2]. In Thailand, each year around 8,000 people died following acute TBI and tens of thousands were hospitalized. A follow up study of 418 patients who were admitted to Khon Kaen regional hospital following TBI found that 29% of patients were either dead or severely disabled six months following the injury [3]. 
TBI is commonly accompanied by intracranial bleeding, which occurs in 25% to 45%, 3% to 12% and 0.2% of severe, moderate and mild TBI cases respectively [4]. There is evidence indicating that this bleeding can develop or continue after hospital admission and that larger bleeds have a worse prognosis [5]–[12]. Delayed enlargement of traumatic intraparenchymal contusions and hematomas is the most common cause of clinical deterioration and death in patients who had a lucid interval after TBI [8,13]. A systematic review of randomized controlled trials (RCT) shows that antifibrinolytic agents are effective in reducing bleeding in elective surgery. The review, which summarized data from 211 RCTs involving 20,781 participants, showed that both Aprotinin and Tranexamic acid (TXA) reduced the need for blood transfusion in elective surgery by about one third [RR = 0.61 (95% CI 0.54 to 0.69)] with no evidence of increased adverse effects [14]. If antifibrinolytics reduced intracranial bleeding in patients with TBI, this would have important clinical implications. However, a systematic review of haemostatic drugs in patients with TBI found no RCTs [15]. There is some evidence on the use of TXA in patients with spontaneous aneurysmal intracranial bleeding. A systematic review of RCTs of TXA in aneurysmal haemorrhage found that although rebleeding was reduced by 45% [odds ratio = 0.55 (95% CI 0.42 to 0.71)], this benefit was offset by cerebral ischaemia such that there was no overall patient benefit [16]. However, these trials used high-dose TXA for about six weeks, and it has been suggested that a shorter and lower dose might prevent rebleeding whilst avoiding the risk of ischaemia. Indeed, since the review was published, a trial of early short-course (3 days) TXA showed that it reduced rebleeding from 10.8% to 2.4% without ischaemic adverse effects [17]. We conducted a double-blind randomized controlled trial to evaluate the effect of early short-course TXA on the occurrence of progressive intracranial haemorrhage (PIH) in patients with TBI treated at a large regional trauma centre in rural Thailand.\n\nBODY.METHODS:\nThe protocol for this trial and supporting CONSORT checklist are available as supporting information; see Checklist S1 and Protocol S1.\n\nBODY.METHODS.PARTICIPANTS:\nAll patients older than 16 years with moderate to severe TBI (post-resuscitation Glasgow Coma Scale (GCS) 4 to 12) who had a computerized tomography (CT) brain scan performed within eight hours of injury, and in whom there was no immediate indication for surgery, were eligible for inclusion. Both isolated TBI and polytrauma patients were included, provided that TBI management was the critical concern during the admission period. Patients were excluded if they were pregnant, had evidence of coagulopathy, were known to be receiving a medication which affects haemostasis, or had a serum creatinine over 2 mg/decilitre.
Coagulopathy was considered present if any of the following hematological parameters were observed: (1) platelet count less than 100,000 cells/mm3; (2) Prothrombin time (PT) or international normalized ratio (INR) prolonged more than 1.5 times normal value; (3) activated partial thromboplastin time (aPTT) more than 10 seconds greater than normal value. Coagulopathy was a risk factor for developing PIH and mortality in previous studies [10,11,18]–[22] or recent reports [23,24]. Hence it was confounding factor to be controlled by exclusion.\n\nBODY.METHODS.INTERVENTION:\nPatients were randomly allocated to receive TXA (loading dose of 1.0 gram over 30 minutes followed by a maintenance dose of 1.0 gram infused over eight hours) or matching placebo. The placebo was sterile water and was purchased on the open market in Thailand.\n\nBODY.METHODS.STUDY OUTCOMES:\nThe primary outcome was progressive intracranial haemorrhage. It would have more association to therapeutic effect of given tranexamic acid than other outcomes in this study [25]. Moreover it was a significant source or link to morbidity and mortality in TBI [8,10,18]–[20,26]. PIH was defined as an intracranial haemorrhage seen on the second CT scan that was not seen on the first CT scan, or an intracranial haemorrhage seen on the first scan that had expanded by 25% or more on any dimension (height, length, or width) on the second scan. Progressive pressure effect was defined as either an increase in midline shift of greater than 1 mm or an increase in basal cistern between the first and second CT scan. The second CT scan was to be taken 24 hours ± 8 hours after the first CT scan. Improved Glasgow Coma Scale (GCS) motor score was also recorded to see if there is compatible change between clinical and radiological progression at 24 hours. The presence or absence of PIH was assessed by two independent readers. Both were neurosurgeons at KKH with experiences in reading posttraumatic CT scans. When there was disagreement about the presence or absence of PIH this was resolved by a third neurosurgeon reader. Inter-rater reliability was assessed by kappa statistic. Secondary outcomes were death, functional status assessed using the Glasgow Outcome Scale (GOS) at hospital discharge, blood transfusion, neurosurgical operation and any in-hospital thromboembolic events (myocardial infarction, pulmonary embolism, deep vein thrombosis, and stroke).\n\nBODY.METHODS.SAMPLE SIZE:\nWe planned to randomize approximately 240 patients, 120 to each group. We estimated that the proportion of patients with PIH would be 30% in the placebo group and that TXA would reduce to be 15%. A trial with 240 patients would have about 80% power at the 0.05 level of significance (two-sided test) to detect a treatment effect of this magnitude.\n\nBODY.METHODS.RANDOMISATION AND BLINDING:\nThe randomisation sequence (with a randomly varied block size) was generated from a computer by a person who was not involved with the trial and this sequence was used to prepare the sequentially numbered treatment packs. Whenever an eligible patient was recruited, the recruiting clinician asked that the next sequentially numbered sealed opaque treatment pack be opened and that the trial loading dose and maintenance infusion be prepared and sent to the relevant ward. This preparation was done out of sight of the recruiting clinician and research participants by nurses who were not involved in the trial. Each treatment pack contained unlabelled vials of either drug or placebo. 
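As a rough check on the sample-size statement earlier in the Methods (30% PIH with placebo versus 15% with TXA, two-sided α = 0.05, 80% power), here is a minimal sketch using the standard normal-approximation formula for comparing two proportions; the investigators' exact method is not stated, so this is only an approximation.

```python
# Rough sample-size check for comparing two proportions (normal approximation).
# Inputs follow the text: 30% vs 15% PIH, two-sided alpha = 0.05, power = 0.80.
from math import sqrt, ceil
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar)) +
                 z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

print(n_per_group(0.30, 0.15))  # ~121 per group, consistent with ~240 in total
```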
Although drug and placebo vials contained an identical amount of colorless solution, there was a small size discrepancy between the drug and placebo vials. It was for this reason that the vials were enclosed within sequentially numbered sealed opaque envelopes that were opened by nurses who were not involved in the trial. This approach ensured good allocation concealment and also ensured that those caring for the patient and those conducting the trial did not know the assigned treatment. The allocation scheme was kept confidential from all research participants until the end of the study.\n\nBODY.METHODS.STATISTICAL METHODS:\nThe primary analysis was on an intention-to-treat (ITT) basis with complete case analysis for other outcomes and was done using STATA software version 10.0. The presence or absence of PIH was analyzed as a dichotomous variable. The Glasgow Outcome Scale was also dichotomized such that death, persistent vegetative state, and severe disability constituted an unfavorable outcome while favorable outcome included moderate disability and good recovery. We calculated relative risks, risk difference with their 95% confidence intervals and hypothesis testing between the two treatment groups.\n\nBODY.RESULTS.PATIENT RECRUITMENT:\nFigure 1 shows the participant flow into the trial. The first patient was recruited on the 23rd October 2008 and the last patient on 14th August 2009, by which time a total of 238 patients had been included in the trial. All patients received the allocated trial treatments (TXA or placebo) and there were no protocol violations. There were nine patients for whom a second CT scan could not be obtained: seven patients died before the second CT scan, one patient could not be scanned because of agitation, and one patient refused the second scan. Two consents were withdrawn in the placebo group after randomization because they had been signed by unauthorized relatives. The relevant ethics committees were informed of, and agreed with, this exclusion. The inter-rater reliability of the assessment of the presence or absence of PIH was high, with a kappa statistic of 0.95. The enrolled patients had comparable profiles, including mean age (about 40 years), male gender (80%), injury onset (within 7 hours), associated organ injury with an injury severity score of 24 (range 9–43) and initial haematocrit level (38 volumes%), with moderately severe GCS scores. Pressure-effect findings on the first CT scan were similar in both groups. Treatment and control groups were approximately balanced with respect to baseline characteristics (Table 1). Figure 1 Participant flow. Table 1 Baseline characteristics Characteristics TXA (n =120) Placebo (n = 118) Age at randomization (years) 34.8 (16.0) 34.1 (15.3) Male 103 (86%) 107 (91%) Glasgow coma scale (GCS) severity: Moderate (9 – 12) Severe (4 – 8) 52 (53%) 47 (47%) 68 (49%) 71 (51%) Baseline haematocrit (volume %) 38 (7.4) 38 (6.7) Time since injury (hours) 6.6 (1.69) 7.1 (1.29) Isolated TBI injury 20 (17%) 16 (14%) Polytrauma with TBI injury 100 (83%) 102 (86%) Mean injury severity score (range) 23.3 (9–43) 24.8 (9–43) Midline shift (>3 mm) on first CT (mm) 2 (0.02%) 3 (0.03%) Basal cistern compression on first CT 54 (45%) 53 (45%) Table 2 shows the effect of TXA on the study outcomes in the intention-to-treat analysis assuming a poor outcome for missing data, which is used to represent the treatment effect.
Progressive intracranial haemorrhage was present in 21 (18%) of patients allocated to TXA and in 32 (27%) of patients allocated to placebo. The difference was not statistically significant [RR = 0.65 (95% CI 0.40 to 1.05)]. The relative risk of death from all causes in patients allocated to TXA compared with placebo was 0.69 (95% CI 0.35 to 1.39) and the relative risk for unfavourable outcome on the Glasgow Outcome Scale was 0.76 (95% CI 0.46 to 1.27). The relative risk of blood transfusion need in patients allocated to TXA compared with placebo was 0.92 (95% CI 0.61 to 1.40). Although we informed the relatives of all patients with PIH about the clinical condition and proposed emergency neurosurgical operation, no neurosurgical interventions were performed in the placebo group because the patients' relatives did not allow the operation; hence the relative risk for neurosurgical intervention could not be calculated. There were very few adverse effects in both groups, and no patient in the TXA group developed a vascular occlusive event in this study. Table 2 Study outcomes Outcomes TXA Placebo RD RR (n = 120) (n = 118) [95% CI] [95% CI] Progressive intracranial haemorrhage (PIH) 21 (18%) 32 (27%) −0.10 0.65 moderate TBI (n = 24) 7 (6%) 17 (14%) [(−0.20)- [0.40-1.05] severe TBI (n = 20) 9 (8%) 11 (9%) (−0.01)] indicated neurosurgery 6 (5%) 6 (5%) Increase in pressure effect* *11 (10%) 12 (11%) −0.01 0.91 [(−0.09) – 0.07] [0.42-1.97] Improved GCS motor score at 24 hours 37 (31%) 37 (31%) −0.01 0.98 [(−0.12) – 0.11] [0.67-1.44] Neurosurgical intervention 3 (3%) 0 0.03 --- [(−0.00) – 0.05] --- Blood products transfusion 31 (26%) 33 (28%) −0.02 0.92 [(−0.13) – 0.09] [0.61-1.40] Death 12 (10%) 17 (14%) −0.04 0.69 [(−0.13) – 0.04] [0.35-1.39] Unfavorable (GOS) outcome 21 (18%) 27 (23%) −0.05 0.76 [(−0.16) – 0.05] [0.46-1.27] Adverse events Stroke 0 3 --- --- Pulmonary embolus 0 0 --- --- Deep vein thrombosis 0 0 --- --- Gastrointestinal bleeding 0 1 --- --- *Denominator for outcomes is 114 for TXA group and 115 for placebo group. This analysis is based on complete cases and does not impute the missing outcomes. Table 3 shows the effect of various methods of handling missing responses for the primary outcome. The different analyses make different assumptions about the presence of PIH in patients with missing data. A complete case analysis may be undertaken for the primary outcome to avoid any assumption about PIH, without imputation for the unavailable outcomes. However, the different analyses show a similar trend of potential benefit for treatment with tranexamic acid. Table 3 Effect of various methods of handling missing response in trial Methods of analysis PIH in TXA and placebo group Group n/N Rate RD RR (person) (%) [95% CI] [95% CI] 1. Complete case analysis TXA 16/115 14% −0.11 0.57 Placebo 28/114 25% [(−0.21)-(−0.01)] [0.32-0.99] 2. Assuming poor outcome TXA 21/120 18% −0.10 0.65 Placebo 32/118 27% [(−0.20)-(−0.01)] [0.40-1.05] 3. Assuming good outcome TXA 16/120 13% −0.10 0.56 Placebo 28/118 24% [(−0.20)-(−0.01)] [0.32-0.98] 4. Extreme case favoring placebo TXA 21/120 18% −0.06 0.74 Placebo 28/118 24% [(−0.16)-0.04] [0.44-1.22] 5. Extreme case favoring TXA TXA 16/120 13% −0.14 0.49 Placebo 32/118 27% [(−0.24)-(−0.04)] [0.29-0.85] \n\nBODY.DISCUSSION.PRINCIPAL FINDINGS:\nWe found a statistically non-significant difference, but with a trend towards a significant reduction in the risk of PIH in patients allocated to TXA, without evidence of an increased risk of thrombotic adverse events.
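The headline estimate above can be reproduced directly from the reported counts (PIH in 21/120 patients on TXA versus 32/118 on placebo); a minimal sketch using the usual Wald interval on the log relative risk:

```python
# Reproducing the primary-outcome relative risk from the reported counts:
# PIH in 21/120 (TXA) vs 32/118 (placebo), Wald 95% CI on the log scale.
import math

def relative_risk(events1, n1, events2, n2, z=1.96):
    rr = (events1 / n1) / (events2 / n2)
    se_log = math.sqrt(1/events1 - 1/n1 + 1/events2 - 1/n2)
    lo, hi = (math.exp(math.log(rr) + s * z * se_log) for s in (-1, 1))
    return rr, lo, hi

rr, lo, hi = relative_risk(21, 120, 32, 118)
print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")  # RR = 0.65 (95% CI 0.40 to 1.05)
```

The same function applied to the death counts (12/120 versus 17/118) returns the RR of 0.69 (95% CI 0.35 to 1.39) quoted above.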
The risks of death and of unfavorable outcome on the Glasgow Outcome Score were lower for patients allocated to TXA. The safety of early short course treatment of TXA in our TBI patients was compatible to no increasing risk of non-fatal vascular occlusive events with early short course treatment of TXA in traumatic bleeding patients in CRASH2 trial [27].\n\nBODY.DISCUSSION.STRENGTHS AND WEAKNESSES:\nThis is the leading randomized trial to examine the effectiveness of the early administration of a short course of TXA in patients with acute TBI. We are aware of a recent study publication of CRASH2 trial and we anticipate that the results of all relevant trials will ultimately be combined in the Cochrane systematic review of haemostatic drugs for TBI [15]. Our trial was properly randomized with good allocation concealment. The timing of the second CT scan was pre-specified in the protocol and all outcome measurements were made without knowledge of treatment allocation. The main weakness of our study is the low power to estimate the effect of TXA on clinical outcomes. In particular, although our study has shown statistically insignificant difference that TXA may reduce PIH, because of the low powers to examine the effect of TXA on death and on disability, the clinical implications of our findings are still limited. Another limitation is lacking to explore an effect of associated injury or injury severity score in multiple trauma patients which was concerned in some studies [19,21]. We precluded exploring this association to the treatment by our limited sample size. Polytrauma subjects were almost patients whom there was agreement from trauma care team to concern treatment for TBI as same as isolated TBI subjects at first moment. Therefore we assigned serious, critical and serious injury degree for moderate TBI, severe TBI and other anatomical organ injuries in the category of injury severity score assessment respectively. The common associated injuries were extremity fracture, facial fracture, rib or clavicular fracture and a few cases of other chest injury or minor abdominal trauma. Finally the exclusion of coagulopathy is also a limitation of our study and this subject could be a point to explore in further research.\n\nBODY.DISCUSSION.PIH IN TBI:\nVarious terms have been used to describe the development or enlargement of intracranial bleeding after TBI. Terms include delayed traumatic intracranial haemorrhage (DTICH) [7], expanding hematoma [19] and progressive hemorrhagic injury [10]. In this paper we combined these concepts such that we included both new haemorrhage and expanding haemorrhage (progressive intracranial haemorrhage). Our rationale was that both lesions may exacerbate intracranial hypertension and the occurrence of both may be affected by the administration of TXA. Although patients were excluded if they had evidence of coagulopathy at baseline the proportion of patients with PIH among the patients was surprisingly high. Had we not excluded coagulopathic patients it is likely that the proportion of patients with PIH would have been even higher. Previous estimates of the occurrence of PIH vary widely from 7% to 60% [20]. These differences are likely to reflect difference in timing of the CT scan, clinical setting and diagnostic criteria. 
Nevertheless, given the relatively high prevalence of PIH in patients with TBI observed in this study the potential for TXA to improve clinical outcomes could be high.\n\nBODY.DISCUSSION.METHODS OF HANDLING MISSING RESPONSE IN TRIAL:\nWe summarize the result of handling missing outcome for the primary outcome in Table 3.\n\nBODY.CONCLUSIONS:\nWe have not shown that TXA improves clinical outcomes and this information would be required in order to make any recommendation about the use of TXA in clinical practice. Our results have important implications for research. If an early short course of TXA could be demonstrated to improve clinical outcomes after TBI without any important adverse effects, then this treatment, because it is cheap and widely practicable, could contribute importantly to reducing mortality and disability after TBI.\n\nBODY.COMPETING INTERESTS:\nThe authors declare that they have no competing interests.\n\nBODY.AUTHORS’ CONTRIBUTIONS:\nSY, BT, NP and PL designed, analysed and wrote the paper. WK and PP suggested the design, collected the data and conducted analyses. SY wrote the paper with input from all other authors. All authors had full access to all the data in the study and had final responsibility for the decision to submit for publication. All authors read and approved the final manuscript.\n\nBODY.PRE-PUBLICATION HISTORY:\nThe pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-227X/13/20/prepub\n\n**Question:** Compared to Placebo what was the result of Tranexamic acid on Intracranial haemorrhage?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
1160
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Pioglitazone Decreases Hepatitis C Viral Load in Overweight, Treatment Naïve, Genotype 4 Infected-Patients: A Pilot Study\n\n ABSTRACT.BACKGROUND:\nInsulin resistance (IR) is induced by chronic hepatitis C virus (HCV) genotypes 1 and 4 infections. It is not known whether drugs that affect IR such as Pioglitazone and Prednisone also affect serum HCV RNA titers independently of PEG-Interferon-α2/ribavirin treatment. The primary aim was to assess whether Pioglitazone by improving IR and/or inflammation decreases HCV viral load independently of standard of care HCV treatment. A secondary aim was to assess whether Prednisone, a drug that induces insulin resistance and stimulates HCV viral entry and replication in replicon culture systems, increases HCV viral load in this population.\n\nABSTRACT.METHODOLOGY/PRINCIPAL FINDINGS:\nWe designed a two-arm, parallel Pilot Study of overweight, treatment naïve genotype 4 HCV-infected patients at a public referral Liver Clinic in Giza, Egypt. The subjects received Pioglitazone (30 mg/day for 14 days) or Prednisone (40 mg/day for 4 days) in a randomized fashion, but the two arms can be considered independent pilot studies. Only changes from baseline within each arm were assessed and no contrasts of the interventions were made, as this was not an aim of the study. Among 105 consecutive HCV genotype 4 patients, 39 were enrolled based on the optimal sample size and power analysis according to the CONSORT statement; 20 to the Pioglitazone group and 19 to the Prednisone group. Pioglitazone was effective in decreasing serum HCV RNA at day-14 (n = 10; difference of means = 205,618 IU/ml; 95% CI 26,600 to 384,600; P<0.001). Although Prednisone did increase serum HCV RNA at day-4 (n = 10; change from baseline = −42,786 IU/ml; 95% CI −85,500 to −15,700; P = 0.049), the log10 HCV RNA titers were statistically not different from baseline day-0.\n\nABSTRACT.CONCLUSION/SIGNIFICANCE:\nThis is the first documentation that Pioglitazone decreases the serum HCV RNA titers independently of PEG-Interferon-α2/ribavirin treatment. The novel findings of our Study provide the foundation for basic and clinical investigations on the molecular mechanisms responsible for the Pioglitazone-induced decrease in HCV genotype 4 RNA titers.\n\nABSTRACT.TRIAL REGISTRATION:\nClinicalTrials.gov NCT01157975\n\n\nBODY.INTRODUCTION:\nChronic hepatitis C viral (HCV) infection markedly increased the risk for type 2 diabetes in cohorts with recognized diabetes risk factors [1]. A meta-analysis also found that HCV infection is associated with excess type 2 diabetes risk [2]. 
Moreover, IR is a specific feature of chronic HCV genotypes 1 and 4 infections since its prevalence is ∼7-fold higher among these patients compared to chronic Hepatitis B-infected patients [3], and changes in IR occur among patients with chronic HCV infection during and after treatment of HCV with PEG-interferon-α and ribavirin [4]. In patients with genotypes 1, 2 and 3 increased IR was associated with decreased sustained virological response (SVR) [5]. Among patients infected with genotype 1, decreases in Homeostasis Model Assessment (HOMA)-IR correlated with obtaining an SVR and increases in HOMA-IR were observed in patients who relapsed [6]. Similarly, IR may correlate with the virological response to standard of care treatment of HCV genotype 4 patients in Egypt [7]. Because Pioglitazone, a peroxisome proliferator-activated receptor (PPAR)-γ agonist, improves insulin sensitivity [8] and it is also effective in controlling liver inflammation in subjects with non-alcoholic steatohepatitis (NASH) [9], it has been used as triple therapy in addition to PEG-interferon-α and ribavirin to treat HCV-infected patients. The study in HCV genotype 1 non-responders was terminated when none of the five patients responded to the triple therapy at week 12 [10]. In contrast, Pioglitazone improved virological response to PEG-interferon α-2b/ribavirin combination therapy in hepatitis C genotype 4 treatment naïve patients with insulin resistance [11]. However, the mechanisms by which Pioglitazone influenced SVR in treatment naïve patients with HCV genotype 4 infection remain unknown. It is unknown whether Pioglitazone modifies the bioavailability of PEG-interferon α-2b/ribavirin or directly or indirectly affects HCV infection and replication. Therefore, we designed a Pilot Study to assess whether Pioglitazone decreases HCV viral load in overweight HCV genotype 4, treatment naïve patients independently of standard of care PEG-interferon α-2b/ribavirin treatment. A secondary end-point was to analyze whether Prednisone, a drug that induces insulin resistance in susceptible patients [12] and stimulates HCV viral entry and replication in replicon culture systems [13], increases HCV viral load in this population. There were no data available on the effect of Pioglitazone or Prednisone on HCV RNA viral load in patients. Here, we provide the first documentation that Pioglitazone decreases the serum HCV genotype 4 RNA titers independently of treatment with PEG-Interferon-α2/ribavirin. Prednisone did increase serum HCV RNA viral load at day-4 in this cohort.\n\nBODY.METHODS:\nThe protocol for this trial and supporting CONSORT checklist are available as supporting information; see Checklist S1 and Protocol S1.\n\nBODY.METHODS.PARTICIPANTS:\nEligible participants were all overweight (BMI>25 kg/m2), given their susceptibility for IR, adults age 25 to 65 years, with active HCV genotype 4 chronic infection (>10,000 IU/ml viral load) who had a liver biopsy within two years of recruitment (added to the protocol), were treatment naïve (never treated for HCV infection) and met the eligibility criteria for the Study. Exclusion criteria included cirrhosis, the presence of other liver diseases, diabetes, pregnancy, significant clinical co-morbidities, and the inability to provide informed consent. Participants were recruited from July 2010 to February 2011 at the primary care referral of subjects of a disadvantaged social-economical group at the Agouza Hospital in Giza, Egypt. 
The estimated prevalence of Hepatitis C in Egypt is 12% in adults.\n\nBODY.METHODS.ETHICS:\nWritten informed consent was obtained by the medical staff before entry into the Study. The study was registered at the clinicaltrials.gov website (NCT-01157975) and received Human Subjects' Research approval at the University of California, San Diego (UCSD Human Research protections Program) and at the Agouza Hospital, Giza, Egypt (Director's Research Board) before starting the study. The study protocol conforms to the ethical guidelines of the 1975 Declaration of Helsinki.\n\nBODY.METHODS.OBJECTIVES:\nBecause the objective of the Study was to determine whether Pioglitazone and Prednisone treatment would affect the HCV RNA viral load (decrease and increase it, respectively), and these were not a treatment of the HCV infection in this situation, we decided not to use a placebo since we utilized de-identified samples to blinding the investigators analyzing the outcomes.\n\nBODY.METHODS.INTERVENTIONS:\nEligible subjects were randomly assigned to receive orally Pioglitazone 30 mg/day for 14 days or Prednisone 40 mg/day for 4 days. These medications were taken in the morning. Visits were at screening (up to 30 days before randomization); randomization (day -2); baseline visit (day 0); and follow-up visits for the Pioglitazone group (on days 2, 4, 7 and 14) and the Prednisone group (on days 2 and 4). For safety issues subjects were continued to be seen off protocol at day-7 post-treatment. Blood samples were obtained at each visit for HCV RNA viral load and for inflammatory, adipogenic, and insulin resistance markers. In addition, a 2-hr oral 75 g glucose tolerance test (OGTT) was performed after a 10-hr fast at day-0 before administration of the medication and at the last day of treatment (day-14 for Pioglitazone and day-4 for Prednisone). On all other visits, patients were also fasting. Patients received the first dose of the study drug (oral Pioglitazone 30 mg or Prednisone 40 mg) after blood samples were obtained on day 0. Compliance with the medication was >90% as assessed at each visit.\n\nBODY.METHODS.OUTCOMES:\nThe primary outcome with respect to the effect of Pioglitazone in HCV genotype 4 infections was the decrease in HCV RNA viral load at day 14. The secondary outcome with respect to the effect of Prednisone in HCV genotype 4 infections was the increase in HCV RNA viral load at day 4 instead of at day 14 for safety reasons given the potential adverse events associated with longer corticosteroid treatment. Additional secondary outcome measures were insulin resistance (oral glucose tolerance test; serum insulin, C-peptide; HOMA-IR; insulin sensitivity check index (QUICKI); HOMA-B; sex hormone BG (SHBG); resistin; adiponectin) and liver injury (alanine aminotransferase (ALT); aspartate aminotransferase (AST)) and inflammation (serum interleukin (IL)-8, IL-28B, tumor necrosis factor (TNF), IL-1α, IL-1β, IL-6, IL-8, IL-10, IFN-γ, monocyte chemokine protein (MCP) and hepatocyte growth factor (HGF)). De-identified blood samples from all the enrolled subjects that received the treatments were analyzed for the standard clinical laboratory tests and for HCV RNA viral load using PCR amplification (COBAS, TACKMAN) (lowest detectable limit: 50 IU/ml). A multiplex peptide detection system (Quansys Biosciences, Logan, UT) was utilized according to the manufacturer's protocol to determine inflammatory, adipogenic, and insulin resistance markers. 
Values were calculated from individual pixels using the Q-View Imager system. An enzyme-linked immunosorbent assay kit was used to determine human adiponectin according to the manufacturer's protocol (BioVision, Mountain View, CA). Surrogate indicators of insulin resistance and β-cell function were analyzed as described previously [14]. The indicators of insulin resistance, HOMA1-IR and QUICKI, were determined. HOMA1-IR = (FSI×FSG)/22.5, where FSI is the fasting serum insulin concentration (μU/ml) and FSG is fasting serum glucose (mmol/l). QUICKI = 1/[log (fasting insulin, μU/ml) + log (fasting glucose, mg/dl)]. β-Cell function was assessed by HOMA-B = (20×I0)/(G0−3.5), where I0 = fasting insulin in μU/ml and G0 = fasting glucose in mmol/l [14] (a brief code sketch of these indices is given below).\n\nBODY.METHODS.SAMPLE SIZE:\nThere were no data available on the effect of Pioglitazone or Prednisone on HCV RNA viral load. However, we considered a reduction of 25% in HCV RNA viral load in 14 days to be relevant as proof of principle of the effects of Pioglitazone on HCV RNA titers. To detect a reduction of 25% in HCV RNA in the Pioglitazone group at day-14, assuming a standard deviation of 20%, with a two-sided 5% significance level and a power of 90%, a sample size of 9 patients per group completing the study was necessary, and it was decided to enroll at least 19 subjects per group given an anticipated dropout rate of 50%, mainly prior to dosing. Because the cultural environment (unwillingness to tell the research staff that they cannot participate in the study) and the social conditions (such as severe problems with transportation and employer's authorization) were expected to prevent a significant proportion of patients from attending the Clinic for follow-up visits after randomization, a larger number of eligible subjects were randomized to achieve the desirable number of subjects completing the Study. To recruit this number of patients a 12-month inclusion period was anticipated. No interim analysis was performed during the trial. An intention-to-treat analysis was conducted in patients who had received ≥1 dose of study medication (10 patients in each group, the optimal number based on the sample size and power analysis).\n\nBODY.METHODS.RANDOMISATION—SEQUENCE GENERATION:\nParticipants were randomly assigned following simple randomization procedures (computerized random numbers) to one of two treatment groups. Independent pharmacists dispensed either Pioglitazone or Prednisone according to a computer generated randomization list.\n\nBODY.METHODS.RANDOMIZATION—ALLOCATION CONCEALMENT:\nThe decision to accept or reject a participant was made, and informed consent was obtained from the participant, in ignorance of the next assignment in the sequence.\n\nBODY.METHODS.RANDOMIZATION—IMPLEMENTATION:\nBlock randomization was by a computer generated random number list prepared by a pharmacist with no clinical involvement in the trial. After the research nurse had obtained the subject's consent, she contacted the pharmacist, who was independent of the recruitment process, for allocation assignment.\n\nBODY.METHODS.BLINDING:\nWhereas patients and physicians allocated to the intervention groups were aware of the allocated arm, outcome assessors and data analysts of primary and secondary end-points were kept blinded to the allocation.\n\nBODY.METHODS.STATISTICAL METHODS:\nAn intention-to-treat analysis was conducted in patients who had received ≥1 dose of study medication. The primary and secondary endpoints were assessed with a paired-sample t-test.
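As referenced in the Methods above, the surrogate indices HOMA1-IR, QUICKI and HOMA-B are simple arithmetic on fasting insulin and glucose; the sketch below uses illustrative values only (not patient data) and assumes the conventional units of μU/ml for insulin and mmol/l (HOMA) or mg/dl (QUICKI) for glucose.

```python
# Surrogate insulin-resistance and beta-cell indices as defined in the Methods.
# Units assumed: insulin in uIU/mL; glucose in mmol/L (HOMA) or mg/dL (QUICKI).
import math

def homa_ir(fasting_insulin_uIU_ml, fasting_glucose_mmol_l):
    return (fasting_insulin_uIU_ml * fasting_glucose_mmol_l) / 22.5

def homa_b(fasting_insulin_uIU_ml, fasting_glucose_mmol_l):
    return 20.0 * fasting_insulin_uIU_ml / (fasting_glucose_mmol_l - 3.5)

def quicki(fasting_insulin_uIU_ml, fasting_glucose_mg_dl):
    return 1.0 / (math.log10(fasting_insulin_uIU_ml) +
                  math.log10(fasting_glucose_mg_dl))

# Illustrative round numbers: insulin 10 uIU/mL, glucose 100 mg/dL (~5.55 mmol/L)
print(homa_ir(10.0, 5.55))   # ~2.5, in the range reported in table 4
print(homa_b(10.0, 5.55))    # ~98
print(quicki(10.0, 100.0))   # ~0.33
```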
We also explored correlations between IL8 and TNFα using Pearson's correlation coefficient with 95% confidence intervals. The significance level was fixed at α = 5% for all tests. All analyses were carried using R version 2.12.2 (2011 Vienna, Austria; http://www.R-project.org).\n\nBODY.RESULTS.PARTICIPANT FLOW:\nThe optimal subjects' recruitment number was pre-determined by sample size and power analysis as described above. One hundredth and five consecutive patients with chronic Hepatitis C genotype 4 referred to the Hepatology Clinic were assessed for eligibility (\nFig. 1\n). Among these 105 patients, 66 were excluded because they did not meet the eligibility criteria (42 subjects), they refused to participate (8 subjects), or for other reasons (16 subjects). Therefore, 39 patients were randomized; 20 to the Pioglitazone group and 19 to the Prednisone group. Ten patients in each group received the allocated intervention, while 10 patients in the Pioglitazone group and 9 patients in the Prednisone group were unable to participate because of their inability to attend the Clinic visits. These patients did not receive the allocated intervention (\nFig. 1\n). The randomized groups that received the allocated intervention were balanced in regards to age, gender, HCV genotype and titers and the severity of their liver disease (\nTable 1\n). 10.1371/journal.pone.0031516.g001Figure 1Participant Flow.Patients with chronic Hepatitis C genotype 4 referred to the Hepatology Clinic were assessed for eligibility as described in Methods. 10.1371/journal.pone.0031516.t001 Table 1 Baseline demographic and clinical characteristics of treated subjects. Pioglitazone (N = 10) Prednisone (N = 10) \n P value \n \n Age (years) 44 (9.3) 42(10.4) 0.592 \n Sex (male) 7 (70%) 8 (80%) 0.429 \n Ethnic origin: Arab \n 10 (100%) 10 (100%) 1.000 \n Height (cm) 173 (5.8) 175(8.7) 0.458 \n Weight (kg) 88.3 (9.2) 85.7(9.9) 0.550 \n BMI (kg/m 2 ) 29.5(1.5) 27.9(1.6) 0.027 \n HCV Genotype 4 \n 10 (100%) 10 (100%) 1.000 \n HCV RNA (log 10 ) 5.41(0.80) 5.10(0.92) 0.432 \n Albumin (g/dl) 3.8(0.3) 3.8(0.3) 0.939 \n Total bilirubin (mg/dl) 0.97(0.15) 0.90 (0.16) 0.319 \n ALT (IU/ml) 63 (23) 53 (14) 0.283 \n AST (IU/ml) 54 (17) 52 (11) 0.795 \n Metavir score: \n \n A (inflammation) A1(n:5) A2(n:5) A1(n:5) A2(n:5) 1.000 \n F (fibrosis) F1(n:7); F2(n:3) F1(n:6); F2(n:4) 0.639 Data are means (SD) or numbers (%). The patients that did receive the allocated intervention were similar in demographics, ethnicity and degree of liver disease to those that did not receive the allocated intervention (\nTable 2\n). By chance allocation, the serum ALT was lower in the group that received Prednisone than in the group that did not received the allocated Prednisone (53 vs 69 IU/ml; P = 0.012) (\nTables 1\n\nand\n\n2\n). 10.1371/journal.pone.0031516.t002 Table 2 Baseline demographic and clinical characteristics of non-treated subjects. 
Pioglitazone (N = 10) \n P value \n Prednisone (N = 9) \n P value \n \n Age (years) 39 (4.2) 0.139 43(11.2) 0.842 \n Sex (male) 6 (60%) 0.660 5 (55%) 0.277 \n Ethnic origin: Arab \n 10 (100%) 1.000 9 (100%) 1.000 \n Height (cm) 172 (5.9) 0.706 177(8.1) 0.612 \n Weight (kg) 85.7 (7.6) 0.499 84.8(8.8) 0.837 \n BMI (kg/m 2 ) 28.8(2.1) 0.402 28.2(3.7) 0.818 \n HCV Genotype 4 \n 10 (100%) 1.000 9 (100%) 1.000 \n ALT (IU/ml) 68 (10) 0.536 69 (10) 0.012 \n AST (IU/ml) 63 (12) 0.188 57 (8) 0.278 \n Metavir score: \n \n A (inflammation) A1(n:4); A2(n:6) 0.673 A1(n:4); A2(n:5) 0.821 \n F (fibrosis) F1(n:5); F2(n:5) 0.388 F1(n:5); F2(n:4) 0.855 Data are means (SD) or numbers (%). P values are for comparisons between treated and untreated subjects within the same arm. No participant was excluded after randomization because they were found not to meet eligibility criteria. All the patients allocated to the interventions (10 in each arm) completed the study (\nFig. 1\n).\n\nBODY.RESULTS.BASELINE DEMOGRAPHIC AND CLINICAL CHARACTERISTICS:\nAn intention-to-treat analysis was conducted in patients who had received ≥1 dose of study medication; all of these patients had a subsequent efficacy observation (10 patients in each group). As outlined in \nTable 1\n, all patients were middle age Egyptian Arabs, predominantly males that had HCV genotype 4 with mild or moderate inflammation (A1 and A2, respectively) and mild or moderate fibrosis (F1 and F2, respectively) on their liver biopsy. They had normal albumin and total bilirubin and mild increase in ALT/AST. All subjects were overweight and five were obese (body mass index, BMI; the weight in kg divided by the square of the height in meters was >29 kg/m2: 3 in the Pioglitazone group and 2 in the Prednisone group). By chance allocation, the BMI was higher in the group receiving Pioglitazone than in the group receiving Prednisone (29.5 vs 27.9 kg/m2; P = 0.027) (\nTable 1\n). There was no indication that significant chance imbalances had occurred between the two groups. Further, no direct analysis between the two groups was intended for the primary or secondary end points. Rather, a comparison between baseline and post-treatment results within each group was the Pilot Study's intent.\n\nBODY.RESULTS.EFFECT OF PIOGLITAZONE ON SERUM HCV RNA TITERS:\nPioglitazone decreased serum HCV RNA titers in overweight HCV genotype 4, treatment naïve patients. In the Pioglitazone-treated group, the log10 transformed HCV RNA titers were statistically different from baseline day-0 (5.41+/−0.80) on day-14 (5.24+/−0.79; P<0.001) and day-7 (5.30+/−0.78; P = 0.009) (\nTable 3\n), providing evidence that Pioglitazone was effective in decreasing HCV RNA in this cohort of patients. Pioglitazone treatment decreased the HCV RNA from baseline in 9 out of 10 subjects at day-7. The log10 HCV RNA values were also decreased by day-4 but they were statistically not different from baseline (5.38+/−0.78; P = 0.059). The log10 HCV RNA values were not affected at day-2 by Pioglitazone treatment. Further, the decrease in the log10 HCV RNA between day-7 and day-14 was highly significant (mean change = 0.063; 95% CI, 0.045; P<0.001) (\nTable 3\n). 10.1371/journal.pone.0031516.t003 Table 3 Effects of Pioglitazone on serum HCV RNA titers. 
Day HCV (IU/ml ×10 −3 ) \n P value \n \n 95% CI \n HCV log 10 \n \n P value \n \n 95% CI \n \n 0 \n 608.2(696.4) 5.41(0.80) \n 2 \n 586.9(663.9) 0.178 −11.5, 53.9 5.41(0.78) 0.854 −0.029 \n 4 \n 531.5(553.6) 0.151 −33.6, 186.9 5.38(0.78) 0.077 −0.002 \n 7 \n 455.3(504.4) 0.044 4.8, 300.7 5.30(0.78) 0.001 −0.156 \n 14 \n 402.5(452.4) 0.029 26.6, 384.6 5.24(0.79) <0.001 −0.215 Values are mean (SD) of HCV RNA titers determined as described in Methods . In spite of the variable baseline HCV RNA viral load, Pioglitazone also decreased significantly the absolute HCV RNA levels at day-14 (mean change = 205,618 IU/ml; P = 0.028) and at day-7 (mean change = 152,772 IU/ml; P = 0.044) (\nTable 3\n).\n\nBODY.RESULTS.EFFECT OF PIOGLITAZONE ON INDICES OF INSULIN RESISTANCE:\nPioglitazone treatment decreased by day-14 fasting serum glucose (Glucose-0) (mean change = 8.2 mg/dL; P = 0.024) as well as the 2-hr glucose (Glucose-120 min) after an OGTT (mean change = 18.8 mg/dL; P = 0.001) (\nTable 4\n). 10.1371/journal.pone.0031516.t004 Table 4 Effects of Pioglitazone on glucose homeostasis, insulin resistance and secretion. Test Day-0 Day-14 \n P value \n \n 95% CI \n \n Glucose-0 (mg/dL) 105 (11.4) 96.8 (6.8) 0.024 1.3, 15.1 \n Glucose-120 (mg/dL) 120.6 (11.9) 101.8 (7.4) 0.0004 10.9, 26.7 \n HOMA-IR \n 2.71 (1.61) 2.38 (1.60) 0.326 −0.95 \n QUICK-I \n 0.339 (0.031) 0.348 (0.036) 0.587 \n …… \n \n HOMA-B \n 92.65 (57.5) 115.26 (90.85) 0.52 −99.6, 54.4 \n Insulin (µIU/ml) 10.4 (6.2) 10.1 (7.0) 0.914 −6.2, 6.8 \n C-peptide (nmol/L) 6.4 (9.4) 9.6 (8.5) 0.278 \n − 9.5, 3.1 \n SHBG (µg/ml) 3.8 (2.4) 1.7 (2.1) 0.062 −140 \n Adiponectin (µg/ml) 3.19 (1.32) 3.64 (1.71) 0.512 \n ……‥ \n \n Resistin (ng/ml) 2,0 (0.2) 1,7 (0.6) 0.116 −2.0 Values are mean (+/− SD). Tests were performed as described in Methods . However, Pioglitazone treatment did not affect by day-14 the following surrogate indicators of insulin resistance and secretion: HOMA-IR; QUICK-I; and HOMA-B. In addition, other surrogate indicators of insulin resistance were not affected by Pioglitazone treatment on day-14, including fasting serum insulin; C-peptide; SHBG; resistin; and adiponectin (\nTable 4\n).\n\nBODY.RESULTS.EFFECT OF PIOGLITAZONE ON INDICATORS OF LIVER INJURY AND INFLAMMATION:\nElevated serum ALT and AST values are surrogate indicators of liver injury used in patient care and clinical research [9]. Pioglitazone treatment decreased at day-14 serum ALT levels (mean change = 7.5 IU/ml; 95% CI: 2.1 to 12.9; P = 0.012) and to a smaller degree serum AST levels (mean change = 2.6 IU/ml; 95% CI: −0.6 to 5.8; P = 0.096) (\nTable 5\n). 10.1371/journal.pone.0031516.t005 Table 5 Effects of Pioglitazone on liver injury indicators. Day ALT (IU/L) \n P value \n \n 95% CI \n AST (IU/L) \n P value \n \n 95% CI \n \n 0 \n 62.7 (23.1) 53.6 (15.6) \n 14 \n 55.2(18.7) 0.0118 2.1, 12.9 51.0 (14.0) 0.0963 −0.6, 5.8 Values are mean (+/− SD). ALT and AST were determined as described in Methods . It has been suggested that serum IL-8 values are elevated in HCV infection and that these values may reflect the effects of the HCV on il-8 gene transcription [15]. Pioglitazone increased serum IL-8 levels at day-14 when compared to baseline day-0 values (mean change = −166.8 IU/ml; 95% CI: −300.1 to −33.5; P = 0.019) (\nTable 6\n). Given the possible induction of IL-8 expression by TNF-α [15], we analyzed post-hoc their correlation by a Pearson's correlation coefficient. 
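As context for the correlation reported immediately after this sketch (r = 0.64, 95% CI 0.28 to 0.85, P = 0.002, n = 20), the following is a minimal illustration of a Pearson correlation with a Fisher-z 95% confidence interval. The authors ran their analyses in R 2.12.2; this Python/NumPy/SciPy version is only a sketch, and the data arrays are synthetic because the individual patient IL-8 and TNF-α values are not published.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-in for the 20 paired observations (10 patients at day-0 and day-14).
# These values are NOT the trial data; they only demonstrate the calculation.
tnf_alpha = rng.gamma(shape=2.0, scale=80.0, size=20)            # pg/ml
il8 = 100.0 + 1.2 * tnf_alpha + rng.normal(0.0, 80.0, size=20)   # pg/ml

r, p_value = stats.pearsonr(il8, tnf_alpha)

# 95% confidence interval for r via Fisher's z-transformation.
n = len(il8)
z = np.arctanh(r)                # Fisher z of the sample correlation
se = 1.0 / np.sqrt(n - 3)        # approximate standard error of z
z_crit = stats.norm.ppf(0.975)   # two-sided 5% significance level
lo, hi = np.tanh([z - z_crit * se, z + z_crit * se])

print(f"r = {r:.2f}, 95% CI ({lo:.2f}, {hi:.2f}), P = {p_value:.3f}")
```

Newer SciPy releases can also return this interval directly from the `pearsonr` result via `confidence_interval()`, which is a reasonable cross-check of the manual Fisher-z computation.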
We found that there was a relatively strong correlation between serum IL-8 and serum TNF-α levels for day-0 and day-14 (n = 20; r = 0.64; 95% CI = 0.28 to 0.85; P = 0.002). 10.1371/journal.pone.0031516.t006 Table 6 Effects of Pioglitazine on serum cytokines. Serum Cytokine Day-0 Day-14 \n P value \n \n 95% CI \n \n IL-8 (pg/ml) 181.0 (182.8) 347.8 (123.9) 0.020 −300, −33.5 \n IL-28B (pg/ml) 837.2 (1,068) 1,597 (955.5) 0.060 −1,564, 47 \n TNF-α (pg/ml) 92.9 (139.6) 275.5 (268.5) 0.056 −371.7, 6.6 \n IL-1α (pg/ml) 136.6 (324.4) 223.6 (452.1) 0.415 −317, 143 \n IL-1β (pg/ml) 627.4 (1,046) 724.0 (1,098) 0.748 −760, 566 \n IL-2 (pg/ml) 13.9 (11.1) 16.1 (10.1) 0.632 −12.3, 7.9 \n IL-4 (pg/ml) 10.9 (4.9) 10.4 (4.3) 0.817 −4.8, 5.9 \n IL-6 (pg/ml) 337.9 (454.8) 560.5 (482.1) 0.280 −661, 216 \n IL-10 (pg/ml) 32.4 (58.2) 27.6 (44.6) 0.831 −44.7, 54.3 \n IFN-γ (pg/ml) 5.7(4.2) 7.4 (4.6) 0.403 −5.9, 2.6 \n MCP-1 (pg/ml) 338.4(151.4) 330.6(157.9) 0.915 155, 170 \n HGF (pg/ml) 1008 (736) 975 (603) 0.914 −665, 732 Values are mean (+/− SD). Serum cytokines were determined as described in Methods . We found no statistically significant differences between day-0 and day-14 in serum IL-28B, TNF-α, IL-1α, IL-1β, IL-6, IL-8, IL-10, IFN-γ, MCP-1 and HGF (\nTable 6\n).\n\nBODY.RESULTS.EFFECT OF PREDNISONE ON SERUM HCV RNA:\nPrednisone increased the absolute HCV RNA level at day-2 (mean change = −35,636 IU/ml; 95% CI −64,665 to −6,607; P = 0.021) and at day-4 (mean change = −42,786 IU/ml; 95% CI −85,556 to −15.7; P = 0.049) (\nTable 7\n), providing some evidence that Prednisone may be effective in increasing HCV RNA in this cohort of patients for that short period of time. When compared to their baseline day-0 values, eight of 10 subjects had an increase in HCV RNA at day-4. Prednisone treatment also increased HCV RNA from baseline day-0 in 9 of 10 subjects at day-2 (\nTable 7\n). However, in the Prednisone-treated group, the HCV RNA titers expressed as log10 values on day-2 and day-4 were not statistically different from baseline day-0 (\nTable 7\n). 10.1371/journal.pone.0031516.t007 Table 7 Effects of Prednisone on serum HCV RNA titers. Day HCV (IU/ml ×10 −3 ) \n P value \n \n 95% CI \n HCV log 10 \n \n P value \n \n 95% CI \n \n 0 \n 498.0(805.3) 5.10(0.92) \n 2 \n 533.7(834.4) 0.0215 −64.6,−6.6 5.14(0.94) 0.059 0.08 \n 4 \n 540.8(855.2) 0.0499 −85.5,−15.7 5.13(0.92) 0.18 0.03 Values are mean (SD). HCV RNA titers were determined as described in Methods . \n\nBODY.RESULTS.EFFECT OF PREDNISONE ON INDICES OF INSULIN RESISTANCE:\nPrednisone treatment increased by day-4, when compared to baseline day-0 values, fasting serum glucose (Glucose-0) (mean change = 9.0 mg/dL; 95% CI: 20.4 to −2.4; P = 0.108) as well as the 2-hr (Glucose-120 min) after an OGTT (mean change = 11.0 mg/dL; 95% CI: 17.9 to 4.1; P = 0.006). Collectively, the CI and t-tests suggest that Prednisone increased serum glucose levels at day-4 (\nTable 8\n). However, Prednisone treatment did not affect by day-4 other surrogate indicators of insulin resistance and secretion (\nTable 8\n). 10.1371/journal.pone.0031516.t008 Table 8 Effects of Prednisone on glucose homeostasis, insulin resistance and secretion. 
Test Day-0 Day-4 \n P value \n \n 95% CI \n \n Glucose-0 (mg/dL) 88.0 (14.7) 97.0 (12.9) 0.107 −20.4, 2.4 \n Glucose-120 (mg/dL) 96.4 (10.5) 107.4 (12.3) 0.0055 −17.9, −4.1 \n HOMA-IR \n 1.73 (1.06) 2.55(1.96) 0.162 −1.06 \n QUICK-I \n 0.358(0.022) 0.345 (0.032) 0.275 \n HOMA-B \n 95.48 (49.35) 115.86 (95.96) 0.623 −20.38 \n Insulin (µIU/ml) 7.5(3.9) 10.5 (8.2) 0.374 −10.2, 4.4 \n C-peptide (nmol/L) 11.3 (8.9) 4.1 (1.3) 0.061 −0.5, 14.9 \n SHBG (µg/ml) 3.4(2.5) 2.7 (2.2) 0.591 −2.4 \n Adiponectin (µg/ml) 5.9 (8.2) 2.6 (2.1) 0.306 −4.5 \n Resistin (ng/ml) 3.4 (2.5) 2.7 (2.2) 0.591 −2.4 Values are mean (+/− SD). Tests were performed as described in Methods . \n\nBODY.RESULTS.EFFECT OF PREDNISONE ON INDICATORS OF LIVER INJURY AND INFLAMMATION:\nPrednisone treatment did not affect at day-4 either serum ALT, AST, IL-8 or other inflammatory mediators (\nTables 9\n\n and \n\n10\n). 10.1371/journal.pone.0031516.t009 Table 9 Effects of Prednisone on liver injury indicators. Day ALT (IU/L) \n P value \n \n 95% CI \n AST (IU/L) \n P value \n \n 95% CI \n \n 0 \n 53.2 (14.4) 52.0(11.3) \n 4 \n 55.7 (15.7) 0.080 5.4, −0.4 52.2 (12.8) 0.896 3.6, −3.2 Values are mean (+/− SD). ALT and AST were determined as described in Methods . 10.1371/journal.pone.0031516.t010 Table 10 Effects of Prednisone on serum cytokines. Serum Cytokine Day-0 Day-4 \n P value \n \n 95% CI \n \n IL-8 (pg/ml) 123.1 (163.5) 159.5 (185.1) 0.353 −123, 50 \n IL-28B (pg/ml) 485.9(876.4) 610.8 (857.7) 0.120 −292, 42 \n TNF-α (pg/ml) 146.6 (268.8) 146.1 (284.4) 0.996 −275, 276 \n IL-1α (pg/ml) 80.1 (208.3) 19.2(38.3) 0.450 −119, 240 \n IL-1β (pg/ml) 92.7 (120.3) 144.8 (293.0) 0.682 −348, 244 \n IL-2 (pg/ml) 11.8 (9.8) 11.3 (9.6) 0.742 −3.1, 4.2 \n IL-4 (pg/ml) 19.3 (19.1) 10.9 (7.2) 0.329 −10.5, 27.3 \n IL-6 (pg/ml) 292.8 (445.7) 359.9 (501.4) 0.492 −282, 149 \n IL-10 (pg/ml) 67.4 (106.8) 30.8 (51.2) 0.144 −16.1, 89.4 \n IFNγ (pg/ml) 13.9 (24.9) 4.9 (4.3) 0.342 −11.9, 29.9 \n MCP-1 (pg/ml) 232.7 (126.1) 266.8 (112.8) 0.469 −139, 71 \n HGF (pg/ml) 1345 (800) 1022 (681) 0.483 −711, 1357 Values are mean (+/− SD). Serum cytokines were determined as described in Methods . \n\nBODY.RESULTS.ADVERSE EVENTS:\nThere were no serious adverse events in either treatment group. Mild or moderate adverse events unrelated to the allocated treatment occurred in 20% of subjects in the Prednisone group and in 30% of the subjects in the Pioglitazone group. There were no significant changes from baseline during treatment in safety laboratory tests in either treatment group.\n\nBODY.DISCUSSION:\nIt was not known whether Pioglitazone and Prednisone affect serum HCV genotype 4 RNA titers independently of PEG-Interferon-α2/ribavirin treatment. This Study documented for the first time that Pioglitazone (30 mg daily) decreases the serum HCV RNA titers after a 14-day treatment in overweight subjects with treatment naïve, chronic HCV genotype 4 infections. The statistical differences between the mean at day-0 (baseline) and day-14 were highly significant whether the serum HCV RNA was analyzed as IU/ml or log10 values (\nTable 3\n). Pioglitazone treatment also decreased the HCV genotype 4 titers on day-4 and day-7, suggesting a rapid effect on HCV RNA titers. The comparable mean decrease in HCV RNA for the first week and for the second week indicates a continuing effect during the 14-day treatment. It remains to be studied whether the rate of decrease of serum HCV RNA titers continues after day-14 in the Pioglitazone group. 
If the effects of Pioglitazone on HCV genotype 4 RNA titers were to continue at the same rate as in the first 2 weeks, they would achieve a clinically significant decrease of ∼1 log10 in HCV RNA titers by week-12, an important predictive timeline for a sustained virological response in HCV treatments [16] (a simple arithmetic check of this extrapolation appears below). Given the relative safety of Pioglitazone administration [9],[17], it could, as a co-adjuvant, offer a valuable therapeutic option for difficult-to-treat HCV genotype 4-infected patients. This Study also documented for the first time that Prednisone (40 mg daily) increases the serum HCV RNA titers after a 4-day treatment of overweight HCV genotype 4 patients (\nTable 7\n). Our results with Prednisone treatment are congruent with the recently reported increased HCV replication by corticosteroids in the Huh-7 replicon system [13], but whether, in HCV genotype 4-infected patients, this reflects a direct corticosteroid effect on HCV replication, an indirect effect mediated by insulin resistance, or another mechanism is unknown. Because the Huh-7 replicon system uses de-differentiated, tumoral liver cells that have higher expression of glucose metabolism enzymes (glucose-6-phosphate 1-dehydrogenase and isocitrate dehydrogenase) and of the mitochondrial dicarboxylate carrier [18], it may not be suitable for the evaluation of the interaction between glucose homeostasis and HCV replication. To minimize potential adverse events associated with the administration of Prednisone, its use was limited to 40 mg/day for a 4-day treatment, a relatively safe and frequent standard-of-care practice for some clinical conditions [19]. It remains to be determined whether a more prolonged corticosteroid treatment in HCV-infected patients has detrimental effects on RNA titers of different HCV genotypes. The subjects' recruitment number was pre-determined by the sample size and power analysis. An appropriate sample size was utilized in this study, which achieved the primary and secondary end-points with a relatively small cohort of patients. If the sample size of the study were unnecessarily large, then the risks to the subjects would have been excessive according to the CONSORT statement [20]. An intention-to-treat analysis was conducted in patients who had received ≥1 dose of study medication; all of these subjects completed the study and had a subsequent efficacy observation. Because these randomized arms were not treatments of the HCV infection in this Study, and because during chronic infection the level of serum HCV RNA is in a steady state with only minor fluctuations in untreated patients [21], instead of using demanding placebo arms, we utilized randomization as well as de-identified samples to blind the investigators analyzing the outcomes. These approaches minimize biases in executing the study and in analyzing the outcomes, in accordance with the CONSORT statement [20]. Triple combination therapy of Pioglitazone/PEG-Interferon-α2/ribavirin has been used unsuccessfully for previously treated HCV genotype 1 infection in Switzerland [10] and in the USA [22], [23], and successfully for treatment naive HCV genotype 4 infection in Egypt [11]. It is unknown whether the beneficial effects of Pioglitazone together with PEG-Interferon-α2/ribavirin are circumscribed to HCV genotype 4 or to treatment naïve HCV of various genotypes. 
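A quick arithmetic check of the week-12 extrapolation above, using only the Table 3 group means (our back-of-the-envelope calculation, not one reported by the authors): the mean log10 titer fell from 5.41 at day-0 to 5.24 at day-14, and 12 weeks contains six such 14-day intervals.

```latex
\[
\Delta_{14\,\mathrm{d}} \approx 5.41 - 5.24 = 0.17\ \log_{10}\,\mathrm{IU/ml},
\qquad
\Delta_{12\,\mathrm{wk}} \approx \frac{84\ \mathrm{d}}{14\ \mathrm{d}} \times 0.17 \approx 1.0\ \log_{10}\,\mathrm{IU/ml},
\]
```

which is consistent with the ∼1 log10 decrease the authors project, under the strong assumption of a constant rate of decline beyond day-14.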
Although the relatively low 15 mg Pioglitazone dose used in the INSPIRED-HCV trial in Switzerland [10] could have been insufficient to achieve efficacy, higher Pioglitazone doses (30 mg or 45 mg per day) have been utilized in HCV genotype 1 infection trials together with PEG-Interferon-α2/ribavirin without improving the sustained virological response [22], [23]. These higher Pioglitazone doses (30 mg or 45 mg per day) have been successfully utilized to prevent the development of type 2 diabetes in patients with impaired glucose tolerance in the USA [8], in the treatment of HCV genotype 4 treatment naive patients together with PEG-Interferon-α2/ribavirin in Egypt [11] and in our Study. Further, the known effects of Pioglitazone and related PPARγ agonists on glucose homeostasis [8],[12], as documented in our Study, together with their effects on serum HCV RNA titers (\nTable 3\n), suggest, but do not prove, that improving insulin sensitivity lowers HCV RNA titers. This has been proposed by Harrison on the basis that patients with QUICK-I values <0.35 (equivalent to HOMA-IR>1.8) have significantly higher HCV titers than patients with QUICK-I values >0.35 [24]. Further, treating patients with hepatitis C genotype 1 and insulin resistance using metformin improves insulin sensitivity and increases the SVR rate in patients who reached a HOMA value lower than 2 at week 24 of therapy and in women, in whom the therapy doubled the SVR rate [25]. The findings of our Study support the notion that sequential treatment with Pioglitazone and PEG-Interferon-α2/ribavirin, rather than concomitant administration, should be considered in the treatment of chronic HCV genotype 4 infections, as suggested by Serfaty and coworkers [26]. The beneficial effects of Pioglitazone treatment on glucose homeostasis have been well documented in patients with impaired glucose tolerance [8], and they were also reported with triple combination therapy of Pioglitazone/PEG-Interferon-α2/ribavirin in patients with HCV genotype 1 or 4 [10],[11]. In our Study, Pioglitazone treatment decreased fasting serum glucose and the 2-hr glucose after an OGTT by day-14, whereas Prednisone increased them by day-4 (\nTable 4\n\n and \n\n8\n\n). However, when compared to baseline day-0 values, neither Pioglitazone treatment at day-14 nor Prednisone treatment at day-4 affected HOMA-IR, QUICK-I, or HOMA-B, surrogate indicators of insulin resistance and secretion (\nTables 4\n\n and \n\n8\n). It would probably require a more prolonged treatment with Pioglitazone and Prednisone to induce changes in these surrogate indicators of insulin resistance and secretion [26]. Elevated serum ALT and AST values are surrogate indicators of liver injury used in patient care and clinical research [9]. Pioglitazone treatment decreased serum ALT levels at day-14 and, to a smaller degree, serum AST levels (\nTable 5\n). Additional insights into the effects of Pioglitazone on liver injury in chronic HCV infection would require assessment of liver biopsies before and after prolonged treatment, since the effects on serum ALT were clinically modest at day-14. Prednisone treatment did not affect serum ALT or AST levels at day-4 (\nTable 9\n). Many experimental and clinical studies have shown that PPARγ signaling has anti-inflammatory properties in the liver [27]. Interestingly, HCV infection in Huh-7 cells and in humans inhibits PPARγ expression and function [27]. Two studies suggested therapeutic effects of bezafibrate, a non-selective agonist of PPAR α/δ/γ, during chronic HCV infection. 
Patients with HCV genotypes 1 and 2 unresponsive to interferon therapy were treated for 8 weeks or 6 months with bezafibrate [28], [29]. At endpoint, HCV viral load (−0.5 log10) and ALT were statistically significantly decreased, suggesting anti-inflammatory and antiviral activities of the PPAR α/δ/γ signaling pathways in patients with chronic HCV infection. Given the non-selective nature of the PPAR α/δ/γ agonist, it is difficult to determine whether induction of PPARγ pathways occurred and played a role in the decreased HCV RNA. Pioglitazone has also shown improvement in liver injury and inflammation in patients with non-alcoholic steatohepatitis [9], and this has been attributed to the beneficial effects of Pioglitazone on insulin signaling [8]. An alternative explanation is a possible direct effect of Pioglitazone on liver injury, either as a PPARγ agonist or through other mechanisms, independently of its insulin sensitizing actions [8]. Indeed, PPARγ agonists induce alternative, anti-inflammatory M2 macrophage activation [30], possibly linking insulin sensitivity with decreased liver injury in patients with HCV infection. At the molecular level, insulin resistance is promoted by a transition from an anti-inflammatory macrophage M2 state maintained by STAT6 and PPARγ to a pro-inflammatory macrophage M1 state driven by C/EBPβ and NF-κB activation and other transcription factors that play important roles in innate immunity [30]. Relevant to the management of HCV infected patients, liver fibrogenesis is expected to be ameliorated by PPARγ signaling [31] as well as by decreased inflammatory pathways arising from M2 macrophages [32] and resulting in decreased activation of hepatic stellate cells [33]. Pioglitazone increased serum IL-8 levels at day-14 when compared to baseline day-0 values (\nTable 6\n). Serum IL-8 values are elevated in HCV infection [34] and these values may reflect the effects of HCV on il-8 gene transcription as well as its induction by TNFα [15]. We found that there was a relatively strong correlation between serum IL-8 and serum TNF-α levels for day-0 and day-14. Although it has been suggested that elevated levels of serum IL-8 may be associated with HCV resistance to Interferon-α2 therapy, the clinical or biological significance of the elevated serum IL-8 in HCV genotype 4 patients treated with Pioglitazone remains to be determined. Prednisone did not affect serum IL-8 levels at day-4 when compared to baseline day-0 values (\nTable 10\n). We found no differences between day-0 and day-14 in the Pioglitazone group and day-4 in the Prednisone group in other inflammatory markers such as serum IL-28B, TNF-α, IL-1α, IL-1β, IL-6, IL-8, IL-10, IFN-γ, MCP-1 and HGF (\nTables 6\n\n and \n\n10\n). The lack of changes in IL-28B (IFN-λ3) is relevant since genetic variation in IL-28B affects the spontaneous clearance of HCV in patients [35]. Our study has some limitations: i] there were, as expected, clinically modest decreases in HCV RNA at day-14 in the Pioglitazone group; ii] the relatively small cohort could introduce potential biases; and iii] the findings may not be applicable to patients with HCV genotype 4 in other ethnic groups. It remains to be elucidated whether the effects of Pioglitazone also occur in HCV genotype 4 infections in previously treated patients. The novel findings of our Study provide the foundation for basic and clinical investigations on the molecular mechanisms responsible for the Pioglitazone-induced decrease in HCV genotype 4 RNA titers. 
Future research should characterize whether the effects of Pioglitazone on HCV genotype 4 RNA titers are mediated indirectly by insulin signaling pathways or directly by affecting HCV genotype 4 hepatocyte entry and/or replication.\n\nBODY.SUPPORTING INFORMATION:\nChecklist S1\nCONSORT Checklist. CONSORT 2010 checklist of information to include when reporting a randomised trial.(DOC)Click here for additional data file. Protocol S1\nTrial Protocol.\n(DOC)Click here for additional data file.\n\n**Question:** Compared to baseline what was the result of Pioglitazone (30 mg/day for 14 days) on serum IL-8 levels at day-14?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
1161
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Pioglitazone Decreases Hepatitis C Viral Load in Overweight, Treatment Naïve, Genotype 4 Infected-Patients: A Pilot Study\n\n ABSTRACT.BACKGROUND:\nInsulin resistance (IR) is induced by chronic hepatitis C virus (HCV) genotypes 1 and 4 infections. It is not known whether drugs that affect IR such as Pioglitazone and Prednisone also affect serum HCV RNA titers independently of PEG-Interferon-α2/ribavirin treatment. The primary aim was to assess whether Pioglitazone by improving IR and/or inflammation decreases HCV viral load independently of standard of care HCV treatment. A secondary aim was to assess whether Prednisone, a drug that induces insulin resistance and stimulates HCV viral entry and replication in replicon culture systems, increases HCV viral load in this population.\n\nABSTRACT.METHODOLOGY/PRINCIPAL FINDINGS:\nWe designed a two-arm, parallel Pilot Study of overweight, treatment naïve genotype 4 HCV-infected patients at a public referral Liver Clinic in Giza, Egypt. The subjects received Pioglitazone (30 mg/day for 14 days) or Prednisone (40 mg/day for 4 days) in a randomized fashion, but the two arms can be considered independent pilot studies. Only changes from baseline within each arm were assessed and no contrasts of the interventions were made, as this was not an aim of the study. Among 105 consecutive HCV genotype 4 patients, 39 were enrolled based on the optimal sample size and power analysis according to the CONSORT statement; 20 to the Pioglitazone group and 19 to the Prednisone group. Pioglitazone was effective in decreasing serum HCV RNA at day-14 (n = 10; difference of means = 205,618 IU/ml; 95% CI 26,600 to 384,600; P<0.001). Although Prednisone did increase serum HCV RNA at day-4 (n = 10; change from baseline = −42,786 IU/ml; 95% CI −85,500 to −15,700; P = 0.049), the log10 HCV RNA titers were statistically not different from baseline day-0.\n\nABSTRACT.CONCLUSION/SIGNIFICANCE:\nThis is the first documentation that Pioglitazone decreases the serum HCV RNA titers independently of PEG-Interferon-α2/ribavirin treatment. The novel findings of our Study provide the foundation for basic and clinical investigations on the molecular mechanisms responsible for the Pioglitazone-induced decrease in HCV genotype 4 RNA titers.\n\nABSTRACT.TRIAL REGISTRATION:\nClinicalTrials.gov NCT01157975\n\n\nBODY.INTRODUCTION:\nChronic hepatitis C viral (HCV) infection markedly increased the risk for type 2 diabetes in cohorts with recognized diabetes risk factors [1]. A meta-analysis also found that HCV infection is associated with excess type 2 diabetes risk [2]. 
Moreover, IR is a specific feature of chronic HCV genotypes 1 and 4 infections since its prevalence is ∼7-fold higher among these patients compared to chronic Hepatitis B-infected patients [3], and changes in IR occur among patients with chronic HCV infection during and after treatment of HCV with PEG-interferon-α and ribavirin [4]. In patients with genotypes 1, 2 and 3 increased IR was associated with decreased sustained virological response (SVR) [5]. Among patients infected with genotype 1, decreases in Homeostasis Model Assessment (HOMA)-IR correlated with obtaining an SVR and increases in HOMA-IR were observed in patients who relapsed [6]. Similarly, IR may correlate with the virological response to standard of care treatment of HCV genotype 4 patients in Egypt [7]. Because Pioglitazone, a peroxisome proliferator-activated receptor (PPAR)-γ agonist, improves insulin sensitivity [8] and it is also effective in controlling liver inflammation in subjects with non-alcoholic steatohepatitis (NASH) [9], it has been used as triple therapy in addition to PEG-interferon-α and ribavirin to treat HCV-infected patients. The study in HCV genotype 1 non-responders was terminated when none of the five patients responded to the triple therapy at week 12 [10]. In contrast, Pioglitazone improved virological response to PEG-interferon α-2b/ribavirin combination therapy in hepatitis C genotype 4 treatment naïve patients with insulin resistance [11]. However, the mechanisms by which Pioglitazone influenced SVR in treatment naïve patients with HCV genotype 4 infection remain unknown. It is unknown whether Pioglitazone modifies the bioavailability of PEG-interferon α-2b/ribavirin or directly or indirectly affects HCV infection and replication. Therefore, we designed a Pilot Study to assess whether Pioglitazone decreases HCV viral load in overweight HCV genotype 4, treatment naïve patients independently of standard of care PEG-interferon α-2b/ribavirin treatment. A secondary end-point was to analyze whether Prednisone, a drug that induces insulin resistance in susceptible patients [12] and stimulates HCV viral entry and replication in replicon culture systems [13], increases HCV viral load in this population. There were no data available on the effect of Pioglitazone or Prednisone on HCV RNA viral load in patients. Here, we provide the first documentation that Pioglitazone decreases the serum HCV genotype 4 RNA titers independently of treatment with PEG-Interferon-α2/ribavirin. Prednisone did increase serum HCV RNA viral load at day-4 in this cohort.\n\nBODY.METHODS:\nThe protocol for this trial and supporting CONSORT checklist are available as supporting information; see Checklist S1 and Protocol S1.\n\nBODY.METHODS.PARTICIPANTS:\nEligible participants were all overweight (BMI>25 kg/m2), given their susceptibility for IR, adults age 25 to 65 years, with active HCV genotype 4 chronic infection (>10,000 IU/ml viral load) who had a liver biopsy within two years of recruitment (added to the protocol), were treatment naïve (never treated for HCV infection) and met the eligibility criteria for the Study. Exclusion criteria included cirrhosis, the presence of other liver diseases, diabetes, pregnancy, significant clinical co-morbidities, and the inability to provide informed consent. Participants were recruited from July 2010 to February 2011 at the primary care referral of subjects of a disadvantaged social-economical group at the Agouza Hospital in Giza, Egypt. 
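The inclusion and exclusion rules above amount to a simple screening checklist. The sketch below encodes them for illustration only; the `Candidate` record and `is_eligible` helper are hypothetical names, not part of the trial's materials, and the thresholds are taken directly from the Participants section.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """Minimal, hypothetical record of the screening variables named in the protocol."""
    age_years: int
    bmi_kg_m2: float
    hcv_genotype: int
    hcv_rna_iu_ml: float
    biopsy_within_2_years: bool
    treatment_naive: bool
    has_exclusion: bool  # cirrhosis, other liver disease, diabetes, pregnancy,
                         # significant comorbidity, or inability to consent

def is_eligible(c: Candidate) -> bool:
    """Eligibility screen as described in the Participants section (illustrative only)."""
    return (
        25 <= c.age_years <= 65
        and c.bmi_kg_m2 > 25
        and c.hcv_genotype == 4
        and c.hcv_rna_iu_ml > 10_000
        and c.biopsy_within_2_years
        and c.treatment_naive
        and not c.has_exclusion
    )

# Example: a 44-year-old, BMI 29.5, genotype 4, viral load 2.6e5 IU/ml, biopsy on file,
# treatment naive, no exclusion criteria -> eligible.
print(is_eligible(Candidate(44, 29.5, 4, 2.6e5, True, True, False)))
```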
The estimated prevalence of Hepatitis C in Egypt is 12% in adults.\n\nBODY.METHODS.ETHICS:\nWritten informed consent was obtained by the medical staff before entry into the Study. The study was registered at the clinicaltrials.gov website (NCT-01157975) and received Human Subjects' Research approval at the University of California, San Diego (UCSD Human Research protections Program) and at the Agouza Hospital, Giza, Egypt (Director's Research Board) before starting the study. The study protocol conforms to the ethical guidelines of the 1975 Declaration of Helsinki.\n\nBODY.METHODS.OBJECTIVES:\nBecause the objective of the Study was to determine whether Pioglitazone and Prednisone treatment would affect the HCV RNA viral load (decrease and increase it, respectively), and these were not a treatment of the HCV infection in this situation, we decided not to use a placebo since we utilized de-identified samples to blinding the investigators analyzing the outcomes.\n\nBODY.METHODS.INTERVENTIONS:\nEligible subjects were randomly assigned to receive orally Pioglitazone 30 mg/day for 14 days or Prednisone 40 mg/day for 4 days. These medications were taken in the morning. Visits were at screening (up to 30 days before randomization); randomization (day -2); baseline visit (day 0); and follow-up visits for the Pioglitazone group (on days 2, 4, 7 and 14) and the Prednisone group (on days 2 and 4). For safety issues subjects were continued to be seen off protocol at day-7 post-treatment. Blood samples were obtained at each visit for HCV RNA viral load and for inflammatory, adipogenic, and insulin resistance markers. In addition, a 2-hr oral 75 g glucose tolerance test (OGTT) was performed after a 10-hr fast at day-0 before administration of the medication and at the last day of treatment (day-14 for Pioglitazone and day-4 for Prednisone). On all other visits, patients were also fasting. Patients received the first dose of the study drug (oral Pioglitazone 30 mg or Prednisone 40 mg) after blood samples were obtained on day 0. Compliance with the medication was >90% as assessed at each visit.\n\nBODY.METHODS.OUTCOMES:\nThe primary outcome with respect to the effect of Pioglitazone in HCV genotype 4 infections was the decrease in HCV RNA viral load at day 14. The secondary outcome with respect to the effect of Prednisone in HCV genotype 4 infections was the increase in HCV RNA viral load at day 4 instead of at day 14 for safety reasons given the potential adverse events associated with longer corticosteroid treatment. Additional secondary outcome measures were insulin resistance (oral glucose tolerance test; serum insulin, C-peptide; HOMA-IR; insulin sensitivity check index (QUICKI); HOMA-B; sex hormone BG (SHBG); resistin; adiponectin) and liver injury (alanine aminotransferase (ALT); aspartate aminotransferase (AST)) and inflammation (serum interleukin (IL)-8, IL-28B, tumor necrosis factor (TNF), IL-1α, IL-1β, IL-6, IL-8, IL-10, IFN-γ, monocyte chemokine protein (MCP) and hepatocyte growth factor (HGF)). De-identified blood samples from all the enrolled subjects that received the treatments were analyzed for the standard clinical laboratory tests and for HCV RNA viral load using PCR amplification (COBAS, TACKMAN) (lowest detectable limit: 50 IU/ml). A multiplex peptide detection system (Quansys Biosciences, Logan, UT) was utilized according to the manufacturer's protocol to determine inflammatory, adipogenic, and insulin resistance markers. 
Values were calculated from individual pixels using the Q-View Imager system. An enzyme-linked immunosorbent assay kit was used to determine human adiponectin according to the manufacturer's protocol (BioVision, Mountain View, CA). Surrogate indicators of insulin resistance and β-cell function were analyzed as described previously [14]. The indicators of insulin resistance, HOMA1-IR and QUICKI, were determined. HOMA1-IR = (FSI×FSG)/22.5, where FSI is the fasting serum insulin concentration (μU/ml) and FSG is the fasting serum glucose (mmol/l). QUICKI = 1/[log(fasting insulin, μU/ml) + log(fasting glucose, mg/dl)]. β-Cell function was assessed by HOMA-B = (20×I0)/(G0−3.5), where I0 is the fasting insulin in μU/ml and G0 is the fasting glucose in mmol/l [14].\n\nBODY.METHODS.SAMPLE SIZE:\nThere were no data available on the effect of Pioglitazone or Prednisone on HCV RNA viral load. However, we considered a reduction of 25% in HCV RNA viral load in 14 days to be relevant as proof of principle of the effects of Pioglitazone on HCV RNA titers. To detect a reduction of 25% in HCV RNA in the Pioglitazone group at day-14, assuming a standard deviation of 20%, with a two-sided 5% significance level and a power of 90%, a sample size of 9 patients per group completing the study was necessary, and it was decided to enroll at least 19 subjects per group given an anticipated dropout rate of 50%, mainly prior to dosing. Because the cultural environment (unwillingness to tell the research staff that they cannot participate in the study) and the social conditions (such as severe problems with transportation and employer's authorization) were expected to prevent a significant proportion of patients from attending the Clinic for follow-up visits after randomization, a larger number of eligible subjects were randomized to achieve the desired number of subjects completing the Study. To recruit this number of patients, a 12-month inclusion period was anticipated. No interim analysis was performed during the trial. An intention-to-treat analysis was conducted in patients who had received ≥1 dose of study medication (10 patients in each group was the optimal number based on the sample size and power analysis).\n\nBODY.METHODS.RANDOMIZATION—SEQUENCE GENERATION:\nParticipants were randomly assigned following simple randomization procedures (computerized random numbers) to one of two treatment groups. Independent pharmacists dispensed either Pioglitazone or Prednisone according to a computer-generated randomization list.\n\nBODY.METHODS.RANDOMIZATION—ALLOCATION CONCEALMENT:\nThe decision to accept or reject a participant was made, and informed consent was obtained from the participant, in ignorance of the next assignment in the sequence.\n\nBODY.METHODS.RANDOMIZATION—IMPLEMENTATION:\nBlock randomization was by a computer-generated random number list prepared by a pharmacist with no clinical involvement in the trial. After the research nurse had obtained the subject's consent, she contacted the pharmacist, who was independent of the recruitment process, for allocation assignment.\n\nBODY.METHODS.BLINDING:\nWhereas patients and physicians allocated to the intervention groups were aware of the allocated arm, outcome assessors and data analysts of primary and secondary end-points were kept blinded to the allocation.\n\nBODY.METHODS.STATISTICAL METHODS:\nAn intention-to-treat analysis was conducted in patients who had received ≥1 dose of study medication. The primary and secondary endpoints were assessed with a paired-sample t-test. 
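To make the HOMA1-IR, QUICKI and HOMA-B formulas given in the Outcomes section above concrete, here is a minimal calculation sketch. The function names are ours and purely illustrative; the formulas are the ones stated in the Methods, and the example inputs are the Pioglitazone group's day-0 means from Table 4 (fasting insulin ≈10.4 μU/ml, fasting glucose ≈105 mg/dl), so the outputs only approximately reproduce the tabulated index means (indices computed from group means are not the mean of individually computed indices).

```python
import math

def homa1_ir(insulin_uU_ml: float, glucose_mmol_l: float) -> float:
    """HOMA1-IR = (FSI x FSG) / 22.5; insulin in uU/ml, glucose in mmol/l."""
    return (insulin_uU_ml * glucose_mmol_l) / 22.5

def quicki(insulin_uU_ml: float, glucose_mg_dl: float) -> float:
    """QUICKI = 1 / [log10(fasting insulin, uU/ml) + log10(fasting glucose, mg/dl)]."""
    return 1.0 / (math.log10(insulin_uU_ml) + math.log10(glucose_mg_dl))

def homa_b(insulin_uU_ml: float, glucose_mmol_l: float) -> float:
    """HOMA-B = 20 x I0 / (G0 - 3.5); insulin in uU/ml, glucose in mmol/l."""
    return 20.0 * insulin_uU_ml / (glucose_mmol_l - 3.5)

# Pioglitazone group, day-0 means (Table 4): insulin ~10.4 uU/ml, glucose ~105 mg/dl.
insulin = 10.4
glucose_mg_dl = 105.0
glucose_mmol_l = glucose_mg_dl / 18.0   # approximate mg/dl -> mmol/l conversion

print(round(homa1_ir(insulin, glucose_mmol_l), 2))  # ~2.70 (Table 4 reports 2.71)
print(round(quicki(insulin, glucose_mg_dl), 3))     # ~0.329 (Table 4 reports 0.339)
print(round(homa_b(insulin, glucose_mmol_l), 1))    # ~89 (Table 4 reports 92.65)
```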
We also explored correlations between IL8 and TNFα using Pearson's correlation coefficient with 95% confidence intervals. The significance level was fixed at α = 5% for all tests. All analyses were carried using R version 2.12.2 (2011 Vienna, Austria; http://www.R-project.org).\n\nBODY.RESULTS.PARTICIPANT FLOW:\nThe optimal subjects' recruitment number was pre-determined by sample size and power analysis as described above. One hundredth and five consecutive patients with chronic Hepatitis C genotype 4 referred to the Hepatology Clinic were assessed for eligibility (\nFig. 1\n). Among these 105 patients, 66 were excluded because they did not meet the eligibility criteria (42 subjects), they refused to participate (8 subjects), or for other reasons (16 subjects). Therefore, 39 patients were randomized; 20 to the Pioglitazone group and 19 to the Prednisone group. Ten patients in each group received the allocated intervention, while 10 patients in the Pioglitazone group and 9 patients in the Prednisone group were unable to participate because of their inability to attend the Clinic visits. These patients did not receive the allocated intervention (\nFig. 1\n). The randomized groups that received the allocated intervention were balanced in regards to age, gender, HCV genotype and titers and the severity of their liver disease (\nTable 1\n). 10.1371/journal.pone.0031516.g001Figure 1Participant Flow.Patients with chronic Hepatitis C genotype 4 referred to the Hepatology Clinic were assessed for eligibility as described in Methods. 10.1371/journal.pone.0031516.t001 Table 1 Baseline demographic and clinical characteristics of treated subjects. Pioglitazone (N = 10) Prednisone (N = 10) \n P value \n \n Age (years) 44 (9.3) 42(10.4) 0.592 \n Sex (male) 7 (70%) 8 (80%) 0.429 \n Ethnic origin: Arab \n 10 (100%) 10 (100%) 1.000 \n Height (cm) 173 (5.8) 175(8.7) 0.458 \n Weight (kg) 88.3 (9.2) 85.7(9.9) 0.550 \n BMI (kg/m 2 ) 29.5(1.5) 27.9(1.6) 0.027 \n HCV Genotype 4 \n 10 (100%) 10 (100%) 1.000 \n HCV RNA (log 10 ) 5.41(0.80) 5.10(0.92) 0.432 \n Albumin (g/dl) 3.8(0.3) 3.8(0.3) 0.939 \n Total bilirubin (mg/dl) 0.97(0.15) 0.90 (0.16) 0.319 \n ALT (IU/ml) 63 (23) 53 (14) 0.283 \n AST (IU/ml) 54 (17) 52 (11) 0.795 \n Metavir score: \n \n A (inflammation) A1(n:5) A2(n:5) A1(n:5) A2(n:5) 1.000 \n F (fibrosis) F1(n:7); F2(n:3) F1(n:6); F2(n:4) 0.639 Data are means (SD) or numbers (%). The patients that did receive the allocated intervention were similar in demographics, ethnicity and degree of liver disease to those that did not receive the allocated intervention (\nTable 2\n). By chance allocation, the serum ALT was lower in the group that received Prednisone than in the group that did not received the allocated Prednisone (53 vs 69 IU/ml; P = 0.012) (\nTables 1\n\nand\n\n2\n). 10.1371/journal.pone.0031516.t002 Table 2 Baseline demographic and clinical characteristics of non-treated subjects. 
Pioglitazone (N = 10) \n P value \n Prednisone (N = 9) \n P value \n \n Age (years) 39 (4.2) 0.139 43(11.2) 0.842 \n Sex (male) 6 (60%) 0.660 5 (55%) 0.277 \n Ethnic origin: Arab \n 10 (100%) 1.000 9 (100%) 1.000 \n Height (cm) 172 (5.9) 0.706 177(8.1) 0.612 \n Weight (kg) 85.7 (7.6) 0.499 84.8(8.8) 0.837 \n BMI (kg/m 2 ) 28.8(2.1) 0.402 28.2(3.7) 0.818 \n HCV Genotype 4 \n 10 (100%) 1.000 9 (100%) 1.000 \n ALT (IU/ml) 68 (10) 0.536 69 (10) 0.012 \n AST (IU/ml) 63 (12) 0.188 57 (8) 0.278 \n Metavir score: \n \n A (inflammation) A1(n:4); A2(n:6) 0.673 A1(n:4); A2(n:5) 0.821 \n F (fibrosis) F1(n:5); F2(n:5) 0.388 F1(n:5); F2(n:4) 0.855 Data are means (SD) or numbers (%). P values are for comparisons between treated and untreated subjects within the same arm. No participant was excluded after randomization because they were found not to meet eligibility criteria. All the patients allocated to the interventions (10 in each arm) completed the study (\nFig. 1\n).\n\nBODY.RESULTS.BASELINE DEMOGRAPHIC AND CLINICAL CHARACTERISTICS:\nAn intention-to-treat analysis was conducted in patients who had received ≥1 dose of study medication; all of these patients had a subsequent efficacy observation (10 patients in each group). As outlined in \nTable 1\n, all patients were middle age Egyptian Arabs, predominantly males that had HCV genotype 4 with mild or moderate inflammation (A1 and A2, respectively) and mild or moderate fibrosis (F1 and F2, respectively) on their liver biopsy. They had normal albumin and total bilirubin and mild increase in ALT/AST. All subjects were overweight and five were obese (body mass index, BMI; the weight in kg divided by the square of the height in meters was >29 kg/m2: 3 in the Pioglitazone group and 2 in the Prednisone group). By chance allocation, the BMI was higher in the group receiving Pioglitazone than in the group receiving Prednisone (29.5 vs 27.9 kg/m2; P = 0.027) (\nTable 1\n). There was no indication that significant chance imbalances had occurred between the two groups. Further, no direct analysis between the two groups was intended for the primary or secondary end points. Rather, a comparison between baseline and post-treatment results within each group was the Pilot Study's intent.\n\nBODY.RESULTS.EFFECT OF PIOGLITAZONE ON SERUM HCV RNA TITERS:\nPioglitazone decreased serum HCV RNA titers in overweight HCV genotype 4, treatment naïve patients. In the Pioglitazone-treated group, the log10 transformed HCV RNA titers were statistically different from baseline day-0 (5.41+/−0.80) on day-14 (5.24+/−0.79; P<0.001) and day-7 (5.30+/−0.78; P = 0.009) (\nTable 3\n), providing evidence that Pioglitazone was effective in decreasing HCV RNA in this cohort of patients. Pioglitazone treatment decreased the HCV RNA from baseline in 9 out of 10 subjects at day-7. The log10 HCV RNA values were also decreased by day-4 but they were statistically not different from baseline (5.38+/−0.78; P = 0.059). The log10 HCV RNA values were not affected at day-2 by Pioglitazone treatment. Further, the decrease in the log10 HCV RNA between day-7 and day-14 was highly significant (mean change = 0.063; 95% CI, 0.045; P<0.001) (\nTable 3\n). 10.1371/journal.pone.0031516.t003 Table 3 Effects of Pioglitazone on serum HCV RNA titers. 
Day HCV (IU/ml ×10 −3 ) \n P value \n \n 95% CI \n HCV log 10 \n \n P value \n \n 95% CI \n \n 0 \n 608.2(696.4) 5.41(0.80) \n 2 \n 586.9(663.9) 0.178 −11.5, 53.9 5.41(0.78) 0.854 −0.029 \n 4 \n 531.5(553.6) 0.151 −33.6, 186.9 5.38(0.78) 0.077 −0.002 \n 7 \n 455.3(504.4) 0.044 4.8, 300.7 5.30(0.78) 0.001 −0.156 \n 14 \n 402.5(452.4) 0.029 26.6, 384.6 5.24(0.79) <0.001 −0.215 Values are mean (SD) of HCV RNA titers determined as described in Methods . In spite of the variable baseline HCV RNA viral load, Pioglitazone also decreased significantly the absolute HCV RNA levels at day-14 (mean change = 205,618 IU/ml; P = 0.028) and at day-7 (mean change = 152,772 IU/ml; P = 0.044) (\nTable 3\n).\n\nBODY.RESULTS.EFFECT OF PIOGLITAZONE ON INDICES OF INSULIN RESISTANCE:\nPioglitazone treatment decreased by day-14 fasting serum glucose (Glucose-0) (mean change = 8.2 mg/dL; P = 0.024) as well as the 2-hr glucose (Glucose-120 min) after an OGTT (mean change = 18.8 mg/dL; P = 0.001) (\nTable 4\n). 10.1371/journal.pone.0031516.t004 Table 4 Effects of Pioglitazone on glucose homeostasis, insulin resistance and secretion. Test Day-0 Day-14 \n P value \n \n 95% CI \n \n Glucose-0 (mg/dL) 105 (11.4) 96.8 (6.8) 0.024 1.3, 15.1 \n Glucose-120 (mg/dL) 120.6 (11.9) 101.8 (7.4) 0.0004 10.9, 26.7 \n HOMA-IR \n 2.71 (1.61) 2.38 (1.60) 0.326 −0.95 \n QUICK-I \n 0.339 (0.031) 0.348 (0.036) 0.587 \n …… \n \n HOMA-B \n 92.65 (57.5) 115.26 (90.85) 0.52 −99.6, 54.4 \n Insulin (µIU/ml) 10.4 (6.2) 10.1 (7.0) 0.914 −6.2, 6.8 \n C-peptide (nmol/L) 6.4 (9.4) 9.6 (8.5) 0.278 \n − 9.5, 3.1 \n SHBG (µg/ml) 3.8 (2.4) 1.7 (2.1) 0.062 −140 \n Adiponectin (µg/ml) 3.19 (1.32) 3.64 (1.71) 0.512 \n ……‥ \n \n Resistin (ng/ml) 2,0 (0.2) 1,7 (0.6) 0.116 −2.0 Values are mean (+/− SD). Tests were performed as described in Methods . However, Pioglitazone treatment did not affect by day-14 the following surrogate indicators of insulin resistance and secretion: HOMA-IR; QUICK-I; and HOMA-B. In addition, other surrogate indicators of insulin resistance were not affected by Pioglitazone treatment on day-14, including fasting serum insulin; C-peptide; SHBG; resistin; and adiponectin (\nTable 4\n).\n\nBODY.RESULTS.EFFECT OF PIOGLITAZONE ON INDICATORS OF LIVER INJURY AND INFLAMMATION:\nElevated serum ALT and AST values are surrogate indicators of liver injury used in patient care and clinical research [9]. Pioglitazone treatment decreased at day-14 serum ALT levels (mean change = 7.5 IU/ml; 95% CI: 2.1 to 12.9; P = 0.012) and to a smaller degree serum AST levels (mean change = 2.6 IU/ml; 95% CI: −0.6 to 5.8; P = 0.096) (\nTable 5\n). 10.1371/journal.pone.0031516.t005 Table 5 Effects of Pioglitazone on liver injury indicators. Day ALT (IU/L) \n P value \n \n 95% CI \n AST (IU/L) \n P value \n \n 95% CI \n \n 0 \n 62.7 (23.1) 53.6 (15.6) \n 14 \n 55.2(18.7) 0.0118 2.1, 12.9 51.0 (14.0) 0.0963 −0.6, 5.8 Values are mean (+/− SD). ALT and AST were determined as described in Methods . It has been suggested that serum IL-8 values are elevated in HCV infection and that these values may reflect the effects of the HCV on il-8 gene transcription [15]. Pioglitazone increased serum IL-8 levels at day-14 when compared to baseline day-0 values (mean change = −166.8 IU/ml; 95% CI: −300.1 to −33.5; P = 0.019) (\nTable 6\n). Given the possible induction of IL-8 expression by TNF-α [15], we analyzed post-hoc their correlation by a Pearson's correlation coefficient. 
We found that there was a relatively strong correlation between serum IL-8 and serum TNF-α levels for day-0 and day-14 (n = 20; r = 0.64; 95% CI = 0.28 to 0.85; P = 0.002). 10.1371/journal.pone.0031516.t006 Table 6 Effects of Pioglitazine on serum cytokines. Serum Cytokine Day-0 Day-14 \n P value \n \n 95% CI \n \n IL-8 (pg/ml) 181.0 (182.8) 347.8 (123.9) 0.020 −300, −33.5 \n IL-28B (pg/ml) 837.2 (1,068) 1,597 (955.5) 0.060 −1,564, 47 \n TNF-α (pg/ml) 92.9 (139.6) 275.5 (268.5) 0.056 −371.7, 6.6 \n IL-1α (pg/ml) 136.6 (324.4) 223.6 (452.1) 0.415 −317, 143 \n IL-1β (pg/ml) 627.4 (1,046) 724.0 (1,098) 0.748 −760, 566 \n IL-2 (pg/ml) 13.9 (11.1) 16.1 (10.1) 0.632 −12.3, 7.9 \n IL-4 (pg/ml) 10.9 (4.9) 10.4 (4.3) 0.817 −4.8, 5.9 \n IL-6 (pg/ml) 337.9 (454.8) 560.5 (482.1) 0.280 −661, 216 \n IL-10 (pg/ml) 32.4 (58.2) 27.6 (44.6) 0.831 −44.7, 54.3 \n IFN-γ (pg/ml) 5.7(4.2) 7.4 (4.6) 0.403 −5.9, 2.6 \n MCP-1 (pg/ml) 338.4(151.4) 330.6(157.9) 0.915 155, 170 \n HGF (pg/ml) 1008 (736) 975 (603) 0.914 −665, 732 Values are mean (+/− SD). Serum cytokines were determined as described in Methods . We found no statistically significant differences between day-0 and day-14 in serum IL-28B, TNF-α, IL-1α, IL-1β, IL-6, IL-8, IL-10, IFN-γ, MCP-1 and HGF (\nTable 6\n).\n\nBODY.RESULTS.EFFECT OF PREDNISONE ON SERUM HCV RNA:\nPrednisone increased the absolute HCV RNA level at day-2 (mean change = −35,636 IU/ml; 95% CI −64,665 to −6,607; P = 0.021) and at day-4 (mean change = −42,786 IU/ml; 95% CI −85,556 to −15.7; P = 0.049) (\nTable 7\n), providing some evidence that Prednisone may be effective in increasing HCV RNA in this cohort of patients for that short period of time. When compared to their baseline day-0 values, eight of 10 subjects had an increase in HCV RNA at day-4. Prednisone treatment also increased HCV RNA from baseline day-0 in 9 of 10 subjects at day-2 (\nTable 7\n). However, in the Prednisone-treated group, the HCV RNA titers expressed as log10 values on day-2 and day-4 were not statistically different from baseline day-0 (\nTable 7\n). 10.1371/journal.pone.0031516.t007 Table 7 Effects of Prednisone on serum HCV RNA titers. Day HCV (IU/ml ×10 −3 ) \n P value \n \n 95% CI \n HCV log 10 \n \n P value \n \n 95% CI \n \n 0 \n 498.0(805.3) 5.10(0.92) \n 2 \n 533.7(834.4) 0.0215 −64.6,−6.6 5.14(0.94) 0.059 0.08 \n 4 \n 540.8(855.2) 0.0499 −85.5,−15.7 5.13(0.92) 0.18 0.03 Values are mean (SD). HCV RNA titers were determined as described in Methods . \n\nBODY.RESULTS.EFFECT OF PREDNISONE ON INDICES OF INSULIN RESISTANCE:\nPrednisone treatment increased by day-4, when compared to baseline day-0 values, fasting serum glucose (Glucose-0) (mean change = 9.0 mg/dL; 95% CI: 20.4 to −2.4; P = 0.108) as well as the 2-hr (Glucose-120 min) after an OGTT (mean change = 11.0 mg/dL; 95% CI: 17.9 to 4.1; P = 0.006). Collectively, the CI and t-tests suggest that Prednisone increased serum glucose levels at day-4 (\nTable 8\n). However, Prednisone treatment did not affect by day-4 other surrogate indicators of insulin resistance and secretion (\nTable 8\n). 10.1371/journal.pone.0031516.t008 Table 8 Effects of Prednisone on glucose homeostasis, insulin resistance and secretion. 
Test Day-0 Day-4 \n P value \n \n 95% CI \n \n Glucose-0 (mg/dL) 88.0 (14.7) 97.0 (12.9) 0.107 −20.4, 2.4 \n Glucose-120 (mg/dL) 96.4 (10.5) 107.4 (12.3) 0.0055 −17.9, −4.1 \n HOMA-IR \n 1.73 (1.06) 2.55(1.96) 0.162 −1.06 \n QUICK-I \n 0.358(0.022) 0.345 (0.032) 0.275 \n HOMA-B \n 95.48 (49.35) 115.86 (95.96) 0.623 −20.38 \n Insulin (µIU/ml) 7.5(3.9) 10.5 (8.2) 0.374 −10.2, 4.4 \n C-peptide (nmol/L) 11.3 (8.9) 4.1 (1.3) 0.061 −0.5, 14.9 \n SHBG (µg/ml) 3.4(2.5) 2.7 (2.2) 0.591 −2.4 \n Adiponectin (µg/ml) 5.9 (8.2) 2.6 (2.1) 0.306 −4.5 \n Resistin (ng/ml) 3.4 (2.5) 2.7 (2.2) 0.591 −2.4 Values are mean (+/− SD). Tests were performed as described in Methods . \n\nBODY.RESULTS.EFFECT OF PREDNISONE ON INDICATORS OF LIVER INJURY AND INFLAMMATION:\nPrednisone treatment did not affect at day-4 either serum ALT, AST, IL-8 or other inflammatory mediators (\nTables 9\n\n and \n\n10\n). 10.1371/journal.pone.0031516.t009 Table 9 Effects of Prednisone on liver injury indicators. Day ALT (IU/L) \n P value \n \n 95% CI \n AST (IU/L) \n P value \n \n 95% CI \n \n 0 \n 53.2 (14.4) 52.0(11.3) \n 4 \n 55.7 (15.7) 0.080 5.4, −0.4 52.2 (12.8) 0.896 3.6, −3.2 Values are mean (+/− SD). ALT and AST were determined as described in Methods . 10.1371/journal.pone.0031516.t010 Table 10 Effects of Prednisone on serum cytokines. Serum Cytokine Day-0 Day-4 \n P value \n \n 95% CI \n \n IL-8 (pg/ml) 123.1 (163.5) 159.5 (185.1) 0.353 −123, 50 \n IL-28B (pg/ml) 485.9(876.4) 610.8 (857.7) 0.120 −292, 42 \n TNF-α (pg/ml) 146.6 (268.8) 146.1 (284.4) 0.996 −275, 276 \n IL-1α (pg/ml) 80.1 (208.3) 19.2(38.3) 0.450 −119, 240 \n IL-1β (pg/ml) 92.7 (120.3) 144.8 (293.0) 0.682 −348, 244 \n IL-2 (pg/ml) 11.8 (9.8) 11.3 (9.6) 0.742 −3.1, 4.2 \n IL-4 (pg/ml) 19.3 (19.1) 10.9 (7.2) 0.329 −10.5, 27.3 \n IL-6 (pg/ml) 292.8 (445.7) 359.9 (501.4) 0.492 −282, 149 \n IL-10 (pg/ml) 67.4 (106.8) 30.8 (51.2) 0.144 −16.1, 89.4 \n IFNγ (pg/ml) 13.9 (24.9) 4.9 (4.3) 0.342 −11.9, 29.9 \n MCP-1 (pg/ml) 232.7 (126.1) 266.8 (112.8) 0.469 −139, 71 \n HGF (pg/ml) 1345 (800) 1022 (681) 0.483 −711, 1357 Values are mean (+/− SD). Serum cytokines were determined as described in Methods . \n\nBODY.RESULTS.ADVERSE EVENTS:\nThere were no serious adverse events in either treatment group. Mild or moderate adverse events unrelated to the allocated treatment occurred in 20% of subjects in the Prednisone group and in 30% of the subjects in the Pioglitazone group. There were no significant changes from baseline during treatment in safety laboratory tests in either treatment group.\n\nBODY.DISCUSSION:\nIt was not known whether Pioglitazone and Prednisone affect serum HCV genotype 4 RNA titers independently of PEG-Interferon-α2/ribavirin treatment. This Study documented for the first time that Pioglitazone (30 mg daily) decreases the serum HCV RNA titers after a 14-day treatment in overweight subjects with treatment naïve, chronic HCV genotype 4 infections. The statistical differences between the mean at day-0 (baseline) and day-14 were highly significant whether the serum HCV RNA was analyzed as IU/ml or log10 values (\nTable 3\n). Pioglitazone treatment also decreased the HCV genotype 4 titers on day-4 and day-7, suggesting a rapid effect on HCV RNA titers. The comparable mean decrease in HCV RNA for the first week and for the second week indicates a continuing effect during the 14-day treatment. It remains to be studied whether the rate of decrease of serum HCV RNA titers continues after day-14 in the Pioglitazone group. 
If the effects of Pioglitazone on in HCV genotype 4 RNA titers would continue at the same rate as for the first 2 weeks, it would achieve a clinically significant decrease of ∼1 log10 in HCV RNA titers by week-12, an important predictive timeline for a sustained virological response in HCV treatments [16]. Given the relative safety of Pioglitazone administration [9],[17], it could be, as a co-adjuvant, a valuable therapeutic effect for difficult to treat HCV genotype 4-infected patients. This Study also documented for the first time that Prednisone (40 mg daily) increases the serum HCV RNA titers after a 4-day treatment of HCV genotype 4 overweighed patients (\nTable 7\n). Our results with Prednisone treatment are congruent with the recently reported increased HCV replication by corticosteroids in the Huh-7 replicon system [13], but whether in HCV genotype 4-infected patients, this reflects a direct corticosteroid effect on HCV replication, an indirect effect mediated by insulin resistance, or another mechanism is unknown. Because the Huh-7 replicon system uses de-differentiated, tumoral liver cells that have higher expression of glucose metabolism enzymes (glucose-6-phosphate 1-dehydrogenase and isocitrate dehydrogenase) and of the mitochondrial dicarboxylate carrier [18], it may not be suitable for the evaluation of interaction between glucose homeostasis and HCV replication. To minimize potential adverse events associated with the administration of Prednisone, its use was limited to 40 mg/day for a 4-day treatment, a relatively safe and frequent standard of care practice for some clinical conditions [19]. It remains to be determined whether a more prolonged corticosteroid treatment in HCV-infected patients has detrimental effects on RNA titers of different HCV genotypes. The subjects' recruitment number was pre-determined by the sample size and power analysis. An appropriate sample size was utilized in this study, which achieved the primary and secondary end-points with a relatively small cohort of patients. If the sample size of the study were unnecessarily large, then the risks to the subjects would have been excessive according to the CONSORT statement [20]. An intention-to-treat analysis was conducted in patients who had received ≥1 dose of study medication; all of these subjects completed the study and had a subsequent efficacy observation. Because these randomized arms were not treatments of the HCV infection in this Study, and because during chronic infection, the level of serum HCV RNA is in a steady state with only minor fluctuations in untreated patients [21], instead of using demanding placebo arms, we utilized randomization as well as de-identified samples to blinding the investigators analyzing the outcomes. These approaches minimize biases in executing the study and in analyzing the outcomes according to the CONSORT statement [20]. Triple combination therapy of Pioglitazone/PEG-Interferon-α2/ribavirin has been used for previously treated HCV genotype 1 infection unsuccessfully in Switzerland [10] and in the USA [22], [23] or for treatment naive HCV genotype 4 infection successfully in Egypt [11]. It is unknown whether the beneficial effects of Pioglitazone together with PEG-Interferon-α2/ribavirin are circumscribed to HCV genotype 4 or to treatment naïve HCV of various genotypes. 
Although, the relatively low 15 mg Pioglitazone dose used in the INSPIRED-HCV trial in Switzerland [10] could have been insufficient to achieve efficacy, higher Pioglitazone doses (30 mg or 45 mg per day) have been utilized in HCV genotype 1 infection trials together with PEG-Interferon-α2/ribavirin without improving the sustained virological response [22], [23]. These higher Pioglitazone doses (30 mg or 45 mg per day) have been successfully utilized to prevent the development of type 2 diabetes in patients with impaired glucose tolerance in the USA [8], in the treatment of HCV genotype 4 treatment naive patients together with PEG-Interferon-α2/ribavirin in Egypt [11] and in our Study. Further, the known effects of Pioglitazone and related PPARγ agonists on glucose homeostasis [8],[12] and as documented in our Study, and their effects on serum HCV RNA titers (\nTable 3\n) suggest, but does not prove, that improving insulin sensitivity lowers HCV RNA titers. This has been proposed by Harrison on the basis that patients with QUICK-I values <0.35 (equivalent to HOMA-IR>1.8) have significantly higher HCV titers than patients with QUICK-I values >0.35 [24]. Further, treating patients with hepatitis C genotype 1 and insulin resistance using metformin improves insulin sensitivity and increases SVR rate in patients who reached HOMA lower than 2 at week 24 of therapy and in women, in whom the therapy doubled the SVR rate [25]. The findings of our Study support the notion that sequential treatment with Pioglitazone and PEG-Interferon-α2/ribavirin rather than concomitant administration should be considered in the treatment of chronic HCV genotype 4 infections, as suggested by Serfaty and coworkers [26]. The beneficial effects of Pioglitazone treatment on glucose homeostasis have been well documented in patients with impaired glucose tolerance [8], and it was also reported in triple combination therapy of Pioglitazone/PEG-Interferon-α2/ribavirin in patients with HCV genotype 1 or 4 [10],[11]. In our Study, Pioglitazone treatment decreased by day-14 and Prednisone increased by day-4 fasting serum glucose as well as the 2-hr glucose after an OGTT (\nTable 4\n\n and \n\n8\n\n). However, when compared to baseline day-0 values, neither Pioglitazone treatment at day-14 nor Prednisone treatment at day-4 affected HOMA-IR; QUICK-I; and HOMA-B, surrogate indicators of insulin resistance and secretion (\nTables 4\n\n and \n\n8\n). It would probably require a more prolonged treatment with Pioglitazone and Prednisone to induce changes in these surrogate indicators of insulin resistance and secretion [26]. Elevated serum ALT and AST values are surrogate indicators of liver injury used in patient care and clinical research [9]. Pioglitazone treatment decreased at day-14 serum ALT levels and to a smaller degree serum AST levels (\nTable 5\n). Additional insights into the mechanisms of Pioglitazone on liver injury in chronic HCV infection would require assessment of liver biopsies before and after prolonged treatment since the effects on serum ALT were clinically modest at day-14. Prednisone treatment did not affect at day-4 serum ALT or AST levels (\nTable 9\n). Many experimental and clinical studies have shown that PPARγ signaling has anti-inflammatory properties in the liver [27]. Interestingly, HCV infection in Huh-7 cells and in humans inhibits PPARγ expression and function [27]. Two studies suggested therapeutic effects of bezafibrate, a non-selective agonist of PPAR α/δ/γ, during chronic HCV infection. 
Patients with HCV genotypes 1 and 2 unresponsive to interferon therapy were treated for 8 weeks or 6 months with bezafibrate [28], [29]. At endpoint, HCV viral load (−0.5 log10) and ALT were statistically significantly decreased, suggesting anti-inflammatory and antiviral activities of the PPAR α/δ/γ signaling pathways in patients with chronic HCV infection. Given the non-selective nature of the PPAR α/δ/γ agonist, it is difficult to determine whether induction of PPARγ pathways occurred and played a role in the decreased HCV RNA. Pioglitazone has also shown improvement in liver injury and inflammation in patients with non-alcoholic steatohepatitis [9], and this has been attributed to the beneficial effects of Pioglitazone on insulin signaling [8]. An alternative explanation is a possible direct effect of Pioglitazone on liver injury, either as a PPARγ agonist or through other mechanisms, independently of its insulin-sensitizing actions [8]. Indeed, PPARγ agonists induce alternative, anti-inflammatory M2 macrophage activation [30], possibly linking insulin sensitivity with decreased liver injury in patients with HCV infection. At the molecular level, insulin resistance is promoted by a transition from an anti-inflammatory macrophage M2 state maintained by STAT6 and PPARγ to a pro-inflammatory macrophage M1 state driven by C/EBPβ and NF-κB activation and other transcription factors that play important roles in innate immunity [30]. Relevant to the management of HCV-infected patients, liver fibrogenesis is expected to be ameliorated by PPARγ signaling [31] as well as by decreased inflammatory pathways arising from M2 macrophages [32], resulting in decreased activation of hepatic stellate cells [33]. Pioglitazone increased serum IL-8 levels at day-14 when compared to baseline day-0 values (\nTable 6\n). Serum IL-8 values are elevated in HCV infection [34], and these values may reflect the effects of HCV on il-8 gene transcription as well as its induction by TNFα [15]. We found that there was a relatively strong correlation between serum IL-8 and serum TNF-α levels at day-0 and day-14. Although it has been suggested that elevated levels of serum IL-8 may be associated with HCV resistance to Interferon-α2 therapy, the clinical or biological significance of the elevated serum IL-8 in HCV genotype 4 patients treated with Pioglitazone remains to be determined. Prednisone did not affect serum IL-8 levels at day-4 when compared to baseline day-0 values (\nTable 10\n). We found no differences between day-0 and day-14 in the Pioglitazone group and day-4 in the Prednisone group in other inflammatory markers such as serum IL-28B, TNF-α, IL-1α, IL-1β, IL-6, IL-8, IL-10, IFN-γ, MCP-1 and HGF (\nTables 6 and 10\n). The lack of changes in IL-28B (IFN-λ3) is relevant since genetic variation in IL-28B affects the spontaneous clearance of HCV in patients [35]. Our study has some limitations: i] there were, as expected, clinically modest decreases in HCV RNA at day-14 in the Pioglitazone group; ii] the relatively small cohort could introduce potential biases; and iii] the findings may not be applicable to patients with HCV genotype 4 in other ethnic groups. It remains to be elucidated whether the effects of Pioglitazone also occur in HCV genotype 4 infections in previously treated patients. The novel findings of our Study provide the foundation for basic and clinical investigations on the molecular mechanisms responsible for the Pioglitazone-induced decrease in HCV genotype 4 RNA titers. 
Future research should characterize whether the effects of Pioglitazone on HCV genotype 4 RNA titers are mediated indirectly by insulin signaling pathways or directly by affecting HCV genotype 4 hepatocyte entry and/or replication.\n\nBODY.SUPPORTING INFORMATION:\nChecklist S1\nCONSORT Checklist. CONSORT 2010 checklist of information to include when reporting a randomised trial.(DOC)Click here for additional data file. Protocol S1\nTrial Protocol.\n(DOC)Click here for additional data file.\n\n**Question:** Compared to baseline what was the result of Pioglitazone (30 mg/day for 14 days) on day-14 in serum IL-28B, TNF-α, IL-1α, IL-1β, IL-6, IL-8, IL-10, IFN-γ, MCP-1 and HGF?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
1034
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: How do patients with severe mental diagnosis cope in everyday life - a qualitative study comparing patients’ experiences of self-referral inpatient treatment with treatment as usual?\n\n ABSTRACT.BACKGROUND:\nSeveral hospitals in Norway provide short self-referral inpatient treatment to patients with severe mental diagnosis. No studies have compared the experiences of patients who have had the opportunity to self-refer to inpatient treatment with patients who have received treatment as usual. This qualitative study was nested within a randomised controlled trial investigating the effect of self-referral to inpatient treatment. The aim was to explore how patients with severe mental diagnosis coped four months after signing a contract for self-referral, as compared to patients receiving treatment as usual.\n\nABSTRACT.METHODS:\nData was collected using qualitative individual interviews with patients with severe mental diagnosis, conducted four months after being randomised either to a contract for self-referral (intervention group) or to treatment as usual (control group).\n\nABSTRACT.RESULTS:\nTwenty-five patients participated in interviews - 11 from the intervention group and 14 from the control group. Results four months after randomisation showed that patients with a contract for self-referral appeared to have more confidence in strategies to cope with mental illness and to apply more active cognitive strategies. Patients with a contract also expressed less resignation, hopelessness and powerlessness than patients without a contract. In addition, patients with a contract seemed to be closer to the ideal of living a \"normal\" life and being a \"normal\" person.\n\nABSTRACT.CONCLUSION:\nThe results indicate that the patients who had a contract for self-referral had come further in the recovery process and should possibly be better off during treatment.\n\nABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL:\nThe online version of this article (doi:10.1186/1472-6963-14-347) contains supplementary material, which is available to authorized users.\n\nBODY.BACKGROUND:\nMental health services have changed considerably during the last decades. In Norway, an increasing number of mental health service users live outside institutions and receive health care from community health services, ambulatory service teams or outpatient treatment in community mental health centers\n[1]. The focus has also shifted from perceiving severe mental illness as a permanent diagnosis towards focusing on what facilitates full or partial recovery\n[2]. Recovery is described as a process where the illness's negative impact is minimized by focusing on the patient's strengths and interests\n[3]. This shift has also moved the delivery of mental health services towards less institutional treatment and more integration in the patient's community\n[4]. 
Ambulatory health services and outpatient treatment are considered to be important and cost-efficient means of strengthening the service user's independence and responsibility for the treatment\n[4]. Outreach mental health services via community teams are widespread in the US and in Europe, especially for severe and chronic mental problems such as psychosis and schizophrenia\n[5], and are also provided in crisis\n[6]. It has been argued that institutional hospitalization should be reduced, and that when inpatient treatment is necessary, restraints should be minimized and patient control and self-determination strengthened\n[4]. Health service users have described that services at home, for instance from ambulatory acute treatment teams, have increased feelings of being in control and being seen and heard, making the users feel more responsible for the treatment\n[7]. Several qualitative studies have shown that the experience of control, autonomy and power to make choices is connected to improved quality of life\n[8], thereby underlining the importance of such experiences. Although crisis resolution is increasingly provided in the service user's home\n[6], periodically inpatient treatment is still necessary for patients with severe mental illness\n[9]. Mental health service users have emphasized that services should be flexible and provide safe and predictable care in phases with strong symptoms, as well as facilitating the user's responsibility and ability to cope in phases with fewer symptoms\n[10]. This emphasis suggests a combination of services such as home-based care to strengthen coping in good phases, and, periodically, inpatient care in phases with more symptoms. Even when inpatient treatment is necessary, it is essential to encourage and maintain the patient's control and responsibility. One way to achieve this is to give patients the opportunity to self-refer to inpatient treatment. The intention behind self-referral inpatient treatment is, in addition to reduce health service costs, to empower the service user and thereby improve symptoms as well as increase responsibility, coping, and quality of life. Giving service users the opportunity to self-refer provides the users with decision-making power during severe mental illness, and should therefore transfer responsibility and power to the service users. Several mental health hospitals in Norway provide self-referral to short inpatient treatment for some service users\n[11–14]. Patients can refer themselves to short inpatient treatment when they feel that this is necessary. Such hospitalization is meant to be in line with the service user's wishes and needs, and the service user is considered the best judge of when inpatient treatment is necessary. The proposed effects of self-referral inpatient treatment include reduced numbers of inpatient hospitalizations, fewer acute hospitalizations in crisis, more appropriate use of inpatient treatment, improved quality of life, and improved coping\n[11–14]. So far, there is no robust evidence for these effects. Internal evaluations have shown that the total amount of inpatient treatment can be reduced when service users have the opportunity to self-refer. One of these evaluations (published in Norway) indicated that the opportunity to self-refer for up to five days of inpatient treatment increased the total frequency of hospitalizations but decreased the number of hospitalization days by over 30%. 
In addition, the total amount of coerced hospitalizations was reduced by 50% six months after signing the contract\n[13]. Service users have also reported satisfaction with the opportunity to self-refer and have emphasized that services became more predictable, increased the feeling of safety and reduced symptoms\n[11]. Moreover, patients have reported that self-referral gave them more choices and a stronger feeling of control and autonomy\n[11, 12] A randomised controlled study is currently investigating the effect of self-referral inpatient treatment in a community health center in Mid-Norway. This paper describes a qualitative study nested into this randomised controlled trial. The aim was to explore the patients' experiences with the intervention. It is reasonable to believe that self-referral to inpatient treatment will influence the patients' experience of control and autonomy. Exploring the patients' experiences would provide valuable knowledge about a new and growing treatment approach in mental health services. No studies thus far have compared the experiences of patients who have the opportunity to self-refer with patients who receive treatment as usual.\n\nBODY.BACKGROUND.AIM:\nThe aim of the study was to compare the experiences of service users who had signed a contract for self-referral to inpatient treatment with the experiences of users who received treatment as usual, four months after randomization.\n\nBODY.METHODS.DESIGN AND SETTING:\nThis was a qualitative interview study involving patients at a community mental health center in Mid-Norway. The study was nested into a randomised controlled trial (RCT) investigating the effect of a contract for self-referral inpatient treatment on the numbers of hospitalizations and hospitalization days, mental health symptoms and functioning, quality of life and coping (trial registration no. NCT01133587). Patients randomised to the intervention group in the RCT signed a contract with the hospital. This contract gave the patients authority to refer themselves to up to five days of inpatient hospitalization when they felt that such treatment was necessary. In addition to any hospitalizations initiated by health personnel, patients with a contract could thus admit themselves without contacting their doctor, duty doctor or emergency department. The contract was an internal written agreement between the patient and the mental health center and had legal status as such. The contract was an addition to the regular treatment offered and did not replace the patients' regular treatment plan. Patients randomised to the control group received treatment as usual and were offered a contract for self-referral after taking part in the RCT.\n\nBODY.METHODS.PARTICIPANTS AND RECRUITMENT:\nInclusion criteria for the RCT were a diagnosis of psychosis or bipolar disorders with or without abuse problems. All participants were in need of long-term treatment in the hospital unit and the psychiatric community service. Health professionals with treatment responsibilities suggested participants for the randomized trial. All participants in the RCT were asked to participate in individual qualitative interviews.\n\nBODY.METHODS.DATA COLLECTION:\nTwenty-five patients were interviewed four months after randomisation - 11 in the intervention group and 14 in the control group. The interviews were conducted between November 2010 and December 2012, either in the hospital or in the patient's home. Three of the authors (MR, IEOM and GHE) conducted the interviews. 
All are psychiatric nurses and have extensive clinical experience. A research associate with extensive user experience participated in some of the interviews as well. The interviews lasted between 20 and 60 minutes. Themes in the interviews were taken from a pre-defined, semi-structured interview guide with open-ended questions (Additional file\n1). The main questions in the interviews related to how the patients experienced their current mental health problems and everyday life. Typical questions were: \"How is your life at present?\" and \"How is it for you when you are feeling ill?\" All participants were invited to elaborate on what they did to prevent symptoms and maintain stability, on their experiences of previous and current symptoms and on current or previous experiences with mental health services. To avoid influencing the participants during the RCT and to ask the same questions of patients in both groups, no participants were asked directly about their experiences with the interventions. The interview questions were therefore general and focused on the participants' overall situation at the time. The interventions thus became topics in the interviews only if the participants brought them up themselves. All interviews were transcribed verbatim, although redundant words and pauses were deleted and local dialect was translated to written language. The quotes used in the result section were translated into English by the first author (MBR) and verified by the rest of the author group.\n\nBODY.METHODS.ANALYSIS:\nThe analysis was conducted by a group consisting of a clinical psychologist, a researcher with extensive user experience, three psychiatric nurses, and an academic researcher on public health. The aim of this study emerged during an initial reading of the transcripts with a different research question in mind and with the intention to analyze the interviews across group allocation. Since there seemed to be several differences between the descriptions of the participants in the two groups, we chose to pursue these differences further. The authors were at that point already aware of the participants' group allocation, and we did not add new or \"blinded\" authors to the group. The interviews with patients from the control group (treatment as usual) were read and coded first, from which a list of codes (meaning units) and main themes were made. The coding process was conducted during several meetings in the author group. Then, the interviews with the patients from the intervention group (contract for self-referral) were coded. The code list and main themes from these interviews were compared with the codes and themes from the control patients. The main themes from the interviews, common for both groups, were:\nExperiences of disease and symptoms\nStrategies of coping (shielding; structure: food, sleep, medication; distractions; social life; thinking strategies)\nIdentity as a person with an illness\nFeeling safe\nRelationships (family, friends)\nWork and studies\nDignity and power\nValues in life\nA normal life\nFour differences between the intervention group and the control group were found, which were then outlined and discussed in the author group. After the group analysis and discussion, the first author (MBR) recoded all interviews and cross-checked the results with the transcripts to challenge the findings and look for different interpretations. The process confirmed the findings from the group's analysis. Quotes from the interviews were chosen to illustrate and substantiate the results. 
Quotes used in the result presentation are marked with the speaker's gender and study group (intervention or control). The final results were discussed and approved by the author group during the writing process.\n\nBODY.METHODS.ETHICS:\nThe regional committee for medical ethics in Central Norway approved the study, which was registered with the Norwegian Data Inspectorate. All participants were given verbal and written information about the study, and signed a consent form before taking part in interviews. All participants were given the opportunity to make contact with the staff at the department after the interview.\n\nBODY.RESULTS:\nTwenty-five patients participated in this study: 11 had a contract for self-referral to treatment (intervention group) and 14 received treatment as usual (control group). Nine of the participants were female. The mean age was 41 years (range 21-60 years). The participants had one or several diagnoses, including psychosis, bipolar disorders and substance use. Eight of the participants had a job or were studying. The sample is described in Table 1.\n\nTable 1. Characteristics of participants (values given as total sample; intervention, contract for self-referral; control, treatment as usual)\nNo. of participants: 25; 11; 14\nGender, females: 9; 6; 3\nAge, mean (range): 41 (21-60); 44 (31-58); 33 (21-60)\nDiagnosis (some have more than one):\n  Psychosis (ICD-10; F 20-29): 19; 7; 12\n  Bipolar disorders (ICD-10; F 30-39): 6; 4; 2\n  Substance use (ICD-10; F 10-19): 6; 3; 3\n  Others (ICD-10; F 60-61): 2; 1; 1\nNo. of years since receiving diagnosis, mean (range): 9 (2-23); 10 (4-19); 8 (2-23)\nEmployment:\n  Paid work: 3; 2; 1\n  Disability benefits: 14; 7; 7\n  Sick leave: 3; -; 3\nStudies or serious plans to study: 5; 2; 3\n\nThe results present four aspects where the experiences of patients with a contract for self-referral appeared different from the experiences of patients who received treatment as usual. Patients with a contract expressed stronger confidence in strategies to cope with mental illness and seemed to apply more active cognitive strategies. Patients with a contract also had fewer expressions of resignation, hopelessness and powerlessness than patients who received treatment as usual. In addition, patients with a contract appeared to be closer to the ideal of living a \"normal\" life and being a \"normal\" person. The differences between the groups are described in light of what was common for all participants.\n\nBODY.RESULTS.STRONGER CONFIDENCE IN COPING STRATEGIES:\nAll participants described several strategies to prevent and cope with symptom aggravation. They emphasized the importance of living a structured life with regular meals, sleep and medication, as well as avoiding smoking, excessive alcohol and drugs. Participants also talked about strategies to shield and distract themselves. Common shielding strategies were to be quiet, to be alone, to relax, and to avoid an excess of impressions, such as too many people or TV programs. Common distraction strategies were to watch TV, play computer games, read books, exercise and work, all of which could help curb anxiety, voices and visions. The participants also described that being with people or animals could both act as a distraction and provide meaningful relationships and experiences. These interactions could help to prevent and diminish symptoms. Overall, the participants described that a balance between activity and rest was necessary to cope with the mental illness. P: I have discovered that the more I talk to other people... the more stable I am. It is a sort of self-treatment or self-medication. 
To actively seek contact with other people every day. Some days, when I can't get in touch with anybody I walk down to the city and look at people. I do that because I don't want to be alone. That's a bad idea. Other days I have a strong need to be alone. Then I withdraw, turn off the phone and don't meet a single person during the whole day. I sit all day and watch TV-series or DVDs or play Playstation games or read a book. And these days come and go. There is no pattern – some days are like this and some days are like that.[...] I: So books help you?P: Yes, it is good to have something where things happen. [...] When I read I disappear a little. I disappear a little from this world. I just dream myself into the book. It is very relaxing. And it is a good way to ignore the things I don't want to think about... things that are stressful to think about.(Male, Control group) Although all participants identified coping strategies, there were some differences between the ways patients with and without a contract for self-referral described them. Patients with a contract voiced stronger self-confidence in applying coping strategies, both to prevent symptoms and to handle symptom aggravation. These patients also expressed greater confidence in keeping a good balance between activity and rest. Their descriptions appeared to include more experiences in applying and mastering various coping strategies. While patients without a contract described fairly common strategies to cope and prevent symptoms, patients with a contract appeared to have stronger and more independent creativity when describing how they prevented and coped with poorer phases. Some of the patients with a contract also described new and different ways to cope with their illness. Some of these strategies contradicted the advice usually given by professionals, such as preferring to be alone instead of having an extended social life. One patient, for example, expressed a deep satisfaction with having a certain distance from other people: P: I like to eat alone. I do. I have discovered that I like to do some things on my own. [...]I have two friends at present and we have a good distance... a good distance. We text each other and meet maybe once a week, and I really appreciate that.I: The friendship?P: Yes, the distance when we meet so rarely. We are in touch on the phone... to state that we are fine. [...] I read somewhere that friendship is what you think about the other person [...] having a presence even though they are not present.(Male, Intervention group)\n\nBODY.RESULTS.COGNITIVE STRATEGIES TO INFLUENCE THINKING PATTERNS:\nAll participants reported that they experienced anxiety, mood swings and negative thoughts. Almost all participants also talked about disturbing voices and visions, describing cognitive techniques to cope with these symptoms. In addition, all patients relayed that their thinking was associated with control over their situation. Despite these commonalities, comparison showed that the participants used these techniques differently. Patients without a contract talked mostly about cognitive strategies as a means of adapting to the illness and helping them to do their best. They used words such as \"observing\", \"understanding\", and \"recognizing\" symptoms and emphasized accepting the illness. Their main focus centered on being conscious about their thoughts and views and on achieving more knowledge about the nature of the symptoms in order to decrease anxiety when those symptoms rose. 
One participant described cognitive strategies such as observing and recording the physical surroundings during strong anxiety and claustrophobia: P: I am supposed to observe and notice details in rooms and such... when these things happen. I don't know if it has helped so much. It has become worse and worse. And the feeling is very strong when I am in the midst of it. I almost have to vomit. It affects me physically.(Female, Control group) Meanwhile, patients with a contract talked less about recognizing, understanding and accepting. Instead, they used words such as \"power\", \"control\" and \"strength\". They also displayed stronger ability to influence their thinking patterns through the use of cognitive strategies. Furthermore, these patients talked more about being active and working with themselves in order to grow, to break barriers and to take opportunities. Some of them questioned if the voices and views were true, or whether there were other ways to interpret and understand them: P: [...] I used to think that everybody knew more than me about certain things. I don't anymore. [...] It's just confusing... and that's a difficult state. You become unsure of yourself and that's heavy. It's not that I know so much, but what do other people know? [...] I read for instance in a book about what happens after death. And I said to myself: How can he [the author] know more about that than me? Has he been there - or is he sitting on this side and imagining what happens afterwards? These simple things... I use tactics like these. Then it is not so bad. [...]It still annoys me when someone comes and tells me I can never get rid of it [the illness]. It will probably never be completely over, but... [...] There are limits to what I don't know [laughs]. It is important to believe in the experiences you have. Others might have similar or opposite experiences, but they are not necessarily stronger or more correct.(Male, Intervention group) The results also indicated that patients with a contract had more experience and more control over the cognitive techniques, as well as a deeper understanding of the relationship between thoughts and feelings. They seemed to be more focused on the possibility of choosing cognitive strategies to influence their thinking. While patients without a contract described some efforts to manage being in a painful situation, patients with a contract presented a wider variety of techniques to influence their mental state. This enabled them to change their ways of thinking and thus influenced their overall situation. P: Nowadays, I read a lot of self-help books. I used to be very skeptical... they had very simple solutions, but now I find them developing.I: What do you get from them?P: I guess they speak to my own problems. You work with yourself while you read. For example to shut out the past... that is described in all self-help books. To have bulkheads forwards and backwards... To think less about worries for the future and self-reproach about the past. That resonates with me... and then the brain works with it while I read.(Male, Intervention group)\n\nBODY.RESULTS.RESIGNATION AND POWERLESSNESS:\nAll participants talked about the experience of being ill. Most of them described experiences of psychosis, explaining how emotions were experienced as bodily phenomena and how the symptoms were placed in their lives. Most of the participants also talked about negative and painful experiences, both with previous and current illness and with previous and current treatment. 
There seemed, however, to be a difference in the way that participants in the two groups talked about these experiences. Patients in the control group expressed more resignation and passivity, giving the impression that they were stuck in these experiences. They used phrases such as \"yielding\", \"enduring\", \"resigning\", \"giving in\" and \"getting used to\". Their descriptions included being powerless when facing their lives and problems, a stronger need for care and being more dependent on health services. These patients gave several examples of experiencing hopelessness and powerlessness. Although there were examples of interviews where feelings of helplessness and resignation were less prominent, the following examples represent very common descriptions for the control group participants: P: I have accepted the situation I am in. I think I have resigned a little. I have withdrawn from the things that I will never experience.I: Like what?P: Well... [sighs]... my life situation, for instance. I have for instance not had a partner in years... and it is quite stressful because you get very lonely. And after some time you come to a place where you realize that the hopelessness has come to stay.(Male, Control group) Patients who had a contract for self-referral, on the other hand, discussed courage and strength and talked about conscious choices and opinions. Their descriptions reflected being in a better place in their lives, and they seemed more active and less dependent on others. These patients expressed that they had influence over their overall situation. P: Well... how do you move on? It is to work... work with yourself. To locate the problems. Some would say to accept the problems, but that sounds a bit defensive to me. Maybe you get there... when you work with your problems they become more distant. You have to learn to like yourself until you get so sure that you are not put off by any little comment... like I used to be. It was... I had a word that emerged in my head during the last hospitalization: \"Show off\". That I was a show off. Like I knew it all, and wanted to have a piece of everyone else. But then I asked a friend of mine who was admitted during the same period: Do you think I am a show off? And she laughed a little. That is a way to get your problems more distant. Is it really like this? Is it possible? To be confident... to be sure... that you throw it around instead.(Male, Intervention group) There were also a few examples of patients with a contract who seemed to have achieved some distance from their previous bad experiences, as if they had success in moving on with their lives. P: And time heals all wounds. That's how it is. That's hard to say when you are ill, but when you are recovering you see that it is a one way street. Even though you might be temporarily dumped or feel bad and upset.I: Like a crisis?P: Yes, the crisis lasts around two hours. It's like that. I don't know how other people feel, but when I am ill I feel it could last forever. I am scared that it will be a permanent death row, but it doesn't. It takes shorter and shorter time before I am well again. And that is reassuring.[...] The problems I have aren't smaller or steeper, only shorter. It feels like I can put them behind me. Previously I could have days and weeks with the same problems and feel thrown around like a ball. 
Now it takes three or four hours.(Male, Intervention group)\n\nBODY.RESULTS.BEING NORMAL AND LIVING A NORMAL LIFE:\nParticipants in both groups talked about the importance of having a normal life, doing normal things, and feeling like everyone else. A common theme was being identified with the illness, and many emphasized that being ill was not their sole identity. The patients reflected on their own status as persons with mental illness, expressing a desire for an identity beyond the role of a patient. P: I have a diagnosis, and that makes me ill – in medical terms. But it is a difference between that and perceiving yourself as an ill person. Cause I am only a guy. I'm just me.[...] I would like to build a different fundament to base the rest of my life on. To build my whole life on the foundation that I am an ill person in a treatment process... that I have an illness that defines me... I feel that it will be a very unstable life. It will fall apart. So I have decided to find the most stable foundation possible... that's what I'm trying to do.(Male, Control group) An important difference between the patient groups was that patients with a contract for self-referral had several descriptions which seemed much closer to the ideal of living a normal life and being a normal person. They used active phrases when describing their everyday life, such as \"I know\", \"I will\", and \"I can\". They also acknowledged \"the good life\" and seemed to experience more closeness to a normal everyday life. Patients who had a contract talked about the normal life with more confidence and seemed more independent of social expectations and acceptable values. I: What is the point with having an ambulatory supervisor?P: You press an extra button and get feedback on things that might... that you do well or that people notice... that there are some positive things too. They emphasize it to strengthen your self-confidence. The self-confidence isn't very strong. You get set back from being admitted. I was away for nine months. I was admitted for nine months. Afterwards you are handicapped. While I was admitted I received my meals every day... some places they helped me get up in the morning. I was helped and supported all the time. And that was difficult to give up. But it is very good to have control over your own life. Cooking and having a nice time... to light candles and... to enjoy living.(Female, Intervention group) As part of a normal life, most participants wanted to study or work, and work was associated with identity and a good life. Many of the patients who received treatment as usual highlighted studies and education as a goal which might be difficult to reach, and some of them expressed a great deal of uncertainty regarding the possibility of coping with work on an everyday basis. P: I had my job until I realized that it was enough and then I quit.I: You quit?P: Yes. My contract expired... and at the same time I told them I had to quit.I: How was that for you?P: I stayed on the sofa for a week doing nothing. And then I woke up again. It felt like... like I recharged my strength.(Female, Control group) In contrast, many of the patients who had a contract for self-referral seemed to have progressed beyond having education or work as a distant goal. Several had taken the initiative to commence studies and had concrete plans for completion, and they did not mention having to interrupt the progress of their studies due to illness as a problem. 
They focused instead on the courage and strength necessary to continue and complete. P: I found out that the deadline for applications was expired, but I called today and they told me it was fine after all. I have a week to make an application for a [...] degree to finish the last two years... [...] If I can manage to work with art in some way, or with doing odd jobs. It is very exciting and I look forward to writing things this week and to look through my previous work.(Female, Intervention group)\n\nBODY.RESULTS.EXPERIENCES WITH A CONTRACT FOR SELF-REFERRAL OR TREATMENT AS USUAL:\nAlthough the patients were not asked directly about their experiences with the intervention (contract for self-referral or treatment as usual) some of the patients raised the subject on their own accord. Patients who had a contract for self-referral expressed a great deal of authority and strength when talking about the contract, their contact with the health services and what they needed in the future. Some said that the contract gave them more influence and that this gave them a sense of control. Some also described that they felt safe since they had the option to self-refer at any time: P: I believe in the project with self-referral. It is a pillar... where you know that if something happens which is really bad... You know that you can come here and get a break. It feels safe.(Female, Intervention group) Meanwhile, a few of the patients who did not have a contract for self-referral were very dissatisfied: P: I am very disappointed to be in the control-group [receiving treatment as usual]. I am done with the ordinary mental health services. I just want the self-referral inpatient contract.(Male, Control group)\n\nBODY.DISCUSSION.STRENGTHS AND LIMITATIONS:\nTwo researchers with extensive user experience in mental health services participated in different parts of this study. One participated in the data collection and took part in the initial analysis, while the other (DB) participated in the analysis process and in writing the manuscript. User participation during the research process strengthens the authenticity of the results and helps ensure that the users' voices are heard. Since this study was nested within a randomised controlled trial the interviews had to avoid direct questions about the intervention and focus instead on the participants' general perceptions of their current situation. This made the data material rich and focused on everyday life, which strengthens the quality of the findings. Although the interviewers were advised to avoid direct questions about the intervention, several of the participants talked about having or not having a contract. Some participants who had been randomised to treatment as usual were disappointed, and the interviewers had to respond to this. This feeling of disappointment may have led to negative expressions from those participants. Effort was made to not lead the participants to overstate the positive experiences with the intervention so as to please the interviewers, but the effectiveness of this effort is unclear. Although the sample is fair-sized for a qualitative interview study, the results might not be valid for other samples in other areas of health services. The interviews were mostly conducted by researchers who have worked as psychiatric nurses at the hospital. The nurses were chosen as interviewers because they were very competent health professionals and familiar to the patients. 
They were thus able to ensure that patients were taken good care of if they had adverse reactions during or after the interview. In addition, some participants stated that they would not have participated in interviews with unfamiliar interviewers. However, this familiarity might also have led to more favourable evaluations of the hospital's services and the intervention than if the interviews had been conducted by independent researchers. Moreover, since the participants in the two different groups were treated in the same health service organisation, they had the possibility to interact and discuss their experiences. Making comparisons based on qualitative interview data is a challenging undertaking. Since the aim of the study emerged during an initial reading of the transcripts, across group allocation, the authors were not \"blind\" before the more \"comparative\" analysis. We consider it a strength to pursue the initial surprising findings in the data material. However, at that point we should have included new and \"blind\" authors in the following analysis, as that would have strengthened the analysis process and the results. We analysed the interviews with patients in the control group before comparing the main themes from the control patients with the interviews with those of the intervention group. The differences between the groups were cross-checked with the data material and discussed in the analysis group to reduce the risk of overstating the differences. The differences presented in the results section are based on general impressions when reading and analysing the interview transcripts and must be interpreted with caution. Considering that several specific words and phrases were more present in either of the two groups, we could have used a computer-based system to compare the occurrences, which could have in turn supported the findings. Not all participants in the RCT participated in interviews after four months. Despite the initial randomisation, those agreeing to participate in interviews were self-selected. Direct comparison is therefore difficult. In addition, it is not clear whether the differences were due to the intervention (self-referral to treatment) or to other factors. A comparison of the groups on demographic variables (Table \n1) also shows that there are some differences between the groups: the participants in the control group were younger, had a higher rate of psychosis diagnosis, and had a higher proportion of males. The differences between the groups in terms of age, diagnosis and gender may have influenced the results of this study.\n\nBODY.DISCUSSION.DISCUSSION:\nThe results showed that many of the patients with a contract for self-referral expressed more confidence in strategies to cope with mental illness, and they seemed to apply more active cognitive strategies. Patients with a contract also expressed less resignation, hopelessness and powerlessness than patients without a contract, appearing to be closer to the ideal of living a \"normal\" life and being a \"normal\" person. Coping strategies are important in the management of mental illnesses\n[15, 16]. Such strategies are used to cope with life stressors, to ameliorate symptoms and to prevent periods of symptom aggravation. Most persons who suffer from psychosis apply one or several strategies to cope with mental symptoms\n[17]. 
Although there is little knowledge about the usefulness of specific coping strategies, research shows that having several strategies at hand is more effective than relying on just one, although there is not one specific coping strategy which is more effective than the others\n[17]. In the present study, the patients who had a contract for self-referral described coping strategies which appeared more advanced, especially the cognitive strategies they used to influence their thinking patterns. Strategies to manage mental health symptoms are often taught by professionals during treatment\n[18]. In addition to these, most persons suffering from mental health illnesses develop personal coping strategies\n[18]. In the present study, all patients described coping strategies that they had learned, and some of the patients described self-developed strategies. Compared to patients who received treatment as usual, patients with a contract for self-referral described new and different coping mechanisms. Previous evaluations of self-referral have shown that although the total frequency of hospitalizations increased, the number of hospitalization days in fact decreased by more than 30%. Furthermore, the patients reported that a contract for self-referral lessened the burden of having to \"convince\" professionals that hospitalization was necessary and to be \"sick enough\" to need inpatient treatment\n[11, 13]. In other evaluations, patients have also expressed that the opportunity to self-refer gave them safety and confidence to make new efforts and to challenge old behavioral patterns\n[12]. This might help account for the differences in coping strategies we found in the present study. Using coping strategies is in line with the recovery model. According to this model, a person with long-term mental illness takes an increasingly active role in managing the impact of the mental illness as they move towards recovery\n[19]. Coping strategies can be divided into reactive, anticipatory, preventive and proactive strategies\n[16], and can be placed on a recovery continuum, where proactive strategies are used by persons who have come farthest in the recovery process. One of the findings in the present study was that patients with a contract for self-referral described more confidence in applying coping strategies than patients without a contract. This indicates that the patients with a contract had more experience in successfully using coping strategies and thus had come further in the recovery process. They talked more about being active, growing, breaking barriers and taking advantage of opportunities. This finding is in line with previous research, where empowerment has been described as the most important factor for recovery\n[18]. The patients who had a contract for self-referral also described some coping strategies which sometimes ran counter to the advice typically given by professionals. This indicates that patients who have had the opportunity to self-refer to treatment felt more in control and more able to contradict or neglect advice from the mental health services. This might be due to the opportunity to be in charge of decisions regarding treatment which could increase autonomy and the feeling of being in control. In the present study, the participants contrasted the identity of being an ill person with that of being a normal person. The term \"illness identity\" has been described as a set of roles and attitudes that persons with a mental illness have developed\n[2]. 
The participants in the present study related the perception of being a person with an illness to the possibility of being a normal person capable of leading a normal life. Perceiving oneself as an ill person is closely linked to feelings of stigmatization, which is connected to low self-esteem and low self-efficacy\n[20, 21]. The perception of an illness identity has been described as an important part of the recovery process\n[2]. Some have linked illness identity to hope and self-esteem, which is in turn linked to coping, engagement, and social interaction\n[2]. Coping, engagement and social interaction subsequently have an impact on vocational outcomes and symptom severity. Authors have argued that changing illness identity is an important step in improving mental health illness and functioning\n[2]. The results from the present study indicate that patients with a contract for self-referral might have a more clarified relationship towards their illness identity than patients without a contract, therefore suggesting that the former group may have come further in the recovery process. That said, it is not self-evident that having a contract for self-referral to treatment necessarily impacts illness identity, especially since the present study is based on interviews only four months after signing a contract for self-referral. It would be important to further investigate whether an opportunity to self-refer leads to increased autonomy, sense of control and decision-making power, all of which might subsequently impact the perception of illness identity. Another important finding in the present study was that the patients who had a contract for self-referral tended to express less powerlessness, resignation and hopelessness than those who received treatment as usual. The patients who received treatment as usual referred to their problems as something that they were stuck with, and they described themselves as being dependent on care from the health services. This difference in focus might be related to the finding that patients in the intervention group had stronger confidence in using coping strategies. Some have argued that stronger hope and self-esteem might lead to less avoidance, more problem-oriented strategies and more social interaction\n[2]. Previous research has shown that persons with more hope and insight in their illness use more problem-oriented and less avoidant coping strategies\n[22] and that poor confidence is connected to using avoidant strategies\n[23]. These findings are in line with the present study, which also illustrates Bandura's idea of self-efficacy\n[24]. Self-efficacy is explained as a personal capability and distinguishes one's belief in capability from self-esteem and whether you like yourself\n[25]. In the present study, we found that patients with a contract for self-referral talked more about living a normal life with work and studies. They seemed to have a closer relationship to what they called a \"normal life\" and a somewhat more active attitude towards coping, thereby possibly exhibiting a greater sense of control. Davidson and Roe argue that recovery has to be pursued actively by the patient, and that not everybody does this\n[3]. The findings in the present study indicate that patients who have had the opportunity to decide whether and when they needed referral to inpatient treatment are more actively pursuing recovery, both from and in mental illness. 
In this context signing a contract for self-referral seems to be a rehabilitation approach that encourages and supports activity, self-determination, control, and participation.\n\nBODY.CONCLUSION:\nPatients with a contract for self-referral to treatment appeared to have a wider range of coping mechanisms, more self-developed and advanced strategies, and stronger confidence in applying the strategies. The findings indicate that having the opportunity to self-refer might help strengthen autonomy, sense of control, and self-determination. According to research on coping strategies, these patients could be better off both during treatment and in the recovery process. In a similar vein, patients with a contract voiced less resignation, hopelessness and powerlessness and appeared to be closer to experiencing a \"normal\" life.\n\nBODY.ELECTRONIC SUPPLEMENTARY MATERIAL:\nAdditional file 1:\nInterview guide - Self-referral inpatient treatment.\n(DOCX 29 KB)\n\n**Question:** Compared to Inpatient treatment what was the result of Self-referral treatment on Self-confidence in applying methods to deal with mental illness?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
1035
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: How do patients with severe mental diagnosis cope in everyday life - a qualitative study comparing patients’ experiences of self-referral inpatient treatment with treatment as usual?\n\n ABSTRACT.BACKGROUND:\nSeveral hospitals in Norway provide short self-referral inpatient treatment to patients with severe mental diagnosis. No studies have compared the experiences of patients who have had the opportunity to self-refer to inpatient treatment with patients who have received treatment as usual. This qualitative study was nested within a randomised controlled trial investigating the effect of self-referral to inpatient treatment. The aim was to explore how patients with severe mental diagnosis coped four months after signing a contract for self-referral, as compared to patients receiving treatment as usual.\n\nABSTRACT.METHODS:\nData was collected using qualitative individual interviews with patients with severe mental diagnosis, conducted four months after being randomised either to a contract for self-referral (intervention group) or to treatment as usual (control group).\n\nABSTRACT.RESULTS:\nTwenty-five patients participated in interviews - 11 from the intervention group and 14 from the control group. Results four months after randomisation showed that patients with a contract for self-referral appeared to have more confidence in strategies to cope with mental illness and to apply more active cognitive strategies. Patients with a contract also expressed less resignation, hopelessness and powerlessness than patients without a contract. In addition, patients with a contract seemed to be closer to the ideal of living a \"normal\" life and being a \"normal\" person.\n\nABSTRACT.CONCLUSION:\nThe results indicate that the patients who had a contract for self-referral had come further in the recovery process and should possibly be better off during treatment.\n\nABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL:\nThe online version of this article (doi:10.1186/1472-6963-14-347) contains supplementary material, which is available to authorized users.\n\nBODY.BACKGROUND:\nMental health services have changed considerably during the last decades. In Norway, an increasing number of mental health service users live outside institutions and receive health care from community health services, ambulatory service teams or outpatient treatment in community mental health centers\n[1]. The focus has also shifted from perceiving severe mental illness as a permanent diagnosis towards focusing on what facilitates full or partial recovery\n[2]. Recovery is described as a process where the illness's negative impact is minimized by focusing on the patient's strengths and interests\n[3]. This shift has also moved the delivery of mental health services towards less institutional treatment and more integration in the patient's community\n[4]. 
Ambulatory health services and outpatient treatment are considered to be important and cost-efficient means of strengthening the service user's independence and responsibility for the treatment\n[4]. Outreach mental health services via community teams are widespread in the US and in Europe, especially for severe and chronic mental problems such as psychosis and schizophrenia\n[5], and are also provided in crisis\n[6]. It has been argued that institutional hospitalization should be reduced, and that when inpatient treatment is necessary, restraints should be minimized and patient control and self-determination strengthened\n[4]. Health service users have described that services at home, for instance from ambulatory acute treatment teams, have increased feelings of being in control and being seen and heard, making the users feel more responsible for the treatment\n[7]. Several qualitative studies have shown that the experience of control, autonomy and power to make choices is connected to improved quality of life\n[8], thereby underlining the importance of such experiences. Although crisis resolution is increasingly provided in the service user's home\n[6], periodically inpatient treatment is still necessary for patients with severe mental illness\n[9]. Mental health service users have emphasized that services should be flexible and provide safe and predictable care in phases with strong symptoms, as well as facilitating the user's responsibility and ability to cope in phases with fewer symptoms\n[10]. This emphasis suggests a combination of services such as home-based care to strengthen coping in good phases, and, periodically, inpatient care in phases with more symptoms. Even when inpatient treatment is necessary, it is essential to encourage and maintain the patient's control and responsibility. One way to achieve this is to give patients the opportunity to self-refer to inpatient treatment. The intention behind self-referral inpatient treatment is, in addition to reduce health service costs, to empower the service user and thereby improve symptoms as well as increase responsibility, coping, and quality of life. Giving service users the opportunity to self-refer provides the users with decision-making power during severe mental illness, and should therefore transfer responsibility and power to the service users. Several mental health hospitals in Norway provide self-referral to short inpatient treatment for some service users\n[11–14]. Patients can refer themselves to short inpatient treatment when they feel that this is necessary. Such hospitalization is meant to be in line with the service user's wishes and needs, and the service user is considered the best judge of when inpatient treatment is necessary. The proposed effects of self-referral inpatient treatment include reduced numbers of inpatient hospitalizations, fewer acute hospitalizations in crisis, more appropriate use of inpatient treatment, improved quality of life, and improved coping\n[11–14]. So far, there is no robust evidence for these effects. Internal evaluations have shown that the total amount of inpatient treatment can be reduced when service users have the opportunity to self-refer. One of these evaluations (published in Norway) indicated that the opportunity to self-refer for up to five days of inpatient treatment increased the total frequency of hospitalizations but decreased the number of hospitalization days by over 30%. 
In addition, the total amount of coerced hospitalizations was reduced by 50% six months after signing the contract\n[13]. Service users have also reported satisfaction with the opportunity to self-refer and have emphasized that services became more predictable, increased the feeling of safety and reduced symptoms\n[11]. Moreover, patients have reported that self-referral gave them more choices and a stronger feeling of control and autonomy\n[11, 12] A randomised controlled study is currently investigating the effect of self-referral inpatient treatment in a community health center in Mid-Norway. This paper describes a qualitative study nested into this randomised controlled trial. The aim was to explore the patients' experiences with the intervention. It is reasonable to believe that self-referral to inpatient treatment will influence the patients' experience of control and autonomy. Exploring the patients' experiences would provide valuable knowledge about a new and growing treatment approach in mental health services. No studies thus far have compared the experiences of patients who have the opportunity to self-refer with patients who receive treatment as usual.\n\nBODY.BACKGROUND.AIM:\nThe aim of the study was to compare the experiences of service users who had signed a contract for self-referral to inpatient treatment with the experiences of users who received treatment as usual, four months after randomization.\n\nBODY.METHODS.DESIGN AND SETTING:\nThis was a qualitative interview study involving patients at a community mental health center in Mid-Norway. The study was nested into a randomised controlled trial (RCT) investigating the effect of a contract for self-referral inpatient treatment on the numbers of hospitalizations and hospitalization days, mental health symptoms and functioning, quality of life and coping (trial registration no. NCT01133587). Patients randomised to the intervention group in the RCT signed a contract with the hospital. This contract gave the patients authority to refer themselves to up to five days of inpatient hospitalization when they felt that such treatment was necessary. In addition to any hospitalizations initiated by health personnel, patients with a contract could thus admit themselves without contacting their doctor, duty doctor or emergency department. The contract was an internal written agreement between the patient and the mental health center and had legal status as such. The contract was an addition to the regular treatment offered and did not replace the patients' regular treatment plan. Patients randomised to the control group received treatment as usual and were offered a contract for self-referral after taking part in the RCT.\n\nBODY.METHODS.PARTICIPANTS AND RECRUITMENT:\nInclusion criteria for the RCT were a diagnosis of psychosis or bipolar disorders with or without abuse problems. All participants were in need of long-term treatment in the hospital unit and the psychiatric community service. Health professionals with treatment responsibilities suggested participants for the randomized trial. All participants in the RCT were asked to participate in individual qualitative interviews.\n\nBODY.METHODS.DATA COLLECTION:\nTwenty-five patients were interviewed four months after randomisation - 11 in the intervention group and 14 in the control group. The interviews were conducted between November 2010 and December 2012, either in the hospital or in the patient's home. Three of the authors (MR, IEOM and GHE) conducted the interviews. 
All are psychiatric nurses and have extensive clinical experience. A research associate with extensive user experience participated in some of the interviews as well. The interviews lasted between 20 and 60 minutes. Themes in the interviews were taken from a pre-defined, semi-structured interview guide with open-ended questions (Additional file\n1). The main questions in the interviews related to how the patients experienced their current mental health problems and everyday life. Typical questions were: \"How is your life at present?\" and \"How is it for you when you are feeling ill?\" All participants were invited to elaborate on what they did to prevent symptoms and maintain stability, on their experiences of previous and current symptoms and on current or previous experiences with mental health services. To avoid influencing the participants during the RCT and to ask the same questions of patients in both groups, no participants were asked directly about their experiences with the interventions. The interview questions were therefore general and focused on the participants' overall situation at the time. The interventions thus became topics in the interviews only if the participants brought it up themselves. All interviews were transcribed verbatim, although redundant words and pauses were deleted and local dialect was translated to written language. The quotes used in the result section were translated into English by the first author (MBR) and verified by the rest of the author group.\n\nBODY.METHODS.ANALYSIS:\nThe analysis was conducted by a group consisting of a clinical psychologist, a researcher with extensive user experience, three psychiatric nurses, and an academic researcher on public health. The aim of this study emerged during an initial reading of the transcripts with a different research question in mind and with the intention to analyze the interviews across group allocation. Since there seemed to be several differences between the descriptions of the participants in the two groups, we chose to pursue these differences further. The authors were at that point already aware of the participants' group allocation, and we did not add new or \"blinded\" authors to the group. The interviews with patients from the control group (treatment as usual) were read and coded first, from which a list of codes (meaning units) and main themes were made. The coding process was conducted during several meetings in the author group. Then, the interviews with the patients from the intervention group (contract for self-referral) were coded. The code list and main themes from these interviews were compared with the codes and themes from the control patients. The main themes common for both groups were: Main themes from interviews: Experiences of disease and symptoms Strategies of copingShieldingStructure (food, sleep, medication)DistractionsSocial lifeThinking strategies Identity as a person with an illness Feeling safe Relationships (family, friends) Work and studies Dignity and power Values in life A normal life Four differences between the intervention group and the control group were found, which were then outlined and discussed in the author group. After the group analysis and discussion the first author (MBR) recoded all interviews and cross-checked the results with the transcripts to challenge the findings and look for different interpretations. The process confirmed the findings from the group's analysis. Quotes from the interviews were chosen to illustrate and substantiate the results. 
Quotes used in the result presentation are marked with the speaker's gender and study group (intervention or control). The final results were discussed and approved by the author group during the writing process.\n\nBODY.METHODS.ETHICS:\nThe regional committee for medical ethics in Central Norway approved of the study, which was registered with the Norwegian Data Inspectorate. All participants were given verbal and written information about the study, and signed a consent form before taking part in interviews. All participants were given the opportunity to make contact with the staff at the department after the interview.\n\nBODY.RESULTS:\nTwenty-five patients participated in this study, 11 had a contract for self-referral to treatment (intervention group) and 14 received treatment as usual (control group). Nine of the participants were female. The mean age was 41 years (range 21-60 years). The participants had one or several diagnoses, including psychosis, bipolar disorders and substance use. Eight of the participants had a job or were studying. The sample is described in Table \n1.Table 1\nCharacteristics of participants\nVariablesTotal sampleIntervention (contract for self-referral)Control (treatment as usual)No. of participants251114GenderFemales963Age (mean (range))41 (21-60)44 (31- 58)33 (21-60)Diagnosis (some have more than one)Psychosis (ICD-10; F 20-29)19712Bipolar disorders (ICD-10; F 30-39)642Substance use (ICD-10; F 10-19)633Others (ICD-10; F 60-61)211No. of years since receiving diagnosis (mean (range))9 (2-23)10 (4-19)8 (2-23)EmploymentPaid work321Disability benefits1477Sick leave3-3StudiesStudies or serious plans to study523 The results present four aspects where the experiences of patients with a contract for self-referral appeared different than the experiences of patients who received treatment as usual. Patients with a contract expressed stronger confidence in strategies to cope with mental illness and seemed to apply more active cognitive strategies. Patients with a contract also had fewer expressions of resignation, hopelessness and powerlessness than patients who received treatment as usual. In addition, patients with a contract appeared to be closer to the ideal of living a \"normal\" life and being a \"normal\" person. The differences between the groups are described in light of what were common for all participants.\n\nBODY.RESULTS.STRONGER CONFIDENCE IN COPING STRATEGIES:\nAll participants described several strategies to prevent and cope with symptom aggravation. They emphasized the importance of living a structured life with regular meals, sleep and medication, as well as avoiding smoking, excessive alcohol and drugs. Participants also talked about strategies to shield and distract themselves. Common shielding strategies were to be quiet, to be alone, to relax, and to avoid an excess of impressions, such as too many people or TV programs. Common distraction strategies were to watch TV, play computer games, read books, exercise and work, all of which could help curb anxiety, voices and visions. The participants also described that being with people or animals could both act as distraction, and provide meaningful relationships and experiences. These interactions could help to prevent and diminish symptoms. Overall, the participants described that a balance between activity and rest was necessary to cope with the mental illness. P: I have discovered that the more I talk to other people... the more stable I am. It is a sort of self-treatment or self-medication. 
To actively seek contact with other people every day. Some days, when I can't get in touch with anybody I walk down to the city and look at people. I do that because I don't want to be alone. That's a bad idea. Other days I have a strong need to be alone. Then I withdraw, turn off the phone and don't meet a single person during the whole day. I sit all day and watch TV-series or DVDs or play Playstation games or read a book. And these days come and go. There is no pattern – some days are like this and some days are like that.[...] I: So books help you?P: Yes, it is good to have something where things happen. [...] When I read I disappear a little. I disappear a little from this world. I just dream myself into the book. It is very relaxing. And it is a good way to ignore the things I don't want to think about... things that are stressful to think about.(Male, Control group) Although all participants identified coping strategies, there were some differences between the ways patients with and without a contract for self-referral described them. Patients with a contract voiced stronger self-confidence in applying coping strategies, both to prevent symptoms and to handle symptom aggravation. These patients also expressed greater confidence in keeping a good balance between activity and rest. Their descriptions appeared to include more experiences in applying and mastering various coping strategies. While patients without a contract described fairly common strategies to cope and prevent symptoms, patients with a contract appeared to have stronger and more independent creativity when describing how they prevented and coped with poorer phases. Some of the patients with a contract also described new and different ways to cope with their illness. Some of these strategies contradicted the advice usually given by professionals, such as preferring to be alone instead of having an extended social life. One patient, for example, expressed a deep satisfaction with having a certain distance from other people: P: I like to eat alone. I do. I have discovered that I like to do some things on my own. [...]I have two friends at present and we have a good distance... a good distance. We text each other and meet maybe once a week, and I really appreciate that.I: The friendship?P: Yes, the distance when we meet so rarely. We are in touch on the phone... to state that we are fine. [...] I read somewhere that friendship is what you think about the other person [...] having a presence even though they are not present.(Male, Intervention group)\n\nBODY.RESULTS.COGNITIVE STRATEGIES TO INFLUENCE THINKING PATTERNS:\nAll participants reported that they experienced anxiety, mood swings and negative thoughts. Almost all participants also talked about disturbing voices and visions, describing cognitive techniques to cope with these symptoms. In addition, all patients relayed that their thinking was associated with control over their situation. Despite these commonalities, comparison showed that the participants used these techniques differently. Patients without a contract talked mostly about cognitive strategies as a means of adapting to the illness and helping them to do their best. They used words such as \"observing\", \"understanding\", and \"recognizing\" symptoms and emphasized accepting the illness. Their main focus centered on being conscious about their thoughts and views and on achieving more knowledge about the nature of the symptoms in order to decrease anxiety when those symptoms rose. 
One participant described cognitive strategies such as observing and recording the physical surroundings during strong anxiety and claustrophobia: P: I am supposed to observe and notice details in rooms and such... when these things happen. I don't know if it has helped so much. It has become worse and worse. And the feeling is very strong when I am in the midst of it. I almost have to vomit. It affects me physically.(Female, Control group) Meanwhile, patients with a contract talked less about recognizing, understanding and accepting. Instead, they used words such as \"power\", \"control\" and \"strength\". They also displayed stronger ability to influence their thinking patterns through the use of cognitive strategies. Furthermore, these patients talked more about being active and working with themselves in order to grow, to break barriers and to take opportunities. Some of them questioned if the voices and views were true, or whether there were other ways to interpret and understand them: P: [...] I used to think that everybody knew more than me about certain things. I don't anymore. [...] It's just confusing... and that's a difficult state. You become unsure of yourself and that's heavy. It's not that I know so much, but what do other people know? [...] I read for instance in a book about what happens after death. And I said to myself: How can he [the author] know more about that than me? Has he been there - or is he sitting on this side and imagining what happens afterwards? These simple things... I use tactics like these. Then it is not so bad. [...]It still annoys me when someone comes and tells me I can never get rid of it [the illness]. It will probably never be completely over, but... [...] There are limits to what I don't know [laughs]. It is important to believe in the experiences you have. Others might have similar or opposite experiences, but they are not necessarily stronger or more correct.(Male, Intervention group) The results also indicated that patients with a contract had more experience and more control over the cognitive techniques, as well as a deeper understanding of the relationship between thoughts and feelings. They seemed to be more focused on the possibility of choosing cognitive strategies to influence their thinking. While patients without a contract described some efforts to manage being in a painful situation, patients with a contract presented a wider variety of techniques to influence their mental state. This enabled them to change their ways of thinking and thus influenced their overall situation. P: Nowadays, I read a lot of self-help books. I used to be very skeptical... they had very simple solutions, but now I find them developing.I: What do you get from them?P: I guess they speak to my own problems. You work with yourself while you read. For example to shut out the past... that is described in all self-help books. To have bulkheads forwards and backwards... To think less about worries for the future and self-reproach about the past. That resonates with me... and then the brain works with it while I read.(Male, Intervention group)\n\nBODY.RESULTS.RESIGNATION AND POWERLESSNESS:\nAll participants talked about the experience of being ill. Most of them described experiences of psychosis, explaining how emotions were experienced as bodily phenomena and how the symptoms were placed in their lives. Most of the participants also talked about negative and painful experiences, both with previous and current illness and with previous and current treatment. 
There seemed, however, to be a difference in the way that participants in the two groups talked about these experiences. Patients in the control group expressed more resignation and passivity, giving the impression that they were stuck in these experiences. They used phrases such as \"yielding\", \"enduring\", \"resigning\", \"giving in\" and \"getting used to\". Their descriptions included being powerless when facing their lives and problems, a stronger need for care and being more dependent on health services. These patients gave several examples of experiencing hopelessness and powerlessness. Although there were examples of interviews where feelings of helplessness and resignation were less prominent, the following examples represent very common descriptions for the control group participants: P: I have accepted the situation I am in. I think I have resigned a little. I have withdrawn from the things that I will never experience.I: Like what?P: Well... [sighs]... my life situation, for instance. I have for instance not had a partner in years... and it is quite stressful because you get very lonely. And after some time you come to a place where you realize that the hopelessness has come to stay.(Male, Control group) Patients who had a contract for self-referral, on the other hand, discussed courage and strength and talked about conscious choices and opinions. Their descriptions reflected being in a better place in their lives, and they seemed more active and less dependent on others. These patients expressed that they had influence over their overall situation. P: Well... how do you move on? It is to work... work with yourself. To locate the problems. Some would say to accept the problems, but that sounds a bit defensive to me. Maybe you get there... when you work with your problems they become more distant. You have to learn to like yourself until you get so sure that you are not put off by any little comment... like I used to be. It was... I had a word that emerged in my head during the last hospitalization: \"Show off\". That I was a show off. Like I knew it all, and wanted to have a piece of everyone else. But then I asked a friend of mine who was admitted during the same period: Do you think I am a show off? And she laughed a little. That is a way to get your problems more distant. Is it really like this? Is it possible? To be confident... to be sure... that you throw it around instead.(Male, Intervention group) There were also a few examples of patients with a contract who seemed to have achieved some distance from their previous bad experiences, as if they had success in moving on with their lives. P: And time heals all wounds. That's how it is. That's hard to say when you are ill, but when you are recovering you see that it is a one way street. Even though you might be temporarily dumped or feel bad and upset.I: Like a crisis?P: Yes, the crisis lasts around two hours. It's like that. I don't know how other people feel, but when I am ill I feel it could last forever. I am scared that it will be a permanent death row, but it doesn't. It takes shorter and shorter time before I am well again. And that is reassuring.[...] The problems I have aren't smaller or steeper, only shorter. It feels like I can put them behind me. Previously I could have days and weeks with the same problems and feel thrown around like a ball. 
Now it takes three or four hours.(Male, Intervention group)\n\nBODY.RESULTS.BEING NORMAL AND LIVING A NORMAL LIFE:\nParticipants in both groups talked about the importance of having a normal life, doing normal things, and feeling like everyone else. A common theme was being identified with the illness, and many emphasized that being ill was not their sole identity. The patients reflected on their own status as persons with mental illness, expressing a desire for an identity beyond the role of a patient. P: I have a diagnosis, and that makes me ill – in medical terms. But it is a difference between that and perceiving yourself as an ill person. Cause I am only a guy. I'm just me.[...] I would like to build a different fundament to base the rest of my life on. To build my whole life on the foundation that I am an ill person in a treatment process... that I have an illness that defines me... I feel that it will be a very unstable life. It will fall apart. So I have decided to find the most stable foundation possible... that's what I'm trying to do.(Male, Control group) An important difference between the patient groups was that patients with a contract for self-referral had several descriptions which seemed much closer to the ideal of living a normal life and being a normal person. They used active phrases when describing their everyday life, such as \"I know\", \"I will\", and \"I can\". They also acknowledged \"the good life\" and seemed to experience more closeness to a normal everyday life. Patients who had a contract talked about the normal life with more confidence and seemed more independent of social expectations and acceptable values. I: What is the point with having an ambulatory supervisor?P: You press an extra button and get feedback on things that might... that you do well or that people notice... that there are some positive things too. They emphasize it to strengthen your self-confidence. The self-confidence isn't very strong. You get set back from being admitted. I was away for nine months. I was admitted for nine months. Afterwards you are handicapped. While I was admitted I received my meals every day... some places they helped me get up in the morning. I was helped and supported all the time. And that was difficult to give up. But it is very good to have control over your own life. Cooking and having a nice time... to light candles and... to enjoy living.(Female, Intervention group) As part of a normal life, most participants wanted to study or work, and work was associated with identity and a good life. Many of the patients who received treatment as usual highlighted studies and education as a goal which might be difficult to reach, and some of them expressed a great deal of uncertainty regarding the possibility of coping with work on an everyday basis. P: I had my job until I realized that it was enough and then I quit.I: You quit?P: Yes. My contract expired... and at the same time I told them I had to quit.I: How was that for you?P: I stayed on the sofa for a week doing nothing. And then I woke up again. It felt like... like I recharged my strength.(Female, Control group) In contrast, many of the patients who had a contract for self-referral seemed to have progressed beyond having education or work as a distant goal. Several had taken the initiative to commence studies and had concrete plans for completion, and they did not mention having to interrupt the progress of their studies due to illness as a problem. 
They focused instead on the courage and strength necessary to continue and complete. P: I found out that the deadline for applications was expired, but I called today and they told me it was fine after all. I have a week to make an application for a [...] degree to finish the last two years... [...] If I can manage to work with art in some way, or with doing odd jobs. It is very exciting and I look forward to writing things this week and to look through my previous work.(Female, Intervention group)\n\nBODY.RESULTS.EXPERIENCES WITH A CONTRACT FOR SELF-REFERRAL OR TREATMENT AS USUAL:\nAlthough the patients were not asked directly about their experiences with the intervention (contract for self-referral or treatment as usual) some of the patients raised the subject on their own accord. Patients who had a contract for self-referral expressed a great deal of authority and strength when talking about the contract, their contact with the health services and what they needed in the future. Some said that the contract gave them more influence and that this gave them a sense of control. Some also described that they felt safe since they had the option to self-refer at any time: P: I believe in the project with self-referral. It is a pillar... where you know that if something happens which is really bad... You know that you can come here and get a break. It feels safe.(Female, Intervention group) Meanwhile, a few of the patients who did not have a contract for self-referral were very dissatisfied: P: I am very disappointed to be in the control-group [receiving treatment as usual]. I am done with the ordinary mental health services. I just want the self-referral inpatient contract.(Male, Control group)\n\nBODY.DISCUSSION.STRENGTHS AND LIMITATIONS:\nTwo researchers with extensive user experience in mental health services participated in different parts of this study. One participated in the data collection and took part in the initial analysis, while the other (DB) participated in the analysis process and in writing the manuscript. User participation during the research process strengthens the authenticity of the results and helps ensure that the users' voices are heard. Since this study was nested within a randomised controlled trial the interviews had to avoid direct questions about the intervention and focus instead on the participants' general perceptions of their current situation. This made the data material rich and focused on everyday life, which strengthens the quality of the findings. Although the interviewers were advised to avoid direct questions about the intervention, several of the participants talked about having or not having a contract. Some participants who had been randomised to treatment as usual were disappointed, and the interviewers had to respond to this. This feeling of disappointment may have led to negative expressions from those participants. Effort was made to not lead the participants to overstate the positive experiences with the intervention so as to please the interviewers, but the effectiveness of this effort is unclear. Although the sample is fair-sized for a qualitative interview study, the results might not be valid for other samples in other areas of health services. The interviews were mostly conducted by researchers who have worked as psychiatric nurses at the hospital. The nurses were chosen as interviewers because they were very competent health professionals and familiar to the patients. 
They were thus able to ensure that patients were taken good care of if they had adverse reactions during or after the interview. In addition, some participants stated that they would not have participated in interviews with unfamiliar interviewers. However, this familiarity might also have led to more favourable evaluations of the hospital's services and the intervention than if the interviews had been conducted by independent researchers. Moreover, since the participants in the two different groups were treated in the same health service organisation, they had the possibility to interact and discuss their experiences. Making comparisons based on qualitative interview data is a challenging undertaking. Since the aim of the study emerged during an initial reading of the transcripts, across group allocation, the authors were not \"blind\" before the more \"comparative\" analysis. We consider it a strength to pursue the initial surprising findings in the data material. However, at that point we should have included new and \"blind\" authors in the following analysis, as that would have strengthened the analysis process and the results. We analysed the interviews with patients in the control group before comparing the main themes from the control patients with the interviews with those of the intervention group. The differences between the groups were cross-checked with the data material and discussed in the analysis group to reduce the risk of overstating the differences. The differences presented in the results section are based on general impressions when reading and analysing the interview transcripts and must be interpreted with caution. Considering that several specific words and phrases were more present in either of the two groups, we could have used a computer-based system to compare the occurrences, which could have in turn supported the findings. Not all participants in the RCT participated in interviews after four months. Despite the initial randomisation, those agreeing to participate in interviews were self-selected. Direct comparison is therefore difficult. In addition, it is not clear whether the differences were due to the intervention (self-referral to treatment) or to other factors. A comparison of the groups on demographic variables (Table \n1) also shows that there are some differences between the groups: the participants in the control group were younger, had a higher rate of psychosis diagnosis, and had a higher proportion of males. The differences between the groups in terms of age, diagnosis and gender may have influenced the results of this study.\n\nBODY.DISCUSSION.DISCUSSION:\nThe results showed that many of the patients with a contract for self-referral expressed more confidence in strategies to cope with mental illness, and they seemed to apply more active cognitive strategies. Patients with a contract also expressed less resignation, hopelessness and powerlessness than patients without a contract, appearing to be closer to the ideal of living a \"normal\" life and being a \"normal\" person. Coping strategies are important in the management of mental illnesses\n[15, 16]. Such strategies are used to cope with life stressors, to ameliorate symptoms and to prevent periods of symptom aggravation. Most persons who suffer from psychosis apply one or several strategies to cope with mental symptoms\n[17]. 
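The Strengths and Limitations passage above remarks that specific words and phrases occurred more often in one arm than in the other, and that a computer-based comparison of those occurrences could have supported the qualitative impressions. A minimal sketch of such a count comparison, assuming the de-identified transcripts were available as plain-text files grouped by arm (the folder names and the word list below are purely illustrative and not part of the study), might look like this:

```python
import re
from collections import Counter
from pathlib import Path

# Words the analysis group reported as differing between arms; this list is
# illustrative, assembled from phrases quoted in the Results section.
TARGET_WORDS = ["power", "control", "strength", "resigning", "enduring", "hopelessness"]

def word_counts(folder: str) -> Counter:
    """Count lower-cased word tokens across all .txt transcripts in a folder."""
    counts = Counter()
    for path in Path(folder).glob("*.txt"):
        tokens = re.findall(r"[a-z']+", path.read_text(encoding="utf-8").lower())
        counts.update(tokens)
    return counts

def compare_groups(intervention_dir: str, control_dir: str) -> None:
    """Print per-1,000-token rates of the target words for each arm."""
    for label, folder in (("contract", intervention_dir), ("treatment as usual", control_dir)):
        counts = word_counts(folder)
        total = sum(counts.values()) or 1
        rates = {w: round(1000 * counts[w] / total, 2) for w in TARGET_WORDS}
        print(f"{label}: {rates}")

# Hypothetical usage; the paths are placeholders, not files from the study:
# compare_groups("transcripts/intervention", "transcripts/control")
```

Normalising to a per-1,000-token rate keeps such a comparison from being dominated by differences in interview length between the two groups.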
Although there is little knowledge about the usefulness of specific coping strategies, research shows that having several strategies at hand is more effective than relying on just one, although there is not one specific coping strategy which is more effective than the others\n[17]. In the present study, the patients who had a contract for self-referral described coping strategies which appeared more advanced, especially the cognitive strategies they used to influence their thinking patterns. Strategies to manage mental health symptoms are often taught by professionals during treatment\n[18]. In addition to these, most persons suffering from mental health illnesses develop personal coping strategies\n[18]. In the present study, all patients described coping strategies that they had learned, and some of the patients described self-developed strategies. Compared to patients who received treatment as usual, patients with a contract for self-referral described new and different coping mechanisms. Previous evaluations of self-referral have shown that although the total frequency of hospitalizations increased, the number of hospitalization days in fact decreased by more than 30%. Furthermore, the patients reported that a contract for self-referral lessened the burden of having to \"convince\" professionals that hospitalization was necessary and to be \"sick enough\" to need inpatient treatment\n[11, 13]. In other evaluations, patients have also expressed that the opportunity to self-refer gave them safety and confidence to make new efforts and to challenge old behavioral patterns\n[12]. This might help account for the differences in coping strategies we found in the present study. Using coping strategies is in line with the recovery model. According to this model, a person with long-term mental illness takes an increasingly active role in managing the impact of the mental illness as they move towards recovery\n[19]. Coping strategies can be divided into reactive, anticipatory, preventive and proactive strategies\n[16], and can be placed on a recovery continuum, where proactive strategies are used by persons who have come farthest in the recovery process. One of the findings in the present study was that patients with a contract for self-referral described more confidence in applying coping strategies than patients without a contract. This indicates that the patients with a contract had more experience in successfully using coping strategies and thus had come further in the recovery process. They talked more about being active, growing, breaking barriers and taking advantage of opportunities. This finding is in line with previous research, where empowerment has been described as the most important factor for recovery\n[18]. The patients who had a contract for self-referral also described some coping strategies which sometimes ran counter to the advice typically given by professionals. This indicates that patients who have had the opportunity to self-refer to treatment felt more in control and more able to contradict or neglect advice from the mental health services. This might be due to the opportunity to be in charge of decisions regarding treatment which could increase autonomy and the feeling of being in control. In the present study, the participants contrasted the identity of being an ill person with that of being a normal person. The term \"illness identity\" has been described as a set of roles and attitudes that persons with a mental illness have developed\n[2]. 
The participants in the present study related the perception of being a person with an illness to the possibility of being a normal person capable of leading a normal life. Perceiving oneself as an ill person is closely linked to feelings of stigmatization, which is connected to low self-esteem and low self-efficacy\n[20, 21]. The perception of an illness identity has been described as an important part of the recovery process\n[2]. Some have linked illness identity to hope and self-esteem, which is in turn linked to coping, engagement, and social interaction\n[2]. Coping, engagement and social interaction subsequently have an impact on vocational outcomes and symptom severity. Authors have argued that changing illness identity is an important step in improving mental health illness and functioning\n[2]. The results from the present study indicate that patients with a contract for self-referral might have a more clarified relationship towards their illness identity than patients without a contract, therefore suggesting that the former group may have come further in the recovery process. That said, it is not self-evident that having a contract for self-referral to treatment necessarily impacts illness identity, especially since the present study is based on interviews only four months after signing a contract for self-referral. It would be important to further investigate whether an opportunity to self-refer leads to increased autonomy, sense of control and decision-making power, all of which might subsequently impact the perception of illness identity. Another important finding in the present study was that the patients who had a contract for self-referral tended to express less powerlessness, resignation and hopelessness than those who received treatment as usual. The patients who received treatment as usual referred to their problems as something that they were stuck with, and they described themselves as being dependent on care from the health services. This difference in focus might be related to the finding that patients in the intervention group had stronger confidence in using coping strategies. Some have argued that stronger hope and self-esteem might lead to less avoidance, more problem-oriented strategies and more social interaction\n[2]. Previous research has shown that persons with more hope and insight in their illness use more problem-oriented and less avoidant coping strategies\n[22] and that poor confidence is connected to using avoidant strategies\n[23]. These findings are in line with the present study, which also illustrates Bandura's idea of self-efficacy\n[24]. Self-efficacy is explained as a personal capability and distinguishes one's belief in capability from self-esteem and whether you like yourself\n[25]. In the present study, we found that patients with a contract for self-referral talked more about living a normal life with work and studies. They seemed to have a closer relationship to what they called a \"normal life\" and a somewhat more active attitude towards coping, thereby possibly exhibiting a greater sense of control. Davidson and Roe argue that recovery has to be pursued actively by the patient, and that not everybody does this\n[3]. The findings in the present study indicate that patients who have had the opportunity to decide whether and when they needed referral to inpatient treatment are more actively pursuing recovery, both from and in mental illness. 
In this context, signing a contract for self-referral seems to be a rehabilitation approach that encourages and supports activity, self-determination, control, and participation.\n\nBODY.CONCLUSION:\nPatients with a contract for self-referral to treatment appeared to have a wider range of coping mechanisms, more self-developed and advanced strategies, and stronger confidence in applying the strategies. The findings indicate that having the opportunity to self-refer might help strengthen autonomy, sense of control, and self-determination. According to research on coping strategies, these patients could be better off both during treatment and in the recovery process. In a similar vein, patients with a contract voiced less resignation, hopelessness and powerlessness and appeared to be closer to experiencing a \"normal\" life.\n\nBODY.ELECTRONIC SUPPLEMENTARY MATERIAL:\nAdditional file 1:\nInterview guide - Self-referral inpatient treatment.\n(DOCX 29 KB)\n\n**Question:** Compared to Inpatient treatment, what was the result of Self-referral treatment on Patients striving for recovery?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
1163
|
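The row above follows the dump's column layout: prompt messages, completion messages, label, and row id, separated by pipe lines. As a minimal sketch of how a completion string such as "**Answer:** significantly increased" could be checked against the label column, assuming nothing beyond the two strings themselves (the helper names and the accepted label variants below are inferred from this row, not from a documented schema):

```python
import re

# The prompt template lists the options in the imperative form ("significantly
# increase"), while the completion and label columns in this row use the past
# tense ("significantly increased"); the mapping below accepts both spellings.
CANONICAL = {
    "significantly increase": "significantly increased",
    "significantly increased": "significantly increased",
    "no significant difference": "no significant difference",
    "significantly decrease": "significantly decreased",
    "significantly decreased": "significantly decreased",
}

def normalize_answer(text: str):
    """Strip a leading '**Answer:**' marker and map the rest onto a canonical label."""
    cleaned = re.sub(r"\*\*answer:\*\*", "", text.strip().lower()).strip()
    return CANONICAL.get(cleaned)

def is_correct(completion_text: str, label: str) -> bool:
    """True when the completion and the label column resolve to the same canonical label."""
    return normalize_answer(completion_text) == CANONICAL.get(label.strip().lower())

# With the values from the row above:
# is_correct("**Answer:** significantly increased\n", "significantly increased")  # True
```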
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Pioglitazone Decreases Hepatitis C Viral Load in Overweight, Treatment Naïve, Genotype 4 Infected-Patients: A Pilot Study\n\n ABSTRACT.BACKGROUND:\nInsulin resistance (IR) is induced by chronic hepatitis C virus (HCV) genotypes 1 and 4 infections. It is not known whether drugs that affect IR such as Pioglitazone and Prednisone also affect serum HCV RNA titers independently of PEG-Interferon-α2/ribavirin treatment. The primary aim was to assess whether Pioglitazone by improving IR and/or inflammation decreases HCV viral load independently of standard of care HCV treatment. A secondary aim was to assess whether Prednisone, a drug that induces insulin resistance and stimulates HCV viral entry and replication in replicon culture systems, increases HCV viral load in this population.\n\nABSTRACT.METHODOLOGY/PRINCIPAL FINDINGS:\nWe designed a two-arm, parallel Pilot Study of overweight, treatment naïve genotype 4 HCV-infected patients at a public referral Liver Clinic in Giza, Egypt. The subjects received Pioglitazone (30 mg/day for 14 days) or Prednisone (40 mg/day for 4 days) in a randomized fashion, but the two arms can be considered independent pilot studies. Only changes from baseline within each arm were assessed and no contrasts of the interventions were made, as this was not an aim of the study. Among 105 consecutive HCV genotype 4 patients, 39 were enrolled based on the optimal sample size and power analysis according to the CONSORT statement; 20 to the Pioglitazone group and 19 to the Prednisone group. Pioglitazone was effective in decreasing serum HCV RNA at day-14 (n = 10; difference of means = 205,618 IU/ml; 95% CI 26,600 to 384,600; P<0.001). Although Prednisone did increase serum HCV RNA at day-4 (n = 10; change from baseline = −42,786 IU/ml; 95% CI −85,500 to −15,700; P = 0.049), the log10 HCV RNA titers were statistically not different from baseline day-0.\n\nABSTRACT.CONCLUSION/SIGNIFICANCE:\nThis is the first documentation that Pioglitazone decreases the serum HCV RNA titers independently of PEG-Interferon-α2/ribavirin treatment. The novel findings of our Study provide the foundation for basic and clinical investigations on the molecular mechanisms responsible for the Pioglitazone-induced decrease in HCV genotype 4 RNA titers.\n\nABSTRACT.TRIAL REGISTRATION:\nClinicalTrials.gov NCT01157975\n\n\nBODY.INTRODUCTION:\nChronic hepatitis C viral (HCV) infection markedly increased the risk for type 2 diabetes in cohorts with recognized diabetes risk factors [1]. A meta-analysis also found that HCV infection is associated with excess type 2 diabetes risk [2]. 
Moreover, IR is a specific feature of chronic HCV genotypes 1 and 4 infections since its prevalence is ∼7-fold higher among these patients compared to chronic Hepatitis B-infected patients [3], and changes in IR occur among patients with chronic HCV infection during and after treatment of HCV with PEG-interferon-α and ribavirin [4]. In patients with genotypes 1, 2 and 3 increased IR was associated with decreased sustained virological response (SVR) [5]. Among patients infected with genotype 1, decreases in Homeostasis Model Assessment (HOMA)-IR correlated with obtaining an SVR and increases in HOMA-IR were observed in patients who relapsed [6]. Similarly, IR may correlate with the virological response to standard of care treatment of HCV genotype 4 patients in Egypt [7]. Because Pioglitazone, a peroxisome proliferator-activated receptor (PPAR)-γ agonist, improves insulin sensitivity [8] and it is also effective in controlling liver inflammation in subjects with non-alcoholic steatohepatitis (NASH) [9], it has been used as triple therapy in addition to PEG-interferon-α and ribavirin to treat HCV-infected patients. The study in HCV genotype 1 non-responders was terminated when none of the five patients responded to the triple therapy at week 12 [10]. In contrast, Pioglitazone improved virological response to PEG-interferon α-2b/ribavirin combination therapy in hepatitis C genotype 4 treatment naïve patients with insulin resistance [11]. However, the mechanisms by which Pioglitazone influenced SVR in treatment naïve patients with HCV genotype 4 infection remain unknown. It is unknown whether Pioglitazone modifies the bioavailability of PEG-interferon α-2b/ribavirin or directly or indirectly affects HCV infection and replication. Therefore, we designed a Pilot Study to assess whether Pioglitazone decreases HCV viral load in overweight HCV genotype 4, treatment naïve patients independently of standard of care PEG-interferon α-2b/ribavirin treatment. A secondary end-point was to analyze whether Prednisone, a drug that induces insulin resistance in susceptible patients [12] and stimulates HCV viral entry and replication in replicon culture systems [13], increases HCV viral load in this population. There were no data available on the effect of Pioglitazone or Prednisone on HCV RNA viral load in patients. Here, we provide the first documentation that Pioglitazone decreases the serum HCV genotype 4 RNA titers independently of treatment with PEG-Interferon-α2/ribavirin. Prednisone did increase serum HCV RNA viral load at day-4 in this cohort.\n\nBODY.METHODS:\nThe protocol for this trial and supporting CONSORT checklist are available as supporting information; see Checklist S1 and Protocol S1.\n\nBODY.METHODS.PARTICIPANTS:\nEligible participants were all overweight (BMI>25 kg/m2), given their susceptibility for IR, adults age 25 to 65 years, with active HCV genotype 4 chronic infection (>10,000 IU/ml viral load) who had a liver biopsy within two years of recruitment (added to the protocol), were treatment naïve (never treated for HCV infection) and met the eligibility criteria for the Study. Exclusion criteria included cirrhosis, the presence of other liver diseases, diabetes, pregnancy, significant clinical co-morbidities, and the inability to provide informed consent. Participants were recruited from July 2010 to February 2011 at the primary care referral of subjects of a disadvantaged social-economical group at the Agouza Hospital in Giza, Egypt. 
The estimated prevalence of Hepatitis C in Egypt is 12% in adults.\n\nBODY.METHODS.ETHICS:\nWritten informed consent was obtained by the medical staff before entry into the Study. The study was registered at the clinicaltrials.gov website (NCT-01157975) and received Human Subjects' Research approval at the University of California, San Diego (UCSD Human Research protections Program) and at the Agouza Hospital, Giza, Egypt (Director's Research Board) before starting the study. The study protocol conforms to the ethical guidelines of the 1975 Declaration of Helsinki.\n\nBODY.METHODS.OBJECTIVES:\nBecause the objective of the Study was to determine whether Pioglitazone and Prednisone treatment would affect the HCV RNA viral load (decrease and increase it, respectively), and these were not a treatment of the HCV infection in this situation, we decided not to use a placebo since we utilized de-identified samples to blinding the investigators analyzing the outcomes.\n\nBODY.METHODS.INTERVENTIONS:\nEligible subjects were randomly assigned to receive orally Pioglitazone 30 mg/day for 14 days or Prednisone 40 mg/day for 4 days. These medications were taken in the morning. Visits were at screening (up to 30 days before randomization); randomization (day -2); baseline visit (day 0); and follow-up visits for the Pioglitazone group (on days 2, 4, 7 and 14) and the Prednisone group (on days 2 and 4). For safety issues subjects were continued to be seen off protocol at day-7 post-treatment. Blood samples were obtained at each visit for HCV RNA viral load and for inflammatory, adipogenic, and insulin resistance markers. In addition, a 2-hr oral 75 g glucose tolerance test (OGTT) was performed after a 10-hr fast at day-0 before administration of the medication and at the last day of treatment (day-14 for Pioglitazone and day-4 for Prednisone). On all other visits, patients were also fasting. Patients received the first dose of the study drug (oral Pioglitazone 30 mg or Prednisone 40 mg) after blood samples were obtained on day 0. Compliance with the medication was >90% as assessed at each visit.\n\nBODY.METHODS.OUTCOMES:\nThe primary outcome with respect to the effect of Pioglitazone in HCV genotype 4 infections was the decrease in HCV RNA viral load at day 14. The secondary outcome with respect to the effect of Prednisone in HCV genotype 4 infections was the increase in HCV RNA viral load at day 4 instead of at day 14 for safety reasons given the potential adverse events associated with longer corticosteroid treatment. Additional secondary outcome measures were insulin resistance (oral glucose tolerance test; serum insulin, C-peptide; HOMA-IR; insulin sensitivity check index (QUICKI); HOMA-B; sex hormone BG (SHBG); resistin; adiponectin) and liver injury (alanine aminotransferase (ALT); aspartate aminotransferase (AST)) and inflammation (serum interleukin (IL)-8, IL-28B, tumor necrosis factor (TNF), IL-1α, IL-1β, IL-6, IL-8, IL-10, IFN-γ, monocyte chemokine protein (MCP) and hepatocyte growth factor (HGF)). De-identified blood samples from all the enrolled subjects that received the treatments were analyzed for the standard clinical laboratory tests and for HCV RNA viral load using PCR amplification (COBAS, TACKMAN) (lowest detectable limit: 50 IU/ml). A multiplex peptide detection system (Quansys Biosciences, Logan, UT) was utilized according to the manufacturer's protocol to determine inflammatory, adipogenic, and insulin resistance markers. 
Values calculated from individual pixels using the Q-View Imager system. An enzyme-linked immunosorbent assay kit was used to determine human adiponectin according to the manufacturer's protocol (BioVision, Mountain View, CA). Surrogate indicators of insulin resistance and β-cell function were analyzed as described previously [14]. The indicators of insulin resistance HOMA1-IR and QUICKI were determined. HOMA1-IR = (FSI×FSG)/22.5 where FSI is fasting serum insulin concentration (μU/l) and FSG is fasting serum glucose (mmol/l). QUICKI = 1/[log (fasting insulin-μU/ml)+log (fasting glucose- mg/dl)]. β-Cell function was assessed by the HOMA –B 20.I0/(G0 -3.5); I0 = fasting insulin in μU/ml and G0 = fasting glucose in mmol/l [14].\n\nBODY.METHODS.SAMPLE SIZE:\nThere were no data available on the effect of Pioglitazone or Prednisone on HCV RNA viral load. However, we considered a reduction of 25% in HCV RNA viral load in 14 days to be relevant as proof of principle of the effects of Pioglitazone on HCV RNA titers. To detect a reduction of 25% in HCV RNA in the Pioglitazone group at day-14 assuming a standard deviation of 20% with a two-sided 5% significance level and a power of 90%, a sample size of 9 patients per group completing the study was necessary, and it was decided to enroll at least 19 subjects per group given an anticipated dropout rate of 50% mainly prior to dosing. Because the cultural environment (unwillingness to tell the research staff that they cannot participate in the study) and the social conditions (such as severe problems with transportation and employer's authorization) were expected to prevent a significant proportion of patients to attend the Clinic for follow-up visits after randomization, a larger number of eligible subjects were randomized to achieve the desirable number of subjects completing the Study. To recruit this number of patients a 12-month inclusion period was anticipated. No interim analysis was performed during the trial. An intention-to-treat analysis was conducted in patients who had received ≥1 dose of study medication (10 patients in each group was optimal based on sample size and power analysis).\n\nBODY.METHODS.RANDOMIZATION—SEQUENCE GENERATION:\nParticipants were randomly assigned following simple randomization procedures (computerized random numbers) to one of two treatment groups. Independent pharmacists dispensed either Pioglitazone or Prednisone according to a computer generated randomization list.\n\nBODY.METHODS.RANDOMIZATION—ALLOCATION CONCEALMENT:\nThe decision to accept or reject a participant was made, and informed consent was obtained from the participant, in ignorance of the next assignment in the sequence.\n\nBODY.METHODS.RANDOMIZATION—IMPLEMENTATION:\nBlock randomization was by a computer generated random number list prepared by a pharmacist with no clinical involvement in the trial. After the research nurse had obtained the subject's consent, she contacted the pharmacist who was independent of the recruitment process for allocation consignment.\n\nBODY.METHODS.BLINDING:\nWhereas patients and physicians allocated to the intervention groups were aware of the allocated arm, outcome assessors and data analysts of primary and secondary end-points were kept blinded to the allocation.\n\nBODY.METHODS.STATISTICAL METHODS:\nAn intention-to-treat analysis was conducted in patients who had received ≥1 dose of study medication. The primary and secondary endpoints were assessed with a paired-sample t-test. 
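The surrogate-index formulas quoted in the Outcomes subsection above (HOMA1-IR = (FSI x FSG)/22.5, QUICKI = 1/[log10(fasting insulin) + log10(fasting glucose)], and HOMA-B = 20·I0/(G0 − 3.5), the last of which is garbled in the extracted text) can be computed directly from fasting glucose and insulin. A minimal sketch, assuming glucose is reported in mg/dL and insulin in µU/mL as in Table 4 (the conversion constant and function names are illustrative, not the paper's):

```python
import math

MG_DL_PER_MMOL_L = 18.0182  # conversion factor for glucose: mg/dL per mmol/L

def homa1_ir(glucose_mg_dl: float, insulin_uU_ml: float) -> float:
    """HOMA1-IR = (fasting insulin [uU/mL] x fasting glucose [mmol/L]) / 22.5."""
    return insulin_uU_ml * (glucose_mg_dl / MG_DL_PER_MMOL_L) / 22.5

def quicki(glucose_mg_dl: float, insulin_uU_ml: float) -> float:
    """QUICKI = 1 / (log10(fasting insulin [uU/mL]) + log10(fasting glucose [mg/dL]))."""
    return 1.0 / (math.log10(insulin_uU_ml) + math.log10(glucose_mg_dl))

def homa_b(glucose_mg_dl: float, insulin_uU_ml: float) -> float:
    """HOMA-B = 20 x fasting insulin [uU/mL] / (fasting glucose [mmol/L] - 3.5)."""
    return 20.0 * insulin_uU_ml / (glucose_mg_dl / MG_DL_PER_MMOL_L - 3.5)

# Plugging in the day-0 Pioglitazone-group means from Table 4
# (glucose 105 mg/dL, insulin 10.4 uU/mL) gives roughly HOMA1-IR 2.7,
# QUICKI 0.33 and HOMA-B 89; the published values (2.71, 0.339, 92.65)
# differ slightly because the table averages per-patient indices.
print(homa1_ir(105, 10.4), quicki(105, 10.4), homa_b(105, 10.4))
```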
We also explored correlations between IL-8 and TNF-α using Pearson's correlation coefficient with 95% confidence intervals. The significance level was fixed at α = 5% for all tests. All analyses were carried out using R version 2.12.2 (2011 Vienna, Austria; http://www.R-project.org).\n\nBODY.RESULTS.PARTICIPANT FLOW:\nThe optimal subjects' recruitment number was pre-determined by sample size and power analysis as described above. One hundred and five consecutive patients with chronic Hepatitis C genotype 4 referred to the Hepatology Clinic were assessed for eligibility (Fig. 1). Among these 105 patients, 66 were excluded because they did not meet the eligibility criteria (42 subjects), they refused to participate (8 subjects), or for other reasons (16 subjects). Therefore, 39 patients were randomized; 20 to the Pioglitazone group and 19 to the Prednisone group. Ten patients in each group received the allocated intervention, while 10 patients in the Pioglitazone group and 9 patients in the Prednisone group were unable to participate because of their inability to attend the Clinic visits. These patients did not receive the allocated intervention (Fig. 1). The randomized groups that received the allocated intervention were balanced with regard to age, gender, HCV genotype and titers, and the severity of their liver disease (Table 1).\n\nFigure 1. Participant Flow. Patients with chronic Hepatitis C genotype 4 referred to the Hepatology Clinic were assessed for eligibility as described in Methods.\n\nTable 1. Baseline demographic and clinical characteristics of treated subjects.\n| Variable | Pioglitazone (N = 10) | Prednisone (N = 10) | P value |\n| Age (years) | 44 (9.3) | 42 (10.4) | 0.592 |\n| Sex (male) | 7 (70%) | 8 (80%) | 0.429 |\n| Ethnic origin: Arab | 10 (100%) | 10 (100%) | 1.000 |\n| Height (cm) | 173 (5.8) | 175 (8.7) | 0.458 |\n| Weight (kg) | 88.3 (9.2) | 85.7 (9.9) | 0.550 |\n| BMI (kg/m2) | 29.5 (1.5) | 27.9 (1.6) | 0.027 |\n| HCV Genotype 4 | 10 (100%) | 10 (100%) | 1.000 |\n| HCV RNA (log10) | 5.41 (0.80) | 5.10 (0.92) | 0.432 |\n| Albumin (g/dl) | 3.8 (0.3) | 3.8 (0.3) | 0.939 |\n| Total bilirubin (mg/dl) | 0.97 (0.15) | 0.90 (0.16) | 0.319 |\n| ALT (IU/ml) | 63 (23) | 53 (14) | 0.283 |\n| AST (IU/ml) | 54 (17) | 52 (11) | 0.795 |\n| Metavir score, A (inflammation) | A1 (n:5), A2 (n:5) | A1 (n:5), A2 (n:5) | 1.000 |\n| Metavir score, F (fibrosis) | F1 (n:7), F2 (n:3) | F1 (n:6), F2 (n:4) | 0.639 |\nData are means (SD) or numbers (%).\n\nThe patients that did receive the allocated intervention were similar in demographics, ethnicity and degree of liver disease to those that did not receive the allocated intervention (Table 2). By chance allocation, the serum ALT was lower in the group that received Prednisone than in the group that did not receive the allocated Prednisone (53 vs 69 IU/ml; P = 0.012) (Tables 1 and 2). Table 2 Baseline demographic and clinical characteristics of non-treated subjects.
Pioglitazone (N = 10) \n P value \n Prednisone (N = 9) \n P value \n \n Age (years) 39 (4.2) 0.139 43(11.2) 0.842 \n Sex (male) 6 (60%) 0.660 5 (55%) 0.277 \n Ethnic origin: Arab \n 10 (100%) 1.000 9 (100%) 1.000 \n Height (cm) 172 (5.9) 0.706 177(8.1) 0.612 \n Weight (kg) 85.7 (7.6) 0.499 84.8(8.8) 0.837 \n BMI (kg/m 2 ) 28.8(2.1) 0.402 28.2(3.7) 0.818 \n HCV Genotype 4 \n 10 (100%) 1.000 9 (100%) 1.000 \n ALT (IU/ml) 68 (10) 0.536 69 (10) 0.012 \n AST (IU/ml) 63 (12) 0.188 57 (8) 0.278 \n Metavir score: \n \n A (inflammation) A1(n:4); A2(n:6) 0.673 A1(n:4); A2(n:5) 0.821 \n F (fibrosis) F1(n:5); F2(n:5) 0.388 F1(n:5); F2(n:4) 0.855 Data are means (SD) or numbers (%). P values are for comparisons between treated and untreated subjects within the same arm. No participant was excluded after randomization because they were found not to meet eligibility criteria. All the patients allocated to the interventions (10 in each arm) completed the study (\nFig. 1\n).\n\nBODY.RESULTS.BASELINE DEMOGRAPHIC AND CLINICAL CHARACTERISTICS:\nAn intention-to-treat analysis was conducted in patients who had received ≥1 dose of study medication; all of these patients had a subsequent efficacy observation (10 patients in each group). As outlined in \nTable 1\n, all patients were middle age Egyptian Arabs, predominantly males that had HCV genotype 4 with mild or moderate inflammation (A1 and A2, respectively) and mild or moderate fibrosis (F1 and F2, respectively) on their liver biopsy. They had normal albumin and total bilirubin and mild increase in ALT/AST. All subjects were overweight and five were obese (body mass index, BMI; the weight in kg divided by the square of the height in meters was >29 kg/m2: 3 in the Pioglitazone group and 2 in the Prednisone group). By chance allocation, the BMI was higher in the group receiving Pioglitazone than in the group receiving Prednisone (29.5 vs 27.9 kg/m2; P = 0.027) (\nTable 1\n). There was no indication that significant chance imbalances had occurred between the two groups. Further, no direct analysis between the two groups was intended for the primary or secondary end points. Rather, a comparison between baseline and post-treatment results within each group was the Pilot Study's intent.\n\nBODY.RESULTS.EFFECT OF PIOGLITAZONE ON SERUM HCV RNA TITERS:\nPioglitazone decreased serum HCV RNA titers in overweight HCV genotype 4, treatment naïve patients. In the Pioglitazone-treated group, the log10 transformed HCV RNA titers were statistically different from baseline day-0 (5.41+/−0.80) on day-14 (5.24+/−0.79; P<0.001) and day-7 (5.30+/−0.78; P = 0.009) (\nTable 3\n), providing evidence that Pioglitazone was effective in decreasing HCV RNA in this cohort of patients. Pioglitazone treatment decreased the HCV RNA from baseline in 9 out of 10 subjects at day-7. The log10 HCV RNA values were also decreased by day-4 but they were statistically not different from baseline (5.38+/−0.78; P = 0.059). The log10 HCV RNA values were not affected at day-2 by Pioglitazone treatment. Further, the decrease in the log10 HCV RNA between day-7 and day-14 was highly significant (mean change = 0.063; 95% CI, 0.045; P<0.001) (\nTable 3\n). 10.1371/journal.pone.0031516.t003 Table 3 Effects of Pioglitazone on serum HCV RNA titers. 
Day HCV (IU/ml ×10 −3 ) \n P value \n \n 95% CI \n HCV log 10 \n \n P value \n \n 95% CI \n \n 0 \n 608.2(696.4) 5.41(0.80) \n 2 \n 586.9(663.9) 0.178 −11.5, 53.9 5.41(0.78) 0.854 −0.029 \n 4 \n 531.5(553.6) 0.151 −33.6, 186.9 5.38(0.78) 0.077 −0.002 \n 7 \n 455.3(504.4) 0.044 4.8, 300.7 5.30(0.78) 0.001 −0.156 \n 14 \n 402.5(452.4) 0.029 26.6, 384.6 5.24(0.79) <0.001 −0.215 Values are mean (SD) of HCV RNA titers determined as described in Methods . In spite of the variable baseline HCV RNA viral load, Pioglitazone also decreased significantly the absolute HCV RNA levels at day-14 (mean change = 205,618 IU/ml; P = 0.028) and at day-7 (mean change = 152,772 IU/ml; P = 0.044) (\nTable 3\n).\n\nBODY.RESULTS.EFFECT OF PIOGLITAZONE ON INDICES OF INSULIN RESISTANCE:\nPioglitazone treatment decreased by day-14 fasting serum glucose (Glucose-0) (mean change = 8.2 mg/dL; P = 0.024) as well as the 2-hr glucose (Glucose-120 min) after an OGTT (mean change = 18.8 mg/dL; P = 0.001) (\nTable 4\n). 10.1371/journal.pone.0031516.t004 Table 4 Effects of Pioglitazone on glucose homeostasis, insulin resistance and secretion. Test Day-0 Day-14 \n P value \n \n 95% CI \n \n Glucose-0 (mg/dL) 105 (11.4) 96.8 (6.8) 0.024 1.3, 15.1 \n Glucose-120 (mg/dL) 120.6 (11.9) 101.8 (7.4) 0.0004 10.9, 26.7 \n HOMA-IR \n 2.71 (1.61) 2.38 (1.60) 0.326 −0.95 \n QUICK-I \n 0.339 (0.031) 0.348 (0.036) 0.587 \n …… \n \n HOMA-B \n 92.65 (57.5) 115.26 (90.85) 0.52 −99.6, 54.4 \n Insulin (µIU/ml) 10.4 (6.2) 10.1 (7.0) 0.914 −6.2, 6.8 \n C-peptide (nmol/L) 6.4 (9.4) 9.6 (8.5) 0.278 \n − 9.5, 3.1 \n SHBG (µg/ml) 3.8 (2.4) 1.7 (2.1) 0.062 −140 \n Adiponectin (µg/ml) 3.19 (1.32) 3.64 (1.71) 0.512 \n ……‥ \n \n Resistin (ng/ml) 2,0 (0.2) 1,7 (0.6) 0.116 −2.0 Values are mean (+/− SD). Tests were performed as described in Methods . However, Pioglitazone treatment did not affect by day-14 the following surrogate indicators of insulin resistance and secretion: HOMA-IR; QUICK-I; and HOMA-B. In addition, other surrogate indicators of insulin resistance were not affected by Pioglitazone treatment on day-14, including fasting serum insulin; C-peptide; SHBG; resistin; and adiponectin (\nTable 4\n).\n\nBODY.RESULTS.EFFECT OF PIOGLITAZONE ON INDICATORS OF LIVER INJURY AND INFLAMMATION:\nElevated serum ALT and AST values are surrogate indicators of liver injury used in patient care and clinical research [9]. Pioglitazone treatment decreased at day-14 serum ALT levels (mean change = 7.5 IU/ml; 95% CI: 2.1 to 12.9; P = 0.012) and to a smaller degree serum AST levels (mean change = 2.6 IU/ml; 95% CI: −0.6 to 5.8; P = 0.096) (\nTable 5\n). 10.1371/journal.pone.0031516.t005 Table 5 Effects of Pioglitazone on liver injury indicators. Day ALT (IU/L) \n P value \n \n 95% CI \n AST (IU/L) \n P value \n \n 95% CI \n \n 0 \n 62.7 (23.1) 53.6 (15.6) \n 14 \n 55.2(18.7) 0.0118 2.1, 12.9 51.0 (14.0) 0.0963 −0.6, 5.8 Values are mean (+/− SD). ALT and AST were determined as described in Methods . It has been suggested that serum IL-8 values are elevated in HCV infection and that these values may reflect the effects of the HCV on il-8 gene transcription [15]. Pioglitazone increased serum IL-8 levels at day-14 when compared to baseline day-0 values (mean change = −166.8 IU/ml; 95% CI: −300.1 to −33.5; P = 0.019) (\nTable 6\n). Given the possible induction of IL-8 expression by TNF-α [15], we analyzed post-hoc their correlation by a Pearson's correlation coefficient. 
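For orientation, a minimal sketch of how such a correlation with a 95% confidence interval can be obtained in R (the analysis software named in Methods) is shown below; the two vectors are simulated placeholders, not the study measurements.

```r
# Illustrative only: Pearson correlation with a 95% CI between pooled day-0 and
# day-14 serum IL-8 and TNF-alpha values (n = 20). Placeholder data, not study data.
set.seed(1)
il8  <- rlnorm(20, meanlog = 5.0, sdlog = 0.8)             # hypothetical IL-8 (pg/mL)
tnfa <- 0.5 * il8 + rlnorm(20, meanlog = 4.0, sdlog = 0.9) # hypothetical TNF-alpha (pg/mL)
ct <- cor.test(il8, tnfa, method = "pearson", conf.level = 0.95)
ct$estimate   # Pearson r
ct$conf.int   # 95% confidence interval
ct$p.value    # two-sided p-value
```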
We found that there was a relatively strong correlation between serum IL-8 and serum TNF-α levels for day-0 and day-14 (n = 20; r = 0.64; 95% CI = 0.28 to 0.85; P = 0.002). 10.1371/journal.pone.0031516.t006 Table 6 Effects of Pioglitazine on serum cytokines. Serum Cytokine Day-0 Day-14 \n P value \n \n 95% CI \n \n IL-8 (pg/ml) 181.0 (182.8) 347.8 (123.9) 0.020 −300, −33.5 \n IL-28B (pg/ml) 837.2 (1,068) 1,597 (955.5) 0.060 −1,564, 47 \n TNF-α (pg/ml) 92.9 (139.6) 275.5 (268.5) 0.056 −371.7, 6.6 \n IL-1α (pg/ml) 136.6 (324.4) 223.6 (452.1) 0.415 −317, 143 \n IL-1β (pg/ml) 627.4 (1,046) 724.0 (1,098) 0.748 −760, 566 \n IL-2 (pg/ml) 13.9 (11.1) 16.1 (10.1) 0.632 −12.3, 7.9 \n IL-4 (pg/ml) 10.9 (4.9) 10.4 (4.3) 0.817 −4.8, 5.9 \n IL-6 (pg/ml) 337.9 (454.8) 560.5 (482.1) 0.280 −661, 216 \n IL-10 (pg/ml) 32.4 (58.2) 27.6 (44.6) 0.831 −44.7, 54.3 \n IFN-γ (pg/ml) 5.7(4.2) 7.4 (4.6) 0.403 −5.9, 2.6 \n MCP-1 (pg/ml) 338.4(151.4) 330.6(157.9) 0.915 155, 170 \n HGF (pg/ml) 1008 (736) 975 (603) 0.914 −665, 732 Values are mean (+/− SD). Serum cytokines were determined as described in Methods . We found no statistically significant differences between day-0 and day-14 in serum IL-28B, TNF-α, IL-1α, IL-1β, IL-6, IL-8, IL-10, IFN-γ, MCP-1 and HGF (\nTable 6\n).\n\nBODY.RESULTS.EFFECT OF PREDNISONE ON SERUM HCV RNA:\nPrednisone increased the absolute HCV RNA level at day-2 (mean change = −35,636 IU/ml; 95% CI −64,665 to −6,607; P = 0.021) and at day-4 (mean change = −42,786 IU/ml; 95% CI −85,556 to −15.7; P = 0.049) (\nTable 7\n), providing some evidence that Prednisone may be effective in increasing HCV RNA in this cohort of patients for that short period of time. When compared to their baseline day-0 values, eight of 10 subjects had an increase in HCV RNA at day-4. Prednisone treatment also increased HCV RNA from baseline day-0 in 9 of 10 subjects at day-2 (\nTable 7\n). However, in the Prednisone-treated group, the HCV RNA titers expressed as log10 values on day-2 and day-4 were not statistically different from baseline day-0 (\nTable 7\n). 10.1371/journal.pone.0031516.t007 Table 7 Effects of Prednisone on serum HCV RNA titers. Day HCV (IU/ml ×10 −3 ) \n P value \n \n 95% CI \n HCV log 10 \n \n P value \n \n 95% CI \n \n 0 \n 498.0(805.3) 5.10(0.92) \n 2 \n 533.7(834.4) 0.0215 −64.6,−6.6 5.14(0.94) 0.059 0.08 \n 4 \n 540.8(855.2) 0.0499 −85.5,−15.7 5.13(0.92) 0.18 0.03 Values are mean (SD). HCV RNA titers were determined as described in Methods . \n\nBODY.RESULTS.EFFECT OF PREDNISONE ON INDICES OF INSULIN RESISTANCE:\nPrednisone treatment increased by day-4, when compared to baseline day-0 values, fasting serum glucose (Glucose-0) (mean change = 9.0 mg/dL; 95% CI: 20.4 to −2.4; P = 0.108) as well as the 2-hr (Glucose-120 min) after an OGTT (mean change = 11.0 mg/dL; 95% CI: 17.9 to 4.1; P = 0.006). Collectively, the CI and t-tests suggest that Prednisone increased serum glucose levels at day-4 (\nTable 8\n). However, Prednisone treatment did not affect by day-4 other surrogate indicators of insulin resistance and secretion (\nTable 8\n). 10.1371/journal.pone.0031516.t008 Table 8 Effects of Prednisone on glucose homeostasis, insulin resistance and secretion. 
Test Day-0 Day-4 \n P value \n \n 95% CI \n \n Glucose-0 (mg/dL) 88.0 (14.7) 97.0 (12.9) 0.107 −20.4, 2.4 \n Glucose-120 (mg/dL) 96.4 (10.5) 107.4 (12.3) 0.0055 −17.9, −4.1 \n HOMA-IR \n 1.73 (1.06) 2.55(1.96) 0.162 −1.06 \n QUICK-I \n 0.358(0.022) 0.345 (0.032) 0.275 \n HOMA-B \n 95.48 (49.35) 115.86 (95.96) 0.623 −20.38 \n Insulin (µIU/ml) 7.5(3.9) 10.5 (8.2) 0.374 −10.2, 4.4 \n C-peptide (nmol/L) 11.3 (8.9) 4.1 (1.3) 0.061 −0.5, 14.9 \n SHBG (µg/ml) 3.4(2.5) 2.7 (2.2) 0.591 −2.4 \n Adiponectin (µg/ml) 5.9 (8.2) 2.6 (2.1) 0.306 −4.5 \n Resistin (ng/ml) 3.4 (2.5) 2.7 (2.2) 0.591 −2.4 Values are mean (+/− SD). Tests were performed as described in Methods . \n\nBODY.RESULTS.EFFECT OF PREDNISONE ON INDICATORS OF LIVER INJURY AND INFLAMMATION:\nPrednisone treatment did not affect at day-4 either serum ALT, AST, IL-8 or other inflammatory mediators (\nTables 9\n\n and \n\n10\n). 10.1371/journal.pone.0031516.t009 Table 9 Effects of Prednisone on liver injury indicators. Day ALT (IU/L) \n P value \n \n 95% CI \n AST (IU/L) \n P value \n \n 95% CI \n \n 0 \n 53.2 (14.4) 52.0(11.3) \n 4 \n 55.7 (15.7) 0.080 5.4, −0.4 52.2 (12.8) 0.896 3.6, −3.2 Values are mean (+/− SD). ALT and AST were determined as described in Methods . 10.1371/journal.pone.0031516.t010 Table 10 Effects of Prednisone on serum cytokines. Serum Cytokine Day-0 Day-4 \n P value \n \n 95% CI \n \n IL-8 (pg/ml) 123.1 (163.5) 159.5 (185.1) 0.353 −123, 50 \n IL-28B (pg/ml) 485.9(876.4) 610.8 (857.7) 0.120 −292, 42 \n TNF-α (pg/ml) 146.6 (268.8) 146.1 (284.4) 0.996 −275, 276 \n IL-1α (pg/ml) 80.1 (208.3) 19.2(38.3) 0.450 −119, 240 \n IL-1β (pg/ml) 92.7 (120.3) 144.8 (293.0) 0.682 −348, 244 \n IL-2 (pg/ml) 11.8 (9.8) 11.3 (9.6) 0.742 −3.1, 4.2 \n IL-4 (pg/ml) 19.3 (19.1) 10.9 (7.2) 0.329 −10.5, 27.3 \n IL-6 (pg/ml) 292.8 (445.7) 359.9 (501.4) 0.492 −282, 149 \n IL-10 (pg/ml) 67.4 (106.8) 30.8 (51.2) 0.144 −16.1, 89.4 \n IFNγ (pg/ml) 13.9 (24.9) 4.9 (4.3) 0.342 −11.9, 29.9 \n MCP-1 (pg/ml) 232.7 (126.1) 266.8 (112.8) 0.469 −139, 71 \n HGF (pg/ml) 1345 (800) 1022 (681) 0.483 −711, 1357 Values are mean (+/− SD). Serum cytokines were determined as described in Methods . \n\nBODY.RESULTS.ADVERSE EVENTS:\nThere were no serious adverse events in either treatment group. Mild or moderate adverse events unrelated to the allocated treatment occurred in 20% of subjects in the Prednisone group and in 30% of the subjects in the Pioglitazone group. There were no significant changes from baseline during treatment in safety laboratory tests in either treatment group.\n\nBODY.DISCUSSION:\nIt was not known whether Pioglitazone and Prednisone affect serum HCV genotype 4 RNA titers independently of PEG-Interferon-α2/ribavirin treatment. This Study documented for the first time that Pioglitazone (30 mg daily) decreases the serum HCV RNA titers after a 14-day treatment in overweight subjects with treatment naïve, chronic HCV genotype 4 infections. The statistical differences between the mean at day-0 (baseline) and day-14 were highly significant whether the serum HCV RNA was analyzed as IU/ml or log10 values (\nTable 3\n). Pioglitazone treatment also decreased the HCV genotype 4 titers on day-4 and day-7, suggesting a rapid effect on HCV RNA titers. The comparable mean decrease in HCV RNA for the first week and for the second week indicates a continuing effect during the 14-day treatment. It remains to be studied whether the rate of decrease of serum HCV RNA titers continues after day-14 in the Pioglitazone group. 
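The extrapolation invoked in the next sentence can be reproduced with simple arithmetic; it is a back-of-the-envelope check only and rests on the untested assumption that the rate of decline observed over the first 14 days stays constant.

```r
# Back-of-the-envelope only: assumes the day 0-14 rate of decline (Table 3)
# continues unchanged, which the study does not establish.
decline_2wk <- 5.41 - 5.24       # mean log10 HCV RNA, day 0 minus day 14
decline_2wk                      # ~0.17 log10 per 2 weeks
decline_2wk * (12 / 2)           # ~1.0 log10 if sustained to week 12
```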
If the effect of Pioglitazone on HCV genotype 4 RNA titers were to continue at the same rate as during the first 2 weeks, it would achieve a clinically significant decrease of ∼1 log10 in HCV RNA titers by week-12, an important predictive timeline for a sustained virological response in HCV treatments [16]. Given the relative safety of Pioglitazone administration [9],[17], it could provide, as a co-adjuvant, a valuable therapeutic option for difficult-to-treat HCV genotype 4-infected patients. This Study also documented for the first time that Prednisone (40 mg daily) increases the serum HCV RNA titers after a 4-day treatment of overweight HCV genotype 4 patients (Table 7). Our results with Prednisone treatment are congruent with the recently reported increased HCV replication by corticosteroids in the Huh-7 replicon system [13], but whether, in HCV genotype 4-infected patients, this reflects a direct corticosteroid effect on HCV replication, an indirect effect mediated by insulin resistance, or another mechanism is unknown. Because the Huh-7 replicon system uses de-differentiated, tumoral liver cells that have higher expression of glucose metabolism enzymes (glucose-6-phosphate 1-dehydrogenase and isocitrate dehydrogenase) and of the mitochondrial dicarboxylate carrier [18], it may not be suitable for the evaluation of the interaction between glucose homeostasis and HCV replication. To minimize potential adverse events associated with the administration of Prednisone, its use was limited to 40 mg/day for a 4-day treatment, a relatively safe and frequent standard-of-care practice for some clinical conditions [19]. It remains to be determined whether a more prolonged corticosteroid treatment in HCV-infected patients has detrimental effects on RNA titers of different HCV genotypes. The subjects' recruitment number was pre-determined by the sample size and power analysis. An appropriate sample size was utilized in this study, which achieved the primary and secondary end-points with a relatively small cohort of patients. If the sample size of the study were unnecessarily large, then the risks to the subjects would have been excessive according to the CONSORT statement [20]. An intention-to-treat analysis was conducted in patients who had received ≥1 dose of study medication; all of these subjects completed the study and had a subsequent efficacy observation. Because these randomized arms were not treatments of the HCV infection in this Study, and because, during chronic infection, the level of serum HCV RNA is in a steady state with only minor fluctuations in untreated patients [21], instead of using demanding placebo arms, we utilized randomization as well as de-identified samples to blind the investigators analyzing the outcomes. These approaches minimize biases in executing the study and in analyzing the outcomes according to the CONSORT statement [20]. Triple combination therapy of Pioglitazone/PEG-Interferon-α2/ribavirin has been used for previously treated HCV genotype 1 infection unsuccessfully in Switzerland [10] and in the USA [22], [23], or for treatment-naïve HCV genotype 4 infection successfully in Egypt [11]. It is unknown whether the beneficial effects of Pioglitazone together with PEG-Interferon-α2/ribavirin are circumscribed to HCV genotype 4 or to treatment-naïve HCV of various genotypes. 
Although, the relatively low 15 mg Pioglitazone dose used in the INSPIRED-HCV trial in Switzerland [10] could have been insufficient to achieve efficacy, higher Pioglitazone doses (30 mg or 45 mg per day) have been utilized in HCV genotype 1 infection trials together with PEG-Interferon-α2/ribavirin without improving the sustained virological response [22], [23]. These higher Pioglitazone doses (30 mg or 45 mg per day) have been successfully utilized to prevent the development of type 2 diabetes in patients with impaired glucose tolerance in the USA [8], in the treatment of HCV genotype 4 treatment naive patients together with PEG-Interferon-α2/ribavirin in Egypt [11] and in our Study. Further, the known effects of Pioglitazone and related PPARγ agonists on glucose homeostasis [8],[12] and as documented in our Study, and their effects on serum HCV RNA titers (\nTable 3\n) suggest, but does not prove, that improving insulin sensitivity lowers HCV RNA titers. This has been proposed by Harrison on the basis that patients with QUICK-I values <0.35 (equivalent to HOMA-IR>1.8) have significantly higher HCV titers than patients with QUICK-I values >0.35 [24]. Further, treating patients with hepatitis C genotype 1 and insulin resistance using metformin improves insulin sensitivity and increases SVR rate in patients who reached HOMA lower than 2 at week 24 of therapy and in women, in whom the therapy doubled the SVR rate [25]. The findings of our Study support the notion that sequential treatment with Pioglitazone and PEG-Interferon-α2/ribavirin rather than concomitant administration should be considered in the treatment of chronic HCV genotype 4 infections, as suggested by Serfaty and coworkers [26]. The beneficial effects of Pioglitazone treatment on glucose homeostasis have been well documented in patients with impaired glucose tolerance [8], and it was also reported in triple combination therapy of Pioglitazone/PEG-Interferon-α2/ribavirin in patients with HCV genotype 1 or 4 [10],[11]. In our Study, Pioglitazone treatment decreased by day-14 and Prednisone increased by day-4 fasting serum glucose as well as the 2-hr glucose after an OGTT (\nTable 4\n\n and \n\n8\n\n). However, when compared to baseline day-0 values, neither Pioglitazone treatment at day-14 nor Prednisone treatment at day-4 affected HOMA-IR; QUICK-I; and HOMA-B, surrogate indicators of insulin resistance and secretion (\nTables 4\n\n and \n\n8\n). It would probably require a more prolonged treatment with Pioglitazone and Prednisone to induce changes in these surrogate indicators of insulin resistance and secretion [26]. Elevated serum ALT and AST values are surrogate indicators of liver injury used in patient care and clinical research [9]. Pioglitazone treatment decreased at day-14 serum ALT levels and to a smaller degree serum AST levels (\nTable 5\n). Additional insights into the mechanisms of Pioglitazone on liver injury in chronic HCV infection would require assessment of liver biopsies before and after prolonged treatment since the effects on serum ALT were clinically modest at day-14. Prednisone treatment did not affect at day-4 serum ALT or AST levels (\nTable 9\n). Many experimental and clinical studies have shown that PPARγ signaling has anti-inflammatory properties in the liver [27]. Interestingly, HCV infection in Huh-7 cells and in humans inhibits PPARγ expression and function [27]. Two studies suggested therapeutic effects of bezafibrate, a non-selective agonist of PPAR α/δ/γ, during chronic HCV infection. 
Patients with HCV genotype 1 and 2 unresponsive to interferon therapy were treated for 8 weeks or 6 months with bezafibrate [28], [29]. At endpoint, HCV viral load (−0.5 log10) and ALT were statistically significantly decreased, suggesting anti-inflammatory and antiviral activities of the PPAR α/δ/γ signaling pathways in patients with chronic HCV infection. Given the non-selective nature of the PPAR α/δ/γ agonist, it is difficult to determine whether induction of PPARγ pathways occurred and played a role in the decreased HCV RNA. Pioglitazone also has shown improvement in liver injury and inflammation in patients with non-alcoholic steatohepatitis [9] and this has been attributed to the beneficial effects of Pioglitazone on insulin signaling [8]. An alternative explanation is a possible direct effect of Pioglitazone either as a PPARγ agonist or through other mechanisms on liver injury, independently of its insulin sensitizing actions [8]. Indeed, PPARγ agonists induce alternative, anti-inflammatory M2 macrophage activation [30], possibly linking insulin sensitivity with decreased liver injury in patients with HCV infection. At the molecular level, insulin resistance is promoted by a transition from an anti-inflammatory macrophage M2 state maintained by STAT6 and PPARγ to a pro-inflammatory macrophage M1 state driven by C/EBPβ and NF-κB activation and other transcription factors that play important roles in innate immunity [30]. Relevant to the management of HCV infected patients, liver fibrogenesis is expected to be ameliorated by PPARγ signaling [31] as well as by decreased inflammatory pathways arising from M2 macrophages [32] and resulting in decreased activation of hepatic stellate cells [33]. Pioglitazone increased serum IL-8 levels at day-14 when compared to baseline day-0 values (\nTable 6\n). Serum IL-8 values are elevated in HCV infection [34] and these values may reflect the effects of the HCV on il-8 gene transcription as well as its induction by TNFα [15]. We found that there was a relatively strong correlation between serum IL-8 and serum TNF-α levels for day-0 and day-14. Although it has been suggested that elevated levels of serum IL-8 may be associated with HCV resistance to Interferon-α2 therapy, the clinical or biological significance of the elevated serum IL-8 in HCV genotype 4 patients treated with Pioglitazone remains to be determined. Prednisone did not affect serum IL-8 levels at day-4 when compared to baseline day-0 values (\nTable 10\n). We found no differences between day-0 and day-14 in the Pioglitazone group and day-4 in the Prednisone group in other inflammatory markers such as serum IL-28B, TNF-α, IL-1α, IL-1β, IL-6, IL-8, IL-10, IFN-γ, MCP-1 and HGF (\nTables 6\n\n and \n\n10\n). The lack of changes in IL-28B (IFN-γ3) is relevant since genetic variation in IL-28B affects the spontaneous clearance of HCV in patients [35]. Our study has some limitations: i] there were, as expected, clinically modest decreases in HCV RNA at day-14 in the Pioglitazone group; ii] the relatively small cohort could introduce potential biases; and iii] the findings may not be applicable for patients with HCV genotype 4 in other ethnic groups It remains to be elucidated whether the effects of Pioglitazone also occur in HCV genotype 4 infections in previously treated patients. The novel findings of our Study provide the foundation for basic and clinical investigations on the molecular mechanisms responsible for the Pioglitazone-induced decrease in HCV genotype 4 RNA titers. 
Future research should characterize whether the effects of Pioglitazone on HCV genotype 4 RNA titers are mediated indirectly by insulin signaling pathways or directly by affecting HCV genotype 4 hepatocyte entry and/or replication.\n\nBODY.SUPPORTING INFORMATION:\nChecklist S1\nCONSORT Checklist. CONSORT 2010 checklist of information to include when reporting a randomised trial. (DOC) Protocol S1\nTrial Protocol. (DOC)\n\n**Question:** Compared to baseline what was the result of Prednisone (40 mg/day for 4 days) on serum ALT, AST, IL-8 or other inflammatory mediators?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
1040
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Nicotine delivery, tolerability and reduction of smoking urge in smokers following short-term use of one brand of electronic cigarettes\n\n ABSTRACT.BACKGROUND:\nThis randomized, partially single-blinded, 6-period crossover clinical study of adult smokers compared the nicotine pharmacokinetics, impacts on smoking urge and tolerability of various formulations of one brand of e-cigarettes with that of a tobacco cigarette.\n\nABSTRACT.METHODS:\nFive e-cigarettes with different e-liquid formulations containing 1.6 % and 2.4 % nicotine and a conventional tobacco cigarette were randomized among 24 subjects under two exposure sessions consisting of a 30-min controlled and a one-hour ad lib use period to assess plasma nicotine levels, impacts on smoking urge and adverse events. The 30-min controlled use session comprised an intensive use of the e-cigarettes with a total of 50 puffs taken every 30 s for comparison to a single conventional cigarette having a typical machine-measured nicotine yield (~0.8 mg). Ad lib product use conditions provided insight into more naturalistic product use behaviors and their accompanying smoking urge reductions. Adverse events (AEs) were assessed by the Principal Investigator.\n\nABSTRACT.RESULTS:\nSignificant (p < 0.05) increases in plasma nicotine concentrations occurred within 10 min of controlled e-cigarette use and significant (p < 0.001) reductions from baseline smoking urge were observed within 5 min. E-cigarette and cigarette nicotine plasma levels were comparable for up to one hour of use. After both sessions (90 min), nicotine exposure was the highest for the cigarette, with all e-cigarettes showing 23 % to 53 % lower plasma concentrations. During controlled use, peak reduction in smoking urge for e-cigs occurred later than for the cigarette. After completion of both sessions, significant smoking urge reduction persisted for most of the tested e-cigarettes, albeit at levels lower than that provided by the tobacco cigarette. Nicotine content, vehicle differences, and the presence of menthol did not significantly affect smoking urge reduction by the e-cigarettes. No subjects were discontinued due to AEs. The most frequently reported AEs events included cough, throat irritation, headache, and dizziness.\n\nABSTRACT.CONCLUSIONS:\nBlood plasma nicotine levels obtained from short-term use of e-cigarettes containing 1.6 % and 2.4 % nicotine were significant, but lower than those of conventional tobacco cigarettes, yet the reduction in craving symptoms were broadly comparable. The types of AEs were consistent with other research studies of longer duration that have reported that use of e-cigarettes by adult smokers is well-tolerated.\n\nABSTRACT.TRIAL REGISTRATION:\nhttp://ClinicalTrials.gov identifier: NCT02210754. 
Registered 8 August 2014.\n\nBODY.BACKGROUND:\nSmoking remains the leading preventable cause of premature death in the United States [1]. Electronic cigarettes (e-cigarettes) are rapidly becoming a popular alternative to cigarette smoking worldwide and are garnering significant attention as potentially reduced-exposure replacements for conventional cigarettes (cigarettes) [2–6]. E-cigarettes consist of a battery, heating component, and a cartridge containing tobacco-derived nicotine in a solution composed of glycerin and/or propylene glycol (PG), and flavorings. Upon activation, the heating element heats the solution to generate an aerosol that is inhaled by the consumer in a manner that mimics smoking. Pharmacokinetic studies with early-generation e-cigarettes found that they delivered markedly lower levels of plasma nicotine than conventional cigarettes [7, 8]. More recent studies have reported that experienced subjects using later-generation e-cigarettes containing 9 to 24 mg/mL of nicotine attain significant increases in plasma nicotine concentrations over baseline values that can be similar to those obtained from conventional cigarette smoking [9–11]. These results suggest that an acclimation to the product may be necessary for naïve smokers to become familiar with the characteristics of the specific e-cigarettes in order to effectively use them in a subjectively enjoyable manner. Moreover, these studies also demonstrated a reduction in smoking urge or abstinence symptoms following e-cigarette use, even for products that deliver less nicotine than conventional tobacco cigarettes. This is quite consistent with a perspective that the substantial sensory and behavioral aspects of cigarette smoking that are mimicked by e-cigarettes may provide meaningful relief of the cigarette cravings commonly reported by abstinent smokers. Surveys and clinical studies evaluating the impacts, tolerability and adverse events (AEs) associated with e-cigarette use suggest that they are generally well-tolerated following short-term use [1, 9, 12–16]. Commonly reported AEs include symptoms such as mouth and throat irritation, light-headedness, dizziness and dry cough. The primary objective of this study was to examine the nicotine blood plasma levels and smoking urge impacts of various formulations of one brand of e-cigarette with that of a tobacco cigarette. The tested products contained different flavorings, aerosol forming excipients (i.e., propylene glycol, glycerin) and nicotine levels. The secondary objectives were to assess the tolerability and any adverse events associated with the study products following short-term use under intensive controlled use conditions, as well as under more natural ad-lib use conditions.\n\nBODY.METHODS.PARTICIPANTS:\nThe study protocol and the informed consent forms were approved by Chesapeake IRB, Columbia, MD. A total of 107 potential subjects were recruited from the Lincoln, NE area using standard advertising methods (i.e., print and radio advertisements) and from a database of subjects who had previously participated a clinical research study or who had expressed interest in participating in a study. All potential subjects were provided details regarding the study and written informed consent was obtained prior to completion of any study procedures. 
Sixty-one subjects were excluded for not meeting the predefined inclusion/exclusion criteria, while five subjects declined to participate prior to enrollment and three subjects were excluded because the study had reached the recruitment target of 38 eligible subjects. The 38 subjects meeting the eligibility criteria participated in a 7-day at-home lead-in period during which they used and became familiar with two of e-cigarette study products (non-menthol and menthol, 2.4 % nicotine with glycerin). All subjects who participated in the at-home lead-in period were allowed to check in at the start of clinical conduct. Two subjects chose not to participate further after the lead-in period and one subject failed further screening requirements at check-in. A final study population of 24 subjects were enrolled into the study and randomized into one of six product usage sequences with four subjects per sequence. One subject withdrew consent after completion of the first product use on Day 1 of the trial due to a personal reason (family emergency). All subjects participating in the study from the time of the lead-in period were paid for their participation. The main criteria for inclusion were as follows: healthy adult male and female smokers, 21 to 65 years of age, inclusive; smoker for at least 12 months and currently smoked an average of 10 or more manufactured cigarettes per day (no restriction on brand-style); positive urine cotinine at screening (≥ 500 ng/mL); and exhaled carbon monoxide CO > 10 ppm at screening. Exclusion criteria included: history or presence of clinically significant mental or physical health conditions; females who were pregnant or breastfeeding; high blood pressure; body mass index < 18 kg/m2 or > 40 kg/m2; acute illnesses (e.g., upper respiratory infection, viral infection) requiring treatment within 2 weeks prior to check-in; use of prescription smoking cessation treatments, anti-diabetic or insulin drugs or medications known to interact with cytochrome P450 2A6; positive urine screen for alcohol or drugs of abuse; and self-reported puffers (i.e., smokers who draw smoke from the cigarette into the mouth and throat but do not inhale). Subjects who had used any tobacco- or nicotine-containing products other than manufactured cigarettes or e-cigarettes within 28 days of in-clinic product use were also excluded.\n\nBODY.METHODS.PRODUCTS TESTED:\nA rechargeable version of an e-cigarette that is currently sold in retail outlets throughout the United States was used in this study. The rechargeable e-cigarette consists of a battery segment and a cartomizer segment comprising the heating unit and a liquid reservoir which can be separated from the battery for recharging or replacement when the e-liquid is depleted. The battery operates at a voltage of 3.7 volts (nominal) and the resistance of the heating element is approximately 3 ohms. The maximum operating temperature is dependent on both the state of reservoir fluid fill and on the manner of use and was not recorded in this study. Two commercial e-cigarette products that contained 16 mg/mL (1.6 %) USP grade nicotine were used in this study. As well, three non-commercial products that contained 24 mg/mL (2.4 %) USP grade nicotine were used in this study to evaluate various product characteristics considered important to further product development. 
In addition to nicotine, the e-cigarettes used in this study contained USP grade glycerin and/or propylene glycol (as described below), distilled water (<20 %), citric acid (<1.0 %) and natural or artificial flavors (<10 %). The nicotine yield of the conventional cigarette used in the study was approximately 0.8 mg per cigarette [17]. The study products included:Product A: Classic Tobacco flavored e-cigarette (2.4 % nicotine, ~75 % glycerin)Product B: Classic Tobacco flavored e-cigarette (2.4 % nicotine, ~50 % glycerin/~20 % propylene glycol)Product C: Menthol flavored e-cigarette (2.4 % nicotine, ~75 % glycerin)Product D: Classic Tobacco flavored e-cigarette (1.6 % nicotine, ~75 % glycerin)Product E: Classic Tobacco flavored e-cigarette (1.6 % nicotine, ~50 % glycerin/~20 % propylene glycol)Product F: Tobacco Cigarette\n\nBODY.METHODS.STUDY DESIGN:\nThis was a randomized, partially single-blinded, six-period crossover study conducted at a single independent research center (Celerion, Lincoln, NE). Twenty-four subjects were randomly selected from the pool of 38 subjects who participated in the at-home lead-in period were enrolled into the testing phase of the trial and randomized to assigned product sequences. The subjects checked into the clinic on Day −2 and abstained from using of nicotine-containing products until product use on Day 1. Days 1, 3, 5, 7, 9, and 11 were designated product use days while Days −1, 2, 4, 6, 8, and 10 were designated wash-out days. The subjects were housed at the test site from the time of check-in through completion of study events on Day 11. The clinic staff monitored the subjects during the confinement period to ensure that no illicit nicotine or tobacco products were used. Study investigators were not blinded, but the subjects were as the all the e-cigarette products appeared the same. The menthol product, however, was easily discernible due to taste. Previous studies have demonstrated that a lack of familiarity with e-cigarette products may result in low nicotine intake with their use [7, 18]. In order to allow study participants to become familiar with the e-cigarette devices, subjects were instructed on the appropriate use of the products and were required to demonstrate the appropriate use to clinic staff. Each subject was then provided with two units of the menthol and non-menthol 2.4 % nicotine products containing glycerin for at-home use prior to the start of the study and instructed to use the products each day during the 7-day lead-in period. Two types of exposures were utilized in the study on each product use day: a controlled use period followed by an ad lib use period. Similar 2-stage designs have been informative in prior studies evaluating the nicotine delivery and subjective effects of e-cigarettes [5–9, 19]. 
Enrolled subjects were randomized into one of six product usage sequences (Table 1) with four subjects per sequence.Table 1Summary of study demographics and FTCD scores by study product sequence and overallTrait/TestCategory/StatisticStudy product sequenceABFCED (N = 4)BCADFE (N = 4)CDBEAF (N = 4)DECFBA (N = 4)EFDACB (N = 4)FAEBDC (N = 4)Overall (N = 24)SexFemale2 (50 %)2 (50 %)2 (50 %)2 (50 %)2 (50 %)2 (50 %)12 (50 %)Male2 (50 %)2 (50 %)2 (50 %)2 (50 %)2 (50 %)2 (50 %)12 (50 %)RaceAmerican Indian/Alaska Native1 (25 %)0 (0 %)0 (0 %)0 (0 %)0 (0 %)0 (0 %)1 (4 %)Black or African American0 (0 %)1 (25 %)0 (0 %)2 (50 %)0 (0 %)0 (0 %)3 (13 %)White3 (75 %)3 (75 %)4 (100 %)2 (50 %)4 (100 %)4 (100 %)20 (83 %)EthnicityHispanic or Latino1 (25 %)0 (0 %)0 (0 %)0 (0 %)0 (0 %)0 (0 %)1 (4 %)Not Hispanic or Latino3 (75 %)4 (100 %)4 (100 %)4 (100 %)4 (100 %)4 (100 %)23 (96 %)Age (yrs)Mean38.340.538.335.334.045.838.7SD12.0415.7015.444.039.496.1310.77Height (cm)Mean167.25175.75168.75172.75167.00171.75170.54SD3.40310.2436.1313.7758.2069.8117.331Weight (kg)Mean84.179.270.874.780.083.978.8SD16.5225.896.1513.1312.9314.1414.91BMI (kg/m2)Mean30.05025.19824.79825.01528.94828.37527.064SD6.04395.76220.68543.84646.38104.32554.8512Fagerström Test for Cigarette Dependence ScoreMean4.05.35.06.04.07.85.3SD2.160.51.412.162.162.632.18Median3.55.04.55.54.58.55.0Minimum2544141Maximum767961010 Given the wide variability in smoking behaviors, the controlled use period of the study was intended to provide some degree of standardization of the nicotine \"dose\" associated with each of the study products to better understand potential influences of different vehicles, flavor characteristics and nicotine content on measured parameters. As such, the controlled use period of the study consisted of 50 puffs of the assigned e-cigarette product (5-s puffs at 30-s intervals, approximately 24.5 min of use) and smoking one conventional tobacco cigarette (30-s intervals with the subjects' normal puff duration, approximately 4.5 min of use). Fifty e-cigarette puffs was selected as an optimal controlled \"dose\" as it approximated the dose of nicotine delivered by the cigarette used in the study (~0.8 mg) based on machine yields of the e-cigarettes determined previously using a standardized Canadian Intense puffing protocol [20, 21]. Thus, this controlled \"dose\" period was intended to reflect intensive usage of the e-cigarettes for comparison to the tobacco cigarette used in the study. During the controlled use period, subject puff counts were monitored by the clinical staff and all e-cigarettes were weighed before and after use in order determine the amount of solution consumed. Further, evaluation under the ad lib use conditions provided plasma nicotine levels under uncontrolled, natural use conditions and insights into product use behaviors accompanied by subjective self-assessments of smoking urge. During ad lib use, subjects assigned to an e-cigarette or tobacco cigarette product were allowed to use the product as desired for the entire hour (use of the assigned e-cigarette products in an unrestricted manner or smoking as many tobacco cigarettes as they chose) and subjects were responsible for maintaining their own puff counts. All e-cigarette products were weighed before and after use. The ad lib use period was conducted immediately following the end of the controlled product use session. 
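As a purely illustrative aside (the study's randomization was performed per protocol, not with the code below), an even allocation of 24 subjects across the six product-use sequences reported in Table 1 could be sketched in R as follows; the seed and subject labels are arbitrary assumptions.

```r
# Illustrative sketch only: allocate 24 subjects evenly across the six product-use
# sequences reported in Table 1 (4 subjects per sequence). Not the study's procedure.
set.seed(2014)                                   # arbitrary seed
sequences <- c("ABFCED", "BCADFE", "CDBEAF",
               "DECFBA", "EFDACB", "FAEBDC")
subjects  <- sprintf("S%02d", 1:24)              # hypothetical subject IDs
allocation <- data.frame(subject  = sample(subjects),
                         sequence = rep(sequences, each = 4))
table(allocation$sequence)                       # confirms 4 subjects per sequence
```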
Blood samples for plasma nicotine determinations, smoking urge assessments, blood pressure, pulse rate, and exhaled CO measurements were also obtained at scheduled time points on each product use day.\n\nBODY.METHODS.PHARMACOKINETICS (PK) - PLASMA NICOTINE:\nBlood samples for the measurement of plasma nicotine concentrations were taken by direct venipuncture at 10 min prior to, and at 5, 10, 15, 20, 25, 30, 45, 60, 75, and 90 min following the start of the controlled product usage on Days 1, 3, 5, 7, 9, and 11. Plasma nicotine was analyzed by LC-MS/MS using validated analytical methods with appropriate quality controls in accordance with applicable FDA Good Laboratory Practice regulations (Title 21 CFR Part 58). The limit of quantification for nicotine was 0.200 ng/mL. The following PK parameters were calculated:Cmax0–30 - maximum observed concentration from time zero to 30 min.AUC0–30 - area under the concentration-time curve from time zero to 30 min.tmax0–30 - time of the maximum concentration from time zero to 30 min.AUC30–90 - area under the concentration-time curve from 30 to 90 min.C90 - maximum observed concentration at 90 min.\n\nBODY.METHODS.PHARMACODYNAMICS – SMOKING URGE:\nSmoking urge was assessed using a simple and subjective 100 mm visual analog scale (VAS). Various forms of VAS have been used in e-cigarette studies as a tool for obtaining various subjective effects data associated with e-cigarette use and measuring nicotine and smoking abstinence symptom suppression [6–10]. Participants were asked to rate \"how strong is your urge to smoke right now?\" by placing a line through a 100 mm line where far left indicated: 'not at all' and far right indicated: 'extremely'. Assessments occurred within 1 min prior to the −10 (pre-product use), 5, 15, 25, 30, 60, and 90-min PK blood draws. The smoking urge change-from-baseline was calculated as the difference between the pre-use and post-use smoking urge result for each tested product. The following pharmacodynamic (PD) parameters were calculated:Emax0–30 - Maximum smoking urge change-from-baseline from time zero to 30 min.Emaxreduction0–30 - Maximum smoking urge reduction from baseline from time zero to 30 min.AUEC0–30 - Area under the effect curve (smoking urge change-from-baseline) from time zero to 30 min.tEmax0–30 - Time of the maximum smoking urge change-from-baseline from time zero to 30 min.AUEC30–90 - Area under the effect curve (smoking urge change-from-baseline) from 30 to 90 min.E90 - Observed smoking urge change-from-baseline at 90 min.\n\nBODY.METHODS.TOLERABILITY AND ADVERSE EVENTS (AES):\nTolerability evaluations included assessments of AEs, vital signs and concomitant medications. AEs reported by the subjects or observed by the clinic staff were assessed for severity (mild, moderate, or severe), as serious or not serious, and relationship to the study products (unrelated, unlikely, possible, probable, or definite) by the Principal Investigator. A study product use-emergent AE was defined as an AE that started or intensified at the time of or after study product usage. An AE that occurred during the washout period between study products was considered study product use-emergent for the last study product given. All reported AEs were coded with Medical Dictionary for Regulatory Activities (MedDRA®), Version 17.0. 
AEs were recorded by frequency by study product and the number of subjects experiencing product use-emergent AEs.\n\nBODY.METHODS.DATA ANALYSES:\nNon-compartmental PK and smoking urge parameters were calculated using Phoenix® WinNonlin® Version 6.3 (Certara, Princeton, NJ) from the individual nicotine concentration and smoking urge change-from-baseline data. All statistical summarizations and comparisons were calculated using SAS® Version 9.3 (SAS, Cary, NC). Analyses of variance were performed on the plasma nicotine AUC0–30, AUC30–90, Cmax0–30, and C90 parameters, and a Wilcoxon Signed Rank Test was used for the tmax0–30 parameter to assess differences between the e-cigarettes and the tobacco cigarette. Usual brand cigarette flavor (menthol or non-menthol) was included in each of the analyses as a covariate to account for any impact that flavor preference might have had on the results. Repeated measures ANOVA with simple contrasts were used to compare the maximum observed concentrations of plasma nicotine within the first 30 min and at 90 min (Cmax0–30 and C90) to the pre-product use concentration. The smoking urge parameters AUEC0–30, AUEC30–90, Emax0–30, E90, and tmax0–30 were compared using methods similar to those used in the PK analyses. Differences were considered statistically significant at an alpha level of 5 %.\n\nBODY.RESULTS AND DISCUSSION.PARTICIPANT CHARACTERISTICS:\nOf the 24 subjects who were enrolled into the testing phase of the study and randomized to study product sequences, 23 subjects completed the study and one subject withdrew due to a family emergency. A summary of the subjects' demographics together with the results of the Fagerström Test for Cigarette Dependence (FTCD) scores [22, 23] for all study participants by study product sequence and overall is presented in Table 1.\n\nBODY.RESULTS AND DISCUSSION.PRODUCT USE:\nThe mean pre-to-post use product weight differences for the e-cigarettes and the mean puffs taken from all products during each period of the study are provided in Table 2. During controlled product administration, all subjects were to take the same number of puffs (50) from the e-cigarette products with a defined inhalation pattern (5-s puffs every 30 s) while monitored by the clinical staff in an attempt to standardize nicotine intake. The resulting mean weight differences were comparable across study products (0.2238 - 0.2570 g), with the highest and lowest estimated nicotine delivery from the 1.6 % nicotine products. During the ad lib product use period, subjects were free to puff as often as they chose with no limitation on puff duration. As a result, the number of puffs from each of the e-cigarette products varied widely across subjects during the ad lib use period. However, the mean puff counts were fairly comparable across products, averaging between 49.5 and 60.3, with the highest and lowest mean puff counts again noted with the 1.6 % nicotine products. Product weight differences followed the same pattern. 
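As a rough, purely illustrative cross-check (not an analysis reported by the study), the mean e-liquid weight differences can be converted into an approximate mass of nicotine aerosolized; the liquid density used below is an assumed typical value for glycerin/PG mixtures, not a measured property of these products, and aerosolized nicotine is not equivalent to nicotine absorbed.

```r
# Rough illustration only. Density (~1.1 g/mL) is an assumption, and
# aerosolized nicotine is not the same as absorbed nicotine.
liquid_g   <- c(ctrl_2.4 = 0.2251, ctrl_1.6 = 0.2238)  # mean weight differences, controlled use (Table 2)
conc_mg_ml <- c(24, 16)                                # nominal nicotine concentrations (mg/mL)
density    <- 1.1                                      # assumed liquid density (g/mL)
round(liquid_g / density * conc_mg_ml, 1)              # approx. mg nicotine aerosolized per session
```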
Hence, there was no clear indication in either period that subjects compensated during use of the lower nicotine-containing products by taking more or deeper puffs in an attempt to self-administer more nicotine.Table 2Product use summaryControlled Use Period\nAd Lib Use PeriodE-CigarettePuffsPre-to-Post Use Weight Difference (g)PuffsPre-to-Post Use Weight Difference (g)2.4 % nic + Gly50.0 (50–51)0.2251 ± 0.040852.3 (5–128)0.1467 ± 0.0843\nN = 232.4 % nic + Gly/PG50.0 (50–51)0.2349 ± 0.063355.4 (4–136)0.1445 ± 0.0880\nN = 232.4 % nic + Gly + menthol50.0 (50–51)0.2421 ± 0.047758.0 (8–140)0.1604 ± 0.0827\nN = 231.6 % nic + Gly50.0 (50–51)0.2238 ± 0.039960.3 (8–112)0.1738 ± 0.0937\nN = 231.6 % nic + Gly/PG50.0 (50–51)0.2570 ± 0.043349.5 (3–118)0.1194 ± 0.0675\nN = 23PuffsCigarettes SmokedPuffsCigarettes SmokedTobacco Cigarette10.5 (9–13)1.0 (NA)38.6 (16–103)3.6 (2–7)\nN = 24Values for puff count and cigarettes smoked are presented as mean (range). Values for the pre-to-post weight differences are presented as mean ± SD\nnic nicotine, Gly glycerine, PG propylene glycol The mean differences in product weights were observed to be smaller for each e-cigarette (~22 % - 54 %) following ad lib product use compared to the controlled product administration despite the subjects having taken similar numbers of puffs, or more, during ad lib product use. This was likely due to shorter puff durations during ad lib product use. For the tobacco cigarettes, subjects smoked only a single cigarette during the controlled product administration, with an average of 10.5 puffs per cigarette taken. During the ad lib phase subjects smoked an average of 3.6 cigarettes, with approximately the same number of puffs taken per cigarette.\n\nBODY.RESULTS AND DISCUSSION.NICOTINE PHARMACOKINETICS AND BLOOD PLASMA LEVELS:\nThe mean plasma nicotine concentration-time profiles are presented in Fig. 1. Baseline nicotine concentrations were below the limit of quantification in the majority of subjects and mean baseline nicotine concentrations were comparable across study products, all representing less than half the limit of quantification of 0.200 ng/mL. During the controlled use period, mean concentrations increased peaked immediately after use of the single tobacco cigarette (p < 0.05 compared to baseline) and then steadily decreased until reaching a minimum at the end of the period. For all e-cigarettes, nicotine plasma absorption was slower, with mean concentrations steadily increasing with continued product use and peaking at the end of the period. Usage of the study e-cigarettes resulted in statistically significant (p < 0.05) increases from baseline in nicotine concentration as soon as 5 min following the start of product use with the exception of the 1.6 % nicotine product with glycerin, which also reached statistical significance after 10 min of use. Plasma nicotine concentrations steadily increased during the one-hour ad lib use of the tobacco cigarettes, whereas ad lib use of the e-cigarettes resulted in plasma nicotine concentrations that increased during the first 45 min and then plateaued during the last 15 min.Fig. 1Mean Plasma Nicotine Concentration Versus Time The summary of nicotine plasma PK parameters is presented in Table 3. 
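The parameters in Table 3 were derived with Phoenix WinNonlin as described in Methods; purely for orientation, a minimal sketch of the linear trapezoidal rule underlying an AUC0–30 calculation is given below, using hypothetical concentrations rather than study data.

```r
# Linear trapezoidal AUC from 0 to 30 min; the concentrations are hypothetical.
t_min <- c(0, 5, 10, 15, 20, 25, 30)    # baseline carried to t = 0, then nominal draw times (min)
conc  <- c(0.1, 4, 8, 11, 14, 16, 17)   # hypothetical plasma nicotine (ng/mL)
t_hr  <- t_min / 60                     # Table 3 reports AUC in ng*hr/mL
auc_0_30  <- sum(diff(t_hr) * (head(conc, -1) + tail(conc, -1)) / 2)
cmax_0_30 <- max(conc)                  # Cmax0-30 for the same hypothetical profile
c(AUC0_30 = auc_0_30, Cmax0_30 = cmax_0_30)
```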
The controlled e-cigarette puffing regimen provided peak (Cmax30) and overall (AUC0-30) nicotine exposures comparable to the tobacco cigarette for the two non-menthol 2.4 % nicotine products and the 1.6 % nicotine product containing glycerin and PG (all differences within 15 % of the tobacco cigarette). However, use of 1.6 % nicotine product containing only glycerin provided significantly lower peak (35 %) and overall (39 %) exposures despite having the highest mean puff counts of any of the e-cigarette products. Use of the menthol product (2.4 % nicotine with glycerin) also yielded a significantly lower overall exposure (16 %) compared to the tobacco cigarette, but a comparable peak concentration. Not surprisingly, the use pattern of the e-cigarette during the controlled use period resulted in a significantly longer time to peak nicotine concentration (Tmax0–30) than for the single tobacco cigarette.Table 3Summary of plasma nicotine pharmacokinetic parameters and statistical comparisons to the tobacco cigaretteE-CigarettesPharmacokinetic Parameters2.4 % nic + Gly2.4 % nic + Gly/PG2.4 % nic + Gly + menthol1.6 % nic + Gly1.6 % nic + Gly/PGTobacco Cigarette\nN = 23\nN = 23\nN = 23\nN = 23\nN = 23\nN = 24Cmax0–30 (ng/mL)17.4 ± 5.9718.1 ± 6.4715.3 ± 5.1610.3 ± 3.70*15.1 ± 4.6115.8 ± 8.64\np-value0.12420.05080.88580.00020.7776Tmax0–30 (hr)0.50 (0.33, 0.55)0.50 (0.33, 0.52)0.50 (0.33, 0.52)0.50 (0.33, 0.61)0.50 (0.33, 0.62)0.09 (0.08, 0.42)\np-value<0.0001<0.0001<0.0001<0.0001<0.0001AUC0–30 (ng*hr/mL)4.7 ± 1.844.9 ± 1.754.1 ± 1.633.0 ± 1.184.3 ± 1.394.9 ± 1.79\np-value0.76360.84430.0406<0.00010.0928AUC30–90 (ng*hr/mL)22.4 ± 5.6124.6 ± 7.9921.6 ± 5.0814.7 ± 5.1519.8 ± 4.7222.1 ± 5.98\np-value0.84630.06020.6319<0.00010.0756C90 (ng/mL)19.7 ± 5.7222.4 ± 7.6519.4 ± 5.8013.7 ± 5.9816.8 ± 4.4429.2 ± 10.86\np-value<0.0001<0.0001<0.0001<0.0001<0.0001Tmax0–30 is presented as median (minimum, maximum), all other values are presented as arithmetic mean ± SD. Statistical significance is based on the differences in least squares means between groups. P-values represent the comparison to the tobacco cigarette\nnic nicotine, Gly glycerine, PG propylene glycol During the ad lib use period, overall nicotine exposure (AUC30–90) was comparable (differences within 11 %) between the tobacco cigarette and all e-cigarette products with the exception of the 1.6 % nicotine product with only glycerin, which provided a 33 % lower overall exposure. However, the nicotine concentration at the end of the ad lib use period was significantly higher following use of the tobacco cigarette compared to the e-cigarettes, with concentrations ranging from 23 % to 53 % lower after 60 min of ad lib use. Overall, the exposure parameters among the e-cigarette products tended to be higher for the 2.4 % nicotine products, higher for the products containing both glycerin and PG, and lower for the product containing menthol compared to the other 2.4 % nicotine products.\n\nBODY.RESULTS AND DISCUSSION.PHARMACODYNAMICS: EFFECTS ON URGE TO SMOKE:\nThe mean smoking urge change from baseline-time profiles are presented in Fig. 2. Mean baseline smoking urge values across all test products were comparable, with the mean visual assessment scale (VAS) responses ranging from 62 to 68 out of 100 (mm).Fig. 
2Mean Smoking Urge Change from Baseline versus Time (Note: More Negative Values Indicate a Stronger Urge Reduction) An immediate statistically significant (p < 0.001) reduction in smoking urge was observed at 5 min following the start of controlled use of each study product. Following use of a single tobacco cigarette, the maximum reduction in smoking urge was observed at 5 min, approximately corresponding to the end of use of that study product, and then increased during the remainder of the controlled use period. Continuous use of the e-cigarette products throughout the controlled use period resulted in a maximal reduction in smoking urge also approximately corresponding to the end of product use. During the ad lib use period, use of the tobacco cigarette resulted in a steady decrease in smoking urge, peaking at the final time point, while the response with the e-cigarettes peaked 30 min into ad lib use and stabilized during the final 30 min. Table 4 summarizes the change in smoking urge PD parameters. No statistically significant differences were found in the maximal smoking urge reduction parameters Emax0–30 and Emaxreduction0–30 (all differences within 11 %) or the overall smoking urge reduction parameter AUEC0-30 (all differences within 21 %) between the use of a single tobacco cigarette and each of the e-cigarettes during the controlled use period. The mean time to the maximal reduction in smoking urge was shorter for the tobacco cigarette (15 min) than each of the e-cigarettes (25 – 30 min), but only significantly so for both of the 1.6 % nicotine products and the 2.4 % nicotine product with menthol.Table 4Summary of change in smoking urge parameters and statistical comparisons to the tobacco cigaretteE-CigarettesSmoking Urge Parameters2.4 % nic + Gly2.4 % nic + Gly/PG2.4 % nic + Gly + menthol1.6 % nic + Gly1.6 % nic + Gly/PGTobacco Cigarette\nN = 23\nN = 23\nN = 23\nN = 23\nN = 23\nN = 24Emax0–30 (mm)−41.9 ± 26.56−44.7 ± 26.98−37.0 ± 30.25−40.2 ± 32.47−40.7 ± 24.25−41.5 ± 27.43\np-value0.92850.64270.30230.69720.7450Emaxreduction0–30 (mm)−42.2 ± 26.01−44.7 ± 26.98−39.1 ± 25.86−41.4 ± 30.40−40.7 ± 24.25−41.7 ± 27.08\np-value0.94890.65050.50560.84800.7079AUEC0–30 (hr*mm)−14.1 ± 10.56−15.5 ± 10.39−12.2 ± 10.75−13.9 ± 12.09−13.3 ± 8.29−15.4 ± 12.67\np-value0.40540.89120.08820.35900.2225TEmax0–30 (hr)0.41 (0.07, 0.50)0.41 (0.07, 0.50)0.41 (0.08, 0.50)0.42 (0.08, 0.50)0.49 (0.24, 0.50)0.25 (0.07, 0.51)\np-value0.17640.09480.00140.00120.0009AUEC30–90 (hr*mm)−44.3 ± 35.39−50.9 ± 33.64−42.7 ± 38.24−46.2 ± 40.23−47.7 ± 34.16−46.9 ± 30.14\np-value0.66790.52750.50460.89190.9395E90 (mm)−37.0 ± 30.69−42.5 ± 29.85−34.3 ± 34.03−37.4 ± 35.08−38.5 ± 30.12−52.2 ± 26.33\np-value0.00600.09210.00100.00660.0122TEmax0–30 is presented as median (minimum, maximum), all other values are presented as arithmetic mean ± SD. Statistical significance is based on the differences in least squares means between groups. P-values represent the comparison to the tobacco cigarette\nnic nicotine, Gly glycerine, PG propylene glycol During the ad lib use period, the overall smoking urge reduction (AUEC30-90) achieved with use of the e-cigarettes were comparable to that of the tobacco cigarette, with all differences less than 10 %. However, at the end of the ad lib period, use of the tobacco cigarette resulted in a 19 % to 34 % greater relief of smoking urge (E90) when compared to the e-cigarettes, with statistically significant differences for all except the 2.4 % nicotine product containing both glycerin and PG. 
The smoking urge reduction appeared comparable among the e-cigarette products with the exception of the 2.4 % nicotine product with menthol which tended to provide a somewhat lower level of relief compared to the other four products.\n\nBODY.RESULTS AND DISCUSSION.TOLERABILITY AND REPORTED ADVERSE EVENTS:\nDuring the course of the study, a total of 38 subjects were exposed to one or more of the study products. There were no serious adverse events reported and no subjects were discontinued due to AEs. Mild product-use-emergent AEs were reported by 18 of 38 subjects provided a study product (including the lead-in period). The number of subjects reporting AEs was similar among products, ranging from 3 to 10 subjects each, with slightly fewer subjects experiencing AEs following use of the menthol-flavored product. The most frequent AE was cough, reported 20 times by 11 subjects (more commonly with use of an e-cigarette product than the tobacco cigarette), followed by throat irritation (8 reports by 5 subjects) and headache (6 reports by 5 subjects), and dizziness (5 reports by 4 subjects). All AEs resolved without sequelae. The observed acute effects of the study products on blood pressure, heart rate and CO levels were previously reported by the authors under a separate publication [24] where it was noted that heart rate and systolic and diastolic blood pressure were significantly elevated after the use of the tobacco cigarette, but the elevation was less after use of most of the e-cigarettes. Furthermore, it was also observed that the use of the e-cigarettes produced no increase the exhaled CO levels, whereas the cigarette significantly increased the exhaled CO more than eight (8) times above the baseline.\n\nBODY.CONCLUSIONS:\nThe key objectives of this study were to examine the nicotine blood plasma levels and smoking urge impacts of various formulations of one brand of e-cigarette for comparison to a conventional tobacco cigarette, and to assess the tolerability and adverse events associated with the study products following short-term use under intensive and naturalistic use conditions. The study design and results are not intended to support the potential for e-cigarettes as harm-reduction products. While not all puffing parameters can be controlled across all subjects, a product use that includes the same number of puffs, puff duration, and puff interval as utilized in this study allows for standardization of nicotine intake to the extent possible in order to accurately characterize uptake and reduction in smoking urge. The small difference in pre-to-post use weight differences across the e-cigarette products supports this. When compared to a tobacco cigarette, the nicotine PK following a controlled, 50-puff use of the e-cigarettes was characterized by slower absorption than from a single tobacco cigarette, but comparable maximal and overall nicotine exposures for all but the 1.6 % nicotine product with glycerin. Analysis of the mean maximum plasma concentrations attained during the controlled use period showed that the non-menthol e-cigarettes containing 2.4 % nicotine exceeded that of the tobacco cigarette (15.8 ng/mL at approximately 5 min) with approximately 25 min of use (17.8 to 18.1 ng/mL). This observation is most likely attributable to the rather intensive e-cigarette puffing regimen (one puff every 30 s) imposed during the controlled use interval when compared to the single tobacco cigarette regimen (average 10.5 puffs total). 
However, it also suggests that under an intensive puffing scenario, e-cigarettes are capable of delivering similar amounts of nicotine as a tobacco cigarette. The continuous rise in nicotine concentrations achieved with use of the e-cigarettes during the controlled use period appeared to be matched by a consistent decrease in smoking urge, though at the end of the 30-min controlled use period the urge reduction was comparable across those products. Smoking a single cigarette during the controlled use period produced a predictable nicotine concentration-time curve, with a rapid peak nicotine concentration followed by a gradual elimination. The rise and fall of nicotine concentration was matched by the smoking urge response, with a rapid decrease in urge followed by a gradual return toward baseline. While the time to maximum urge reductions was significantly shorter for the tobacco cigarette during the controlled use period compared to each of the e-cigarettes, there were no significant differences in the maximal or overall urge reduction achieved between the two types of products. Ad lib use of the study products can provide insight into use patterns and nicotine concentrations that will allow consumers to satisfy their smoking urge and provide an overall satisfying product use experience. As should be expected, individual use of the products as measured by puff counts varied widely and followed the amount of nicotine solution that was consumed during use as measured by pre-to-post use product weight difference. Use of the e-cigarettes during the ad lib use period resulted in a relatively small increase in nicotine concentration during the first 45 min, followed by a plateau during the last 15 min. In contrast, use of the cigarette during the ad lib use period resulted in a continuous increase in nicotine concentration and continued reduction in smoking urge through the final time point. Following ad lib use of all products for 60 min, the tobacco cigarette yielded a significantly higher nicotine concentration (C90) compared to each of the e-cigarettes. The higher nicotine concentration coincided with a greater reduction in smoking urge at the end of the ad lib use period (E90) for the tobacco cigarette compared to all e-cigarettes, and significantly greater than all except the 2.4 % nicotine product containing glycerin and PG. However, there was no overall impact on urge as assessed by AUEC30–90. This was likely due to the relatively short duration of the ad lib use. Indeed, as the smoking urge response appeared to have reached a plateau with the e-cigarettes and continued to increase with the cigarette following 60 min of ad lib use, a longer evaluation period may have resulted in a difference in the AUEC parameter as well. Among the e-cigarettes, use of the e-cigarettes containing 1.6 % nicotine resulted in lower nicotine exposure compared to the e-cigarettes containing 2.4 % nicotine. However, this did not translate into significant differences in the smoking urge response. Further, while the suppression of urge to smoke appeared comparable between the 1.6 % and the 2.4 % test formulations in the present study, evaluation of the mean puff counts and the amount of solution consumed did not suggest that the subjects compensated by puffing on the lower-nicotine products consciously or subconsciously more so than the higher-nicotine products to achieve a similar level of satisfaction. 
In addition, higher nicotine content, the presence of PG in the vehicle, and the absence of menthol in the e-cigarettes were found to increase plasma nicotine levels during both controlled and ad lib use. Such factors, however, did not appear to significantly affect smoking urge. However, consistent with prior research documenting the prominent role of conditioned behavior and nicotine addiction, in addition to the concentration of nicotine, the respiratory tract sensory cues and manipulation of smoking materials that are associated with smoking, and mimicked by e-cigarettes, may have had a role in relieving smoking urges [25]. Thus, the present findings further suggest that some of the substantial sensory and behavioral aspects of smoking that are mirrored by e-cigarettes, may be essential elements in the reduction of craving or abstinence symptoms. To the extent that e-cigarettes may provide some measure of reduction in smoking urges, they may serve as cigarette substitutes for smokers who would otherwise seek to smoke a conventional tobacco cigarette. Despite the aggressive puff regime employed in the controlled use period, there were no SAEs in this study and no subjects were discontinued due to AEs. The incidence of AEs was similar among products, with slightly fewer subjects experiencing AEs following use of the menthol-flavored product. The most frequently reported minor AEs were cough, followed by throat irritation, headache, and dizziness. These findings are consistent with other research studies of longer duration that have reported that use of e-cigarettes by adult smokers is well-tolerated as compared to tobacco cigarette use [2, 26]. This study was not without limitations. It was performed at a single site with a small number of subjects, which might lead to the conclusion that the data is not generalizable to a broader population. However, many initial PK evaluations of tobacco and pharmaceutical products are performed using a similar approach. Further, the comparison to the e-cigarettes was made based on a single product use and a short-term ad lib product use to a single brand of tobacco cigarette. Indeed, the trend we noted at the end of the ad lib use period did show that nicotine concentrations appeared to be continuing to rise and smoking urge appeared to be continuing to decrease. Hence, a longer term comparison in future studies would be beneficial. Finally, while the product use data did not provide a clear indication that subjects compensated by using the products with less nicotine more intensively, use of topography measurements providing insight into puff volume and inhalation rate coupled with product evaluation questionnaires in future studies might lend additional information regarding product use behaviors leading to PK and smoking urge responses. Overall, the findings of this study indicate that nicotine uptake from short-term use of e-cigarettes containing 1.6 % and 2.4 % nicotine are significant, but lower than those of tobacco cigarettes, yet the reduction in urge-to-smoke or craving symptoms are broadly comparable. Moreover, it was also found that the short-term use of e-cigarettes by adult smokers is well-tolerated as compared to tobacco cigarette use.\n\n**Question:** Compared to Conventional tobacco cigarette what was the result of E-cigarettes on Nicotine concentration after ad lib use?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. 
Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decreased
|
1043
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Efficacy of Web-Based Weight Loss Maintenance Programs: A Randomized Controlled Trial Comparing Standard Features Versus the Addition of Enhanced Personalized Feedback over 12 Months\n\n ABSTRACT:\nFew randomized controlled trials (RCT) have evaluated the efficacy of web-based programs targeting maintenance of lost weight. The aims of this study were to evaluate two versions of a commercially available web-based weight loss maintenance (WLM) program and examine whether the provision of enhanced feedback was associated with better WLM. The study was an assessor-blinded RCT of change in body mass index (BMI) over 12 months WLM. Participants were 227 adults (44% male, 42.3 ± 10.1 years, BMI 30.4 ± 4.1 kg/m2) randomized to either a basic (Basic WLM) or enhanced program with additional support (Enhanced WLM). Analysis was intention-to-treat with imputation using last observation carried forward. There was no significant weight rebound from the start of weight loss maintenance to 12 months for either group (mean: basic 1.3%, enhanced 1.5%) and limited change in secondary outcomes for either program. There were no significant between-group differences in the primary outcome of change in BMI (basic −0.5 (1.9) kg/m2, enhanced −0.5 (1.6) kg/m2, p = 0.93). In conclusion, a web-based WLM program was effective in preventing weight regain over one year following weight loss. However, the addition of personalized e-feedback provided limited additional benefits compared to a standard program. Given the potential reach of web-based approaches, further research examining which web-based program components optimize weight outcomes long-term is required.\n\nBODY.1. INTRODUCTION:\nObesity rates are continuing to rise globally [1] in contrast to the limited access to treatment programs. While web-based approaches to treatment could potentially have broad reach, especially as households gain access to broadband internet, their evaluation in the context of longer-term follow-up has been limited. Studies have shown that passive follow-up with no active intervention after weight loss is associated with weight gain [2], with 50–80% of participants gradually regaining weight lost following treatment, with most regain occurring in the first year [3,4,5,6]. A systematic review of weight loss maintenance (WLM) trials found that active WLM interventions facilitated a 1.56 kg (95% CI −2.27 to −0.86 kg) lower weight regain compared with passive follow-up after 12 months. Most WLM studies have required attendance at group sessions and have been hampered by methodological issues, such as mainly recruiting females or mid-aged adults and including only those with a good response to an initial weight loss phase, while excluding those with low adherence. [7]. Few studies have evaluated the use of eHealth technologies (e.g., websites, smartphone applications, text messages) to deliver WLM interventions. 
Furthermore, most research in this area has looked at short-term interventions (<6 months) without longer-term follow-up [8,9]. The small number of longer-term studies have reported a gradual regain of weight lost following treatment, with regain occurring after the first six months [10,11]. A recent randomized controlled trial examined the effectiveness of a 12-week short message service (SMS) based WLM program following an initial 12-week commercial weight loss program. At three and nine months follow-up, there was no significant difference in weight maintenance between the usual care control group who received a brief telephone call providing lifestyle information and a mailed leaflet with advice on weight loss maintenance and the intervention group who received a weekly SMS to remind them to self-weigh in addition to the call [12]. A systematic review published in 2017, evaluated the efficacy of web-based interventions on weight loss or WLM in adults with overweight or obesity and found web-based interventions were more effective than minimal treatments but less effective than face-to-face interventions [13]. Furthermore, a recent systematic review concluded that there was insufficient evidence to recommend the use of eHealth interventions for WLM [14]. There were however promising results from the eHealth weight loss trials with the addition of newer technologies including text messages, self-monitoring devices, and mobile applications, with meta-analysis demonstrating significantly greater weight loss (1.46 kg [0.80, 2.13], p < 0.001) in interventions with enhanced behavioral features (e.g., individualized counselling, feedback on dietary intake or weight change) or technologies (e.g., addition of text messaging) than standard eHealth interventions. There is some evidence to suggest that these 'enhanced' interventions positively impact participant engagement, retention, and behavior change. We have previously published results of an online commercially available weight loss maintenance (WLM) program which demonstrated limited between-group differences for the basic and enhanced versions of the program after six months [15]. While the enhanced version resulted in greater retention (81.0% vs. 68.5%) and usage of the web-program, it did not result in a difference in weight loss [15]. A study using a 12-month intervention program (6-month weight loss plus a 6-month WLM) compared a self-directed web-based commercial program to a therapist-led behavioral web-based program and found greater weight loss in the behavioral web-based program than the therapist-led program at 12 months [16]. We have previously reported that adults randomized to this therapist-led web-based weight loss program initially lost more weight compared to wait list controls [17], and that there was no difference in weight loss between basic or enhanced versions of the weight loss program after either 12 weeks [17] or after 24 weeks [15] of participation. To date, no WLM trials have specifically evaluated the inclusion of enhanced behavioral features in an eHealth WLM program, including those outlined in Table S1, such as automatically-generated personalized reports; personalized feedback on diet, physical activity, and weight loss, as well as reminders to use the online diary, visit the website, and weigh-in. Therefore, the aim of the current study was to evaluate the impact of two versions of a WLM program on BMI in adults with overweight or obesity who had previously completed a weight loss program. 
The secondary aim was to compare differences in waist circumference and clinical measures including cholesterol, triglycerides, glucose and insulin between a standard version of a web-based WLM program (basic) and an enhanced version that provided additional personalized e-support.\n\nBODY.2. MATERIALS AND METHODS.2.1. PARTICIPANTS:\nAdults (BMI 25–40 kg/m2), aged 18–60 years were initially recruited into a weight loss trial in 2009 from the Hunter community in NSW, Australia [18]. Written informed consent was obtained from all participants for the weight loss maintenance trial. The study was approved by the University of Newcastle Human Ethics Research Committee (H-2009-0245) on 10 September 2009. The trial was registered with the Australian New Zealand Clinical Trials Registry, anzctr.org.au ACTRN12610000197033.\n\nBODY.2. MATERIALS AND METHODS.2.2. STUDY DESIGN:\nParticipants who completed an initial 24-week web-based weight loss intervention (The Biggest Loser Club, SP Health Co Pty Ltd., North Sydney, Australia) were randomly assigned into one of two weight loss maintenance groups for 12 months using a stratified randomized block design. Participants were allocated to either a basic weight loss maintenance program (Basic WLM) or an enhanced version of the weight loss maintenance program (Enhanced WLM) (see Figure 1). Both participants and the outcome assessors were blinded to group allocation. Detailed methods of the RCT have been published elsewhere [18]. The WLM programs were developed to follow on directly after completion of 24 weeks of a web-based weight loss program [15]. Program features and differences between groups are summarized in Table S1 and also described in detail elsewhere [18]. The Basic WLM group received free access to the weight loss program, but the weekly meal plans were based on an energy intake target equivalent to weight maintenance. The Enhanced WLM group received access to these same meal plans and dietary information, but they also received personalized system-generated feedback on their progress. The feedback on progress was provided using system-generated personalized reports that were populated with the data entered into the platform by the participant. Feedback included progress in relation to weight maintenance goals; usage of the online diet and physical activity diary (which was recommended but not mandated); usage patterns for website features; and level of success with weight loss. Those allocated to the enhanced group also received an escalating scale of email reminders followed by SMS text message reminders (if no response) to use the diary, visit the website, to 'weigh-in' by recording their weekly weight in the online program, and a relapse phone call if their recorded weight rebounded by ≥3% to remind them to return to weight loss mode (Table S1).\n\nBODY.2. MATERIALS AND METHODS.2.3. MEASURES:\nParticipants were assessed at baseline and 12 months [18] by blinded research assistants. Height, weight, waist circumference, and blood pressure were measured using standardized procedures [18] and BMI calculated. Following an overnight fast, blood samples were collected and analyzed for total cholesterol, low density lipoprotein (LDL), high density lipoprotein (HDL) cholesterol, triglycerides, glucose, and insulin by a single National Association of Testing Authorities accredited pathology service using standard automated techniques.\n\nBODY.2. MATERIALS AND METHODS.2.4. 
DATA ANALYSIS:\nBaseline variables were compared between treatment groups using analysis of variance (ANOVA) for continuous variables and chi-square tests for categorical variables. Analysis of covariance (ANCOVA) was used to test for group differences in outcomes at 12 months after adjusting for the baseline values of the outcome and sex. Analyses were performed in Stata v11 or SAS v9.2 (StataCorp, College Station, TX, USA). Intention-to-treat (ITT) analyses included all participants randomized at the weight loss management phase baseline, with missing follow-up data imputed using the last observation carried forward (LOCF).\n\nBODY.3. RESULTS:\nAt entry into WLM 227 participants were randomized to Basic WLM (n = 114) or Enhanced WLM (n = 113) program versions, with demographic data summarized in Table 1. Mean age was 43.3 (±9.7) years, with a mean BMI of 30.6 (±4.1) kg/m2, 42% were male and 95% were Australian-born (Table 1). There was no significant between-group difference in attrition after 12 months, basic 18.4% and enhanced 23.9%, p = 0.31. Both the Basic and Enhanced WLM groups successfully achieved WLM after 12 months with no significant rebound in weight. There were no significant between-group differences (p > 0.05) for the primary outcome of BMI with mean weight change (12 months—baseline) similar between the basic (−0.5 (1.9) kg/m2) and enhanced groups (−0.5 (1.6) kg/m2), p > 0.05 (Table 2). Table 2 shows that there were no significant differences in secondary outcomes between groups up to 12 months (p > 0.05).\n\nBODY.4. DISCUSSION:\nWhilst participants in both intervention arms of the current trial successfully maintained weight over one year of participating in a web-based program design to support maintenance of lost weight, those with access to the additional features designed to provide support and personalized feedback on diet, physical activity, and weight loss did not achieve better outcomes compared to those receiving access to the online program with minimal support. Both groups regained approximately one-and-a-half kilograms over the 12-month maintenance period, which was not significantly different between groups. This is comparable to results of the active intervention arms from a recent systematic review of all WLM intervention modes that included both food intake and physical activity advice, including group programs, face-to-face, and internet interventions [7]. The review included almost 3000 participants across 25 comparisons and found that the active intervention arms were associated with less weight regain of approximately 1.56 kg, up to 12 months [7]. This suggests that both active intervention arms in the current study were successful in achieving WLM. In our study, it appears that doing 'anything' was enough to support WLM, compared to the no-intervention arms in the systematic review, which also found no evidence that specific modes of intervention delivery were more effective than others. This suggests that having access to the level support provided in both the basic and enhanced versions of the current web-based WLM intervention, following a weight loss intervention did facilitate maintenance of lost weight. Given that it is well established that weight regain is common following initial weight loss [19], the results of the current study are important. 
Our systematic review evaluating effectiveness of 56 interventions that included a specific dietary component within the WLM intervention found that 14 achieved significant between-group differences at follow-up [2]. Furthermore, our recent systematic review and meta-analysis which evaluated the effectiveness of 84 e-Health interventions, found that e-Health weight loss interventions that include extra support strategies including counselling, personalized feedback, motivational interviewing and/or personal contact, appear to achieve significantly greater weight loss compared with standard eHealth interventions [13]. However, five studies which focused specifically on WLM interventions found no significant difference in weight change between eHealth WLM programs vs. control [14]. The authors concluded that there is currently insufficient evidence to recommended eHealth interventions for WLM and that further high-quality research is required to determine their effectiveness [14]. Although the included studies were heterogeneous the typical intervention characteristics were WLM interventions that lasted 39 weeks on average, had approximately 180 participants and attrition of 26% compared to the current WLM study which lasted 52 weeks had 227 participants and an attrition rate of 21%. The current study is important because individual studies to date suggest that face-to-face contact involving either weekly or monthly individual or group counseling sessions are more likely to achieve WLM than internet support in studies lasting 18 to 30 months. Recent systematic reviews on WLM interventions [7,14] suggest that there is a need for extended support for weight management following participation in weight loss programs internationally, as in-person attendance is not likely to be viable long-term. Hence, prolonged WLM support using eHealth technologies need to be evaluated for both feasibility, engagement, and cost-effectiveness in the long-term. Limitations include the lack of a wait list control group, which is similar to the majority of WLM maintenance studies to date [2]. Although the small sample size and attrition likely reduced the power to detect significant differences between groups for the secondary outcomes, it is similar to other WLM trials lasting 12 or more months which had an average drop-out of 20% [7]. Furthermore. results from the current study need to be interpreted with caution given the meta-analysis within the systematic review [14] indicates that there is no significant difference between the support methods during WLM. This is also supported by our review of web-based interventions which also reported no difference between these two methods of delivery [20]. Strengths include the RCT design, use of blinded assessors, and the comparison of two versions of the WLM program for 12 months, following an initial period of weight loss. Future studies should consider evaluating cost-effectiveness and efficacy in specific population groups for whom access services may be a challenge, based on a range of issues related to time, cost, convenience, rurality, health conditions, or other socio-economic factors. In conclusion, a commercial web-based weight loss maintenance program, with a specific intervention component targeting maintenance of lost weight can be effective at preventing weight re-gain up to one year following 24 weeks of weight loss. 
While the addition of enhanced features that provide additional feedback and social support did not provide additional benefits during maintenance, health professionals should advise clients that a specific WLM strategy does facilitate WLM up to one year following weight loss. Further research addressing level of feedback and support required to optimize weight status long-term in an online environment is required.\n\n**Question:** Compared to Basic weight loss maintenance program what was the result of Enhanced weight loss maintenance program on Attrition after 12 months?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
1044
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Randomised controlled trial to evaluate the efficacy and usability of a computerised phone-based lifestyle coaching system for primary and secondary prevention of stroke\n\n ABSTRACT.BACKGROUND:\nOne of the most effective current approaches to preventing stroke events is the reduction of lifestyle risk factors, such as unhealthy diet, physical inactivity and smoking. In this study, we assessed the efficacy and usability of the phone-based Computer-aided Prevention System (CAPSYS) in supporting the reduction of lifestyle-related risk factors.\n\nABSTRACT.METHODS:\nA single-centre two-arm clinical trial was performed between January 2013 and February 2014, based on individual follow-up periods of six months with 94 patients at high risk of stroke, randomly assigned to an intervention group (IC: 48; advised to use the CAPSYS system) or a standard care group (SC: 46). Study parameters, such as blood pressure, blood values (HDL, LDL, HbA1c, glycaemia and triglycerides), weight, height, physical activity as well as nutrition and smoking habits were captured through questionnaires and medical records at baseline and post-intervention and analysed to detect significant changes. The usability of the intervention was assessed based on the standardised System Usability Scale (SUS) complemented by a more system-specific user satisfaction and feedback questionnaire.\n\nABSTRACT.RESULTS:\nThe statistical evaluation of primary measures revealed significant decreases of systolic blood pressure (mean of the differences = –9 mmHg; p = 0.03; 95 % CI = [–17.29, –0.71]), LDL (pseudo-median of the differences = –7.9 mg/dl; p = 0.04; 95 % CI = [–18.5, –0.5]) and triglyceride values (pseudo-median of the differences = –12.5 mg/dl; p = 0.04; 95 % CI = [–26, –0.5]) in the intervention group, while no such changes could be observed in the control group. Furthermore, we detected a statistically significant increase in self-reported fruit and vegetable consumption (pseudo-median of the differences = 5.4 servings/week; p = 0.04; 95 % CI = [0.5, 10.5]) and a decrease in sweets consumption (pseudo-median of the differences = –2 servings/week; p = 0.04; 95 % CI = [–4, –0.00001]) in the intervention group. The usability assessment showed that the CAPSYS system was, in general, highly accepted by the users (average SUS score: 80.1).\n\nABSTRACT.CONCLUSIONS:\nThe study provided encouraging results indicating that a computerised phone-based lifestyle coaching system, such as CAPSYS, can support the usual treatment in reducing cerebro-cardiovascular risk factors and that such an approach is well applicable in practice.\n\nABSTRACT.TRIAL REGISTRATION:\nClinicalTrials.gov Identifier: NCT02444715\n\nBODY.BACKGROUND:\nLifestyle risk factors, such as unhealthy diet, physical inactivity and smoking, have a strong influence on the personal risk of stroke. 
Reducing such risk factors is currently one of the most effective approaches to preventing stroke events. According to the World Health Organization (WHO), \"most cardiovascular diseases can be prevented by addressing risk factors such as tobacco use, unhealthy diet and obesity, physical inactivity, high blood pressure, diabetes and raised lipids\" [1–3]. However, many individuals have difficulties adhering to recommendations concerning a healthy lifestyle, and long-term compliance with such lifestyle changes is usually low. Early findings in behavioural research support the assumption that the mere task of self-monitoring increases habit awareness, induces reflection on habits, and thus can provoke a positive change of the monitored behaviour [4]. Furthermore, experience has shown that sustained contact is necessary in order to support people in establishing and maintaining lifestyle changes [5]. In this context, the recently developed Computer-aided Prevention System (CAPSYS) offers a cost-effective automated solution combining remote surveillance of individual cerebro-cardiovascular risk factors with tailored advice [6]. Registered patients can access CAPSYS through a phone interface, simply by dialling a landline number. In an automated dialog, patients are asked to provide information about a predefined number of risk factors (current nutrition, physical activity, smoking, weight and blood pressure) by typing in the corresponding answers using the phone keypad. Based on the patient's answers, CAPSYS issues tailored feedback aiming at motivating a positive change towards or maintenance of healthy, risk-reducing habits. The system has been designed as a multi-language tool, in which the languages German and French have already been implemented and used within the trial. Clearly, a telephone as a user interface requires sufficient hearing abilities by the patients as well as the ability to understand and follow the instructions and advice issued by the virtual lifestyle coach. Consequently, this system is not appropriate for use by persons with severe hearing impairments or cognitive limitations. In cases of doubt, a patient's cognitive abilities can be evaluated by means of the Folstein mini-mental state test [7]. On the other hand, there is no problem for patients with difficulties in articulation or speech impediments to use CAPSYS, due to the fact that patient interaction does not involve any speech input but is purely limited to pressing buttons on the telephone keypad. In order to assess the efficacy of the CAPSYS approach concerning the reduction of stroke-related risk factors as well as the usability of the system, a dedicated study was designed and carried out, the results of which are presented in this article.\n\nBODY.METHODS:\nIn 2013 and 2014, a single-centre randomised controlled trial was carried out at the Department of Neurology of the Centre Hospitalier de Luxembourg. The study was designed as a six-month, parallel-group two-arm trial involving an intervention and a control group. Within this six-month study with a relatively limited number of participants, it was not possible to focus on the registration of severe events, such as stroke or death. Instead, the study concentrates on changes in different surrogate parameters for stroke (systolic blood pressure, BMI, HDL, LDL, HbA1c, glycaemia and triglycerides). 
A draft of the study design has already been presented in [8].\n\nBODY.METHODS.PARTICIPANTS:\nPatients aged 20 and over were eligible for the study if they had already suffered from a stroke or were at high risk of stroke (at least two risk factors according to the criteria described in [2, 9, 10]). The eligibility criteria are summarised in Table 1. Patients' stroke risk and thus their suitability to participate in the CAPSYS study were evaluated by the treating neurologist based on the patients' individual risk factor profiles.\nTable 1Eligibility criteria for the CAPSYS studyInclusion criteriaExclusion criteriaAge: 20 and over; At high risk of stroke:Inability to fill out or to understand the informed consent;∙ Already suffered a stroke or Transient Ischemic Attack (TIA) orNo signed informed consent;∙ At least two risk factors for stroke:– High blood pressure (systolic blood pressure ≥ 140 mmHg)Dementia– Overweight (BMI ≥ 25 kg/m2)– Low physical activity (less than 30 min. of moderate-intensity physical exercise per week)– Smoking– Unhealthy diet\n\nBODY.METHODS.RECRUITMENT AND RANDOMISATION:\nPotential study participants were approached in the course of their regular consultations with their neurologist, where they were screened for eligibility. Prior to their enrolment in the study, all participants signed a written informed consent; 94 participants were randomised by lot to one of two groups: Interventional Care (IC: 48, 34 [71 %] male) or Standard Care (SC: 46, 29 [63 %] male). Participants were aged between 32 and 86 (SC: mean = 59.6; SD = 12.1; median = 59) (IC: mean = 60.7; SD = 11.3; median = 61). The major characteristics of the study cohort are summarised in Table 2. Due to the specific design of the intervention, the study had to be carried out in an only partially blinded manner. It is inherent to the CAPSYS approach that the treating neurologist performs regular reviews of the data collected in the intervention group. The need to have access to patient histories and the possibility to contact patients at any time in case of emerging situations has limited the use of blinding methods within the trial. However, in order to ensure the validity of the study, the CAPSYS study team was separated into three different groups: a) principal investigator and treating neurologist responsible for the initial eligibility check of potential study participants, b) hospital research nurse responsible for patient recruitment and randomisation, and c) healthcare researchers who performed the statistical evaluation of the collected data by using pseudonymised data sets. During the recruitment meeting, patients were informed about the purpose of the study and about the study approach. The CAPSYS study design was approved by the Luxembourg National Research Ethics Committee (CNER) (N° 201205/08) and the National Commission for Data Protection (CNPD) (T007990).\nTable 2Descriptive statistics of the study cohortSC (n = 46)IC (n = 48)Mean age [years] (±SD)59.6 (±12.1)60.7 (±11.3)Men29 (63 %)34 (71 %)Mean BMI [ kg/m2] (±SD)27 (±4.3)28 (±4.3)Smokers2 (4 %)6 (13 %)Hypertension36 (78 %)43 (90 %)Hyper-/Dyslipidemia39 (85 %)44 (92 %)Diabetes mellitus5 (11 %)8 (17 %)Previous stroke/TIA37 (80 %)31 (65 %)\n\nBODY.METHODS.INTERVENTION:\nIn addition to the usual care, participants in the IC group were advised to regularly call the phone-based prevention system CAPSYS (preferably twice a week). 
In the course of the recruitment process, they were carefully instructed in the proper use of the system, and they were provided a corresponding user manual in form of a leaflet. The participants could access the CAPSYS system by calling a specific land line number with their stationary or mobile phones. After choosing their preferred language (French or German) and authenticating themselves with their individual patient number and PIN code, the participants were posed the following questions: \nHow many servings of fruits and vegetables have you consumed yesterday?How many servings of whole-grain food have you consumed yesterday?How many servings of sweets have you consumed yesterday?Please enter your current weight.How many cigarettes have you smoked yesterday? [removed for non-smokers]How many minutes have you been physically active yesterday? [moderate or high intensity]Please enter your current systolic blood pressure. The participants could answer to these questions by typing in the numerical values (number of servings, weight, minutes etc.) using the phone key pad. During the recruitment meeting at the beginning of the study, participants were instructed on how to estimate the size of one portion for the relevant food groups, and they were advised to measure their current blood pressure and weight, and to write down all necessary values prior to calling the CAPSYS system. Depending on the data provided by the patient, he or she received immediate customized feedback after each answer concerning his or her progress and advice on how to proceed reducing the corresponding risk factor or motivation on maintaining some observed healthy behaviour respectively. The phone call was closed with a concluding statement summing up the feedback issued before by the system. The questions and feedback were automatically generated by CAPSYS and were issued to the patients in spoken natural language by means of a text-to-speech software (TTS). The system was implemented to run fully automatically and could be accessed at any time of day. In case of systolic blood pressure higher than 140 mmHg, the patient was advised to rest for a while and then repeat the measurement. The system also advised the patients to contact a doctor in case of constantly high blood pressure. Patients with values above 140 mmHg were also particularly marked in the web interface when the treating physician accessed the system. Apart from that, all participants have been informed that there were no direct alerts in cases of high blood pressure provided within the study. The treating neurologist could access the data provided by his patients through a web interface, offering both a numerical as well as an adaptable graphical representation of the patient data for a certain period of time. The use of CAPSYS within the study has not replaced or reduced the existing information exchange between the neurologist and the corresponding referring physicians, which was proceeded according to the practices of the Luxembourgish healthcare system. Technically, CAPSYS offers the possibility for further physicians, e.g. the treating general practitioner, to access and monitor the data gathered from their patients. However, in the scope of this study, this option was deliberately discarded in order to avoid potential bias caused by the general practitioner accessing and interpreting the CAPSYS data. 
In contrast to the IC group, SC participants received only the usual care including blood analyses, blood pressure controls and individual advice on healthy lifestyle during the outpatient treatment given by the neurologist, by the general practitioner and by other physicians. Information material concerning a healthy lifestyle was provided to both groups.\n\nBODY.METHODS.DATA COLLECTION:\nDemographic and medical information of each participant was recorded by the study personnel in a predefined form, including body weight and height, medical history, current medication, smoking habits, blood pressure and heart rate as well as recent blood test results (HDL, LDL, HbA1c, glycaemia and triglycerides). Furthermore, participants were asked to fill in three questionnaires exploring their nutritional habits, physical activity and quality of life. The latter was measured using the standardised EQ-5D-5L instrument provided by the EuroQol Group [11]. Six months after enrolment, participants were asked to fill in the same questionnaires again, and current medical data (weight, blood pressure, heart rate, blood test results, smoking habits and medication) were again recorded by the study personnel. In addition, IC participants were asked to share their experiences with using CAPSYS and to provide their opinions about the system in a predefined questionnaire, which encompasses the well-established System Usability Scale (SUS) [12] and some further more specific questions concerning the usage of CAPSYS and its particular features. There was also space left for open-ended comments and suggestions at the end of the questionnaire.\n\nBODY.METHODS.DEPENDENT MEASURES:\nThe impact of the intervention was assessed based on the changes of cerebro-cardiovascular risk parameters during the six-month trial period, whereby changes in the IC group were compared to those in the control group (SC). Changes in systolic blood pressure, HDL, LDL, HbA1c, glycaemia, triglycerides and BMI were considered as primary dependent measures (see Table 3). Changes in self-reported weekly consumption of fruits and vegetables, whole grain food and sweets as well as changes in self-reported weekly duration of physical activity and quality of life were analysed as secondary measures. Due to the low number of smokers among the study participants (SC: 2, IC: 6), smoking habits were not evaluated.\nTable 3Overview of baseline values and evaluation of primary dependent measures in SC and IC groupsSCICBaselineEvaluationBaselineEvaluation\nParametric data\n\nMean (±SD)\n\nT-test\n\nMean (±SD)\n\nT-test\nSystolic BP [mmHg]139.6 (±21)mean of diff. =0.19145 (±19.4)mean of diff. =−9p =0.96\np\n= 0.03*\n95 % CI = [–7.5, 7.88]95 % CI = [–17.29, –0.71]Cohen's d =0.42BMI [ kg/m2]27 (±4.3)mean of diff. =0.0328 (±4.3)mean of diff. 
=−0.18p =0.9p =0.43\n95 % CI = [–0.49, 0.55]95 % CI = [–0.65, 0.28]\nNon-parametric data\n\nMedian [IQR]\n\nWilcoxon\n\nMedian [IQR]\n\nWilcoxon\nHDL [ mg/dl]57 [45, 51]pseudo-median of diff =1.553 [47, 74]pseudo-median of diff =−1.5p =0.53p =0.4495 % CI = [–2.5, 4.5]95 % CI = [–4.5, 2]LDL [ mg/dl]96 [71, 114]pseudo-median of diff =−195 [76, 117]pseudo-median of diff =−7.9p =0.76\np\n= 0.04*\n95 % CI = [–10.5, 6]95 % CI = [–18.5, –0.5]r =0.24HbA1c [ %]5.8 [5.6, 5.9]pseudo-median of diff =05.9 [5.5, 6.25]pseudo-median of diff =0.1p =0.92p =0.3795 % CI = [–0.25, 0.2]95 % CI = [–0.1, 0.25]Glycaemia [ mg/dl]98 [89, 106.75]pseudo-median of diff =0.5102 [92.5, 114.5]pseudo-median of diff =2.5p =0.78p =0.1995 % CI = [–5.5, 5]95 % CI = [–2, 6]Triglycerides [ mg/dl]95 [73, 134.25]pseudo-median of diff =−6105 [78, 132.5]pseudo-median of diff =−12.5p =0.39\np\n= 0.04*\n95 % CI = [–21, 10]95 % CI = [–26, –0.5]r =0.25(statistically significant values are marked in bold) Statistical analyses were carried out on an intention-to-treat basis using RStudio (Version 0.98.978) with a significance level of 0.05. Parametric data (paired and unpaired samples) were evaluated with the Student's t-test approach, and the corresponding effect size was calculated with Cohen's d. Non-parametric data were evaluated using the Wilcoxon signed-rank test (for paired samples) or the Mann-Whitney U test (for unpaired samples) respectively, and effect size was calculated as $r = \\frac{Z}{\\sqrt{N}}$. For non-parametric paired samples, the pseudo-median is computed as the Hodges-Lehmann estimate.\n\nBODY.RESULTS.EFFICACY OF THE INTERVENTION:\nFinal questionnaires on nutritional habits, physical activity and quality of life could be collected from 68 participants (SC: 36; IC: 32) at the end of the study. This results in an overall dropout rate of 28 % (SC: 22 %; IC: 33 %). Further participant data, such as blood test values, could be retrieved from medical records. Out of the 48 recruited IC participants, 11 (23 %) did not use CAPSYS at all. The remaining 37 IC subjects (77 %) called the system up to 55 times during the study period (mean = 23.32; SD = 14.1; median = 24). Within-subjects analyses of primary dependent measures showed the following statistically significant changes between baseline and post-intervention: \nDecrease of systolic blood pressure in IC group (two-sided paired-samples t-test: mean of the differences = –9 mmHg; p = 0.03; 95 % CI = [–17.29, –0.71]; Cohen's d = 0.42: large effect size) (see Fig. 1), with no corresponding significant difference in SC group (p = 0.96);\nFig. 1Systolic blood pressure. Significant decrease of systolic blood pressure in IC groupDecrease of low-density lipoprotein (LDL) levels in IC group (Wilcoxon signed-rank test: pseudo-median of the differences = –7.9 mg/dl; p = 0.04; 95 % CI = [–18.5, –0.5]; r = 0.24: medium effect size), with no corresponding significant difference in SC group (p = 0.76) (see Fig. 2);\nFig. 2LDL. 
Significant decrease of LDL levels in IC groupDecrease of triglyceride levels in IC group (Wilcoxon signed-rank test: pseudo-median of the differences = –12.5 mg/dl; p = 0.04; 95 % CI = [–26, –0.5]; r = 0.25: medium effect size), with no corresponding significant difference in SC group (p = 0.39) (see Fig. 3);\nFig. 3Triglycerides. Significant decrease of triglyceride levels in IC groupNo statistically significant changes could be found for any of the other primary measures (BMI, HDL, HbA1c and glycaemia). In this context, it needs to be stated that, at baseline, there were no statistical differences between both groups concerning the proportions of patients taking antihypertensives (SC: 36 [78 %]; IC: 43 [90 %]; p = 0.22), antidiabetic drugs (SC: 5 [11 %]; IC: 8 [17 %]; p = 0.61) or cholesterol-lowering medication (SC: 39 [85 %]; IC: 44 [92 %]; p = 0.47). Concerning the secondary dependent measures, we could observe the following statistically significant results: \nIncrease of self-reported fruit and vegetable consumption in IC group (Wilcoxon signed-rank test: pseudo-median of the differences = 5.4 servings/week; p = 0.04; 95 % CI = [0.5, 10.5]; r = 0.26: medium effect size), with no corresponding significant difference in SC group (p = 0.67);Decrease of self-reported sweets consumption in IC group (Wilcoxon signed-rank test: pseudo-median of the differences = –2 servings/week; p = 0.04; 95 % CI = [–4, –0.00001]; r = 0.26: medium effect size), with no corresponding significant difference in SC group (p = 0.06);Difference between SC and IC groups concerning the change of self-reported whole-grain food consumption (Mann-Whitney U test: difference in location = 3.26 servings/week; p = 0.04; 95 % CI = [0.00006, 7]; r = 0.24: medium effect size);No statistically significant changes for self-reported physical activity and quality of life.\n\nBODY.RESULTS.USABILITY AND USER ACCEPTANCE OF THE INTERVENTION:\nAssessments of the intervention based on the System Usability Scale (SUS) were provided by 28 participants of the IC group, 2 of which were declared invalid, as these participants have demonstrably never used CAPSYS during the trial period, and consequently, they were considered unable to assess the intervention. The average SUS score for CAPSYS was 80.1 (SD = 16.22; median = 78.75; min = 37.5; max = 100; 95 % CI = [73.54, 86.65]). According to Sauro and Lewis, this average score corresponds to an \"A–\" school grade and a percentile of 85–89, meaning that CAPSYS scored better than 85 to 89 % of all systems considered in the Sauro and Lewis study (see [13], chapter 8, page 204, table 8.6). Using the simplified regression equation proposed by Lewis [14], the Likelihood to Recommend for each CAPSYS user was derived from the corresponding SUS scores as LTR = SUS/10. With these values, we could identify 10 promoters (38 %; LTR ≥ 9) and 4 detractors (23 %; LTR ≤ 6) among the 26 participants who provided valid SUS scores, which results in a positive Net Promoter Score (NPS) of 23 [15], meaning that there are 23 % more users who would recommend CAPSYS (promoters) than users who would argue against using the system (detractors). Beside the standardised SUS questionnaire, IC participants were also asked to asses the voice of the text-to-speech module applied in CAPSYS and the automatically generated utterances issued by the system. 
The results concerning these questions are presented in Table 4.\nTable 4Evaluation of CAPSYS's voice and utterancesScaleResultsVoice1: pleasant – 5: unpleasantmean =1.77;median =1.5;SD =0.991: difficult to understand – 5: easy to understandmean =4.28;median =5;SD =1.141: natural – 5: unnaturalmean =2.15;median =2;SD =1.121: too fast – 5: too slowmean =3.2;median =3;SD =0.5Utterances1: diversified – 5: unvariedmean =3.28;median =4;SD =1.241: inappropriate – 5: appropriatemean =3.46;median =3;SD =1.071: too short – 5: too longmean =3.2;median =3;SD =0.5 Concerning the frequency of use, the majority of the 26 IC participants who answered to the usability questionnaire indicated that they would prefer using CAPSYS once (11; 42 %) or twice (9; 35 %) per week. None of the respondents was willing to use the system more than twice a week, 2 (8 %) preferred to use it less than once a week and 4 (15 %) expressed that they would not use it at all. The approach of connecting to CAPSYS by calling the system whenever they wish was favoured by most of the respondents (18; 69 %) over the potential alternatives of being called by the system (3; 12 %) or receiving reminders to call the system (2; 8 %). Although CAPSYS does not offer speech recognition, we asked users if they prefer to have such an input option instead or in combination with the implemented keypad input. In this context, using the phone keypad was identified as the most preferred way of communication with such a system (20; 77 %) as compared to speech input (1; 4 %) or a combination of both (4; 15 %). The majority of the respondents were interested in having insight into the data history gathered by CAPSYS, either online (9; 35 %) or in a printed version (11; 42 %). Concerning a potential alternative interface for CAPSYS beside the phone interface, 10 (38 %) participants were interested in a corresponding website and 4 (15 %) fancied the idea of having a CAPSYS smartphone app. Regarding the question if the use of CAPSYS has motivated them to a healthier lifestyle, 5 (19 %) of the respondents stated that through the intervention they started eating healthier, 3 (12 %) reported to have become physically more active and 12 (46 %) claimed to have achieved both healthier eating and increased physical activity. This means that, in total, 20 (77 %; 95 % CI = [59 %, 95 %]) of the respondents expressed an improvement of their health-related lifestyle (improved nutrition or increased physical activity or both) through the use of CAPSYS.\n\nBODY.DISCUSSION:\nThe positive effects that could be observed in the IC group concerning systolic blood pressure, LDL and triglyceride levels as well as the self-reported improvements of nutritional habits can be regarded as an indicator that a computerised phone-based intervention, such as CAPSYS, can support the reduction of cerebro-cardiovascular risk when offered in addition to usual care. An average decrease in systolic blood pressure of 9 mmHg among CI participants can be regarded as clinically significant [16]. The same also holds for the observed average decreases of 7.9 mg/dl for LDL [17] and 12.5 mg/dl for triglycerides [18]. Though, the relatively wide 95 % confidence intervals retrieved in the statistical analysis indicate that the cohort size was probably too small to be able to demonstrate strong clinical effects. The evaluation of BMI, HDL, HbA1c and glycaemia values showed no significant changes in the two groups. 
Our results are to some extent supported by the recently published Australian CLIP study, which found that telephone coaching performed by human health care professionals can result in a significant reduction of LDL and total cholesterol values in the intervention group [19]. A systematic review by Goode et al. also underlines the efficacy of telephone-based lifestyle coaching. In 20 out of 26 studies, telephone support by health professionals have led to significant improvements concerning physical exercises and healthy nutrition [20]. However, personal telephone coaching by health experts requires intensive use of human resources and is associated with enormous costs, which limits its widespread distribution to larger patient groups. CAPSYS is a system that supports prevention initiatives by supporting individuals in daily life settings. It can be assumed that a large percentage of the total target group is not willing to perform any lifestyle changes at all, no matter which kind of support they will receive. So, it has to be taken into account that our study participants are a selected subgroup of the target population with a potentially higher motivation to change risk factors and lifestyle. However, this selection bias in general concerns all prevention initiatives, and the randomisation process should ensure that the general motivation is balanced across both study arms. Compared to therapy studies, participants in prevention studies mostly do not see any direct measurable advantages in their actual health status. Missing short-term improvements are therefore often a reason for higher dropout rates. However, compared to other studies in secondary prevention, the moderate and balanced dropout rates of both CAPSYS arms should not have significantly affected the study results. On the other hand, self-reported consumptions of fruits, vegetables or sweets have a high potential for reporting bias. Therefore, in our study, this kind of information is only considered as secondary depended measures. The improvements of nutritional habits in the IC group (5 servings of fruits and vegetables more per week; 2 servings of sweets less per week) are significant, but have to be treated with caution as they are based on self-reported values and thus less reliable. As the dropout rates were not significantly different for both groups (SC: 22 %; IC: 33 %; p = 0.31), it can be supposed that the withdrawal of IC patients from the clinical trial was not substantially influenced by the intervention itself, but it was due to common reasons, like lack of time, forgetting appointments or just lack of motivation. Furthermore, it remains open how the observed positive effects will evolve after the study period and if they actually are of a long-term nature. The usability evaluation revealed that the CAPSYS approach was well accepted by the study participants. In particular, the large proportion of respondents (77 %) expressing a positive change of health-related behaviour due to the use of CAPSYS is remarkable. One might argue that this result is strongly biased due to the fact that users who have a positive experience with a system are more likely to finish a study and provide corresponding feedback than users with negative experiences. However, even if we consider all dropout IC participants as non-beneficiaries, the success rate concerning a positive health behaviour change would still be 42 % (95 % CI = [27 %, 57 %]). 
Unfortunately, our cohort was not large enough to perform subgroup analyses in order to evaluate the characteristics of beneficiaries and non-beneficiaries. A follow-up study focussing on such aspects might help to identify appropriate target groups for which a system like CAPSYS might be most beneficial and most accepted. During informal interviews at the end of the study, many of the IC participants explained that they often could not find the time or simply forgot to call CAPSYS. Taking this into account, the implementation of a call reminder functionality seems beneficial, even though in the feedback questionnaire, the majority of the respondents did not favour the concepts of being called or reminded by the system. Another important issue concerning risk factor reduction in the stroke prevention context is the topic of medication adherence. In order to support users in regularly taking their antihypertensive or cholesterol-lowering medication, CAPSYS could be extended to incorporate prompts in its automatically generated feedback messages asking the users if they have taken their prescribed medication according to their individual medication plan. Concerning the technical implementation of CAPSYS, it could be observed that, on average, the artificial voice applied in CAPSYS was perceived as rather pleasant, easy to understand and natural, and the speech rate was perceived as appropriate. The length of the utterances was, on average, perceived as adequate, but the feedback text might also be shortened to some extent. The results also show that there is some space for improvement concerning the diversity and adequacy of the automatically generated feedback. The offered keypad interface for data input was highly accepted by the users, and the need for an alternative input method, such as speech, was detected to be low. Unsurprisingly, only few users were currently interested in a mobile app version of CAPSYS. However, with increasing spread of smartphones, a mobile solution will certainly gain in importance in the future. What is already important for most users is to have access to their own self-monitored data, even if due to technical reasons they might only be able to access them in a printed form.\n\nBODY.CONCLUSION:\nIn summary, the present study provided encouraging results indicating that a computerised phone-based lifestyle coaching system, such as CAPSYS, can support the usual treatment in reducing cerebro-cardiovascular risk factors and that such an approach is in general well accepted by the affected patients. In a next step, CAPSYS could be extended to a disease management tool, involving health professionals from different disciplines, such as neurologists, cardiologists, dieticians, physiotherapists etc., in the treatment of patients. One aspect that could be explored in future research is if the motivational effect of CAPSYS could be strengthened through an increased involvement of the treating physician and through incorporation of social interaction.\n\n**Question:** Compared to Standard care what was the result of CAPSYS system on Systolic blood pressure?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decreased
|
1045
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Randomised controlled trial to evaluate the efficacy and usability of a computerised phone-based lifestyle coaching system for primary and secondary prevention of stroke\n\n ABSTRACT.BACKGROUND:\nOne of the most effective current approaches to preventing stroke events is the reduction of lifestyle risk factors, such as unhealthy diet, physical inactivity and smoking. In this study, we assessed the efficacy and usability of the phone-based Computer-aided Prevention System (CAPSYS) in supporting the reduction of lifestyle-related risk factors.\n\nABSTRACT.METHODS:\nA single-centre two-arm clinical trial was performed between January 2013 and February 2014, based on individual follow-up periods of six months with 94 patients at high risk of stroke, randomly assigned to an intervention group (IC: 48; advised to use the CAPSYS system) or a standard care group (SC: 46). Study parameters, such as blood pressure, blood values (HDL, LDL, HbA1c, glycaemia and triglycerides), weight, height, physical activity as well as nutrition and smoking habits were captured through questionnaires and medical records at baseline and post-intervention and analysed to detect significant changes. The usability of the intervention was assessed based on the standardised System Usability Scale (SUS) complemented by a more system-specific user satisfaction and feedback questionnaire.\n\nABSTRACT.RESULTS:\nThe statistical evaluation of primary measures revealed significant decreases of systolic blood pressure (mean of the differences = –9 mmHg; p = 0.03; 95 % CI = [–17.29, –0.71]), LDL (pseudo-median of the differences = –7.9 mg/dl; p = 0.04; 95 % CI = [–18.5, –0.5]) and triglyceride values (pseudo-median of the differences = –12.5 mg/dl; p = 0.04; 95 % CI = [–26, –0.5]) in the intervention group, while no such changes could be observed in the control group. Furthermore, we detected a statistically significant increase in self-reported fruit and vegetable consumption (pseudo-median of the differences = 5.4 servings/week; p = 0.04; 95 % CI = [0.5, 10.5]) and a decrease in sweets consumption (pseudo-median of the differences = –2 servings/week; p = 0.04; 95 % CI = [–4, –0.00001]) in the intervention group. The usability assessment showed that the CAPSYS system was, in general, highly accepted by the users (average SUS score: 80.1).\n\nABSTRACT.CONCLUSIONS:\nThe study provided encouraging results indicating that a computerised phone-based lifestyle coaching system, such as CAPSYS, can support the usual treatment in reducing cerebro-cardiovascular risk factors and that such an approach is well applicable in practice.\n\nABSTRACT.TRIAL REGISTRATION:\nClinicalTrials.gov Identifier: NCT02444715\n\nBODY.BACKGROUND:\nLifestyle risk factors, such as unhealthy diet, physical inactivity and smoking, have a strong influence on the personal risk of stroke. 
Reducing such risk factors is currently one of the most effective approaches to preventing stroke events. According to the World Health Organization (WHO), \"most cardiovascular diseases can be prevented by addressing risk factors such as tobacco use, unhealthy diet and obesity, physical inactivity, high blood pressure, diabetes and raised lipids\" [1–3]. However, many individuals have difficulties adhering to recommendations concerning a healthy lifestyle, and long-term compliance with such lifestyle changes is usually low. Early findings in behavioural research support the assumption that the mere task of self-monitoring increases habit awareness, induces reflection on habits, and thus can provoke a positive change of the monitored behaviour [4]. Furthermore, experience has shown that sustained contact is necessary in order to support people in establishing and maintaining lifestyle changes [5]. In this context, the recently developed Computer-aided Prevention System (CAPSYS) offers a cost-effective automated solution combining remote surveillance of individual cerebro-cardiovascular risk factors with tailored advice [6]. Registered patients can access CAPSYS through a phone interface, simply by dialling a landline number. In an automated dialog, patients are asked to provide information about a predefined number of risk factors (current nutrition, physical activity, smoking, weight and blood pressure) by typing in the corresponding answers using the phone keypad. Based on the patient's answers, CAPSYS issues tailored feedback aiming at motivating a positive change towards or maintenance of healthy, risk-reducing habits. The system has been designed as a multi-language tool, in which the languages German and French have already been implemented and used within the trial. Clearly, a telephone as a user interface requires sufficient hearing abilities by the patients as well as the ability to understand and follow the instructions and advice issued by the virtual lifestyle coach. Consequently, this system is not appropriate for use by persons with severe hearing impairments or cognitive limitations. In cases of doubt, a patient's cognitive abilities can be evaluated by means of the Folstein mini-mental state test [7]. On the other hand, there is no problem for patients with difficulties in articulation or speech impediments to use CAPSYS, due to the fact that patient interaction does not involve any speech input but is purely limited to pressing buttons on the telephone keypad. In order to assess the efficacy of the CAPSYS approach concerning the reduction of stroke-related risk factors as well as the usability of the system, a dedicated study was designed and carried out, the results of which are presented in this article.\n\nBODY.METHODS:\nIn 2013 and 2014, a single-centre randomised controlled trial was carried out at the Department of Neurology of the Centre Hospitalier de Luxembourg. The study was designed as a six-month, parallel-group two-arm trial involving an intervention and a control group. Within this six-month study with a relatively limited number of participants, it was not possible to focus on the registration of severe events, such as stroke or death. Instead, the study concentrates on changes in different surrogate parameters for stroke (systolic blood pressure, BMI, HDL, LDL, HbA1c, glycaemia and triglycerides). 
A draft of the study design has already been presented in [8].\n\nBODY.METHODS.PARTICIPANTS:\nPatients aged 20 and over were eligible for the study if they had already suffered from a stroke or were at high risk of stroke (at least two risk factors according to the criteria described in [2, 9, 10]). The eligibility criteria are summarised in Table 1. Patients' stroke risk and thus their suitability to participate in the CAPSYS study were evaluated by the treating neurologist based on the patients' individual risk factor profiles.\nTable 1Eligibility criteria for the CAPSYS studyInclusion criteriaExclusion criteriaAge: 20 and over; At high risk of stroke:Inability to fill out or to understand the informed consent;∙ Already suffered a stroke or Transient Ischemic Attack (TIA) orNo signed informed consent;∙ At least two risk factors for stroke:– High blood pressure (systolic blood pressure ≥ 140 mmHg)Dementia– Overweight (BMI ≥ 25 kg/m2)– Low physical activity (less than 30 min. of moderate-intensity physical exercise per week)– Smoking– Unhealthy diet\n\nBODY.METHODS.RECRUITMENT AND RANDOMISATION:\nPotential study participants were approached in the course of their regular consultations with their neurologist, where they were screened for eligibility. Prior to their enrolment in the study, all participants signed a written informed consent; 94 participants were randomised by lot to one of two groups: Interventional Care (IC: 48, 34 [71 %] male) or Standard Care (SC: 46, 29 [63 %] male). Participants were aged between 32 and 86 (SC: mean = 59.6; SD = 12.1; median = 59) (IC: mean = 60.7; SD = 11.3; median = 61). The major characteristics of the study cohort are summarised in Table 2. Due to the specific design of the intervention, the study had to be carried out in an only partially blinded manner. It is inherent to the CAPSYS approach that the treating neurologist performs regular reviews of the data collected in the intervention group. The need to have access to patient histories and the possibility to contact patients at any time in case of emerging situations has limited the use of blinding methods within the trial. However, in order to ensure the validity of the study, the CAPSYS study team was separated into three different groups: a) principal investigator and treating neurologist responsible for the initial eligibility check of potential study participants, b) hospital research nurse responsible for patient recruitment and randomisation, and c) healthcare researchers who performed the statistical evaluation of the collected data by using pseudonymised data sets. During the recruitment meeting, patients were informed about the purpose of the study and about the study approach. The CAPSYS study design was approved by the Luxembourg National Research Ethics Committee (CNER) (N° 201205/08) and the National Commission for Data Protection (CNPD) (T007990).\nTable 2Descriptive statistics of the study cohortSC (n = 46)IC (n = 48)Mean age [years] (±SD)59.6 (±12.1)60.7 (±11.3)Men29 (63 %)34 (71 %)Mean BMI [ kg/m2] (±SD)27 (±4.3)28 (±4.3)Smokers2 (4 %)6 (13 %)Hypertension36 (78 %)43 (90 %)Hyper-/Dyslipidemia39 (85 %)44 (92 %)Diabetes mellitus5 (11 %)8 (17 %)Previous stroke/TIA37 (80 %)31 (65 %)\n\nBODY.METHODS.INTERVENTION:\nIn addition to the usual care, participants in the IC group were advised to regularly call the phone-based prevention system CAPSYS (preferably twice a week). 
In the course of the recruitment process, they were carefully instructed in the proper use of the system, and they were provided a corresponding user manual in form of a leaflet. The participants could access the CAPSYS system by calling a specific land line number with their stationary or mobile phones. After choosing their preferred language (French or German) and authenticating themselves with their individual patient number and PIN code, the participants were posed the following questions: \nHow many servings of fruits and vegetables have you consumed yesterday?How many servings of whole-grain food have you consumed yesterday?How many servings of sweets have you consumed yesterday?Please enter your current weight.How many cigarettes have you smoked yesterday? [removed for non-smokers]How many minutes have you been physically active yesterday? [moderate or high intensity]Please enter your current systolic blood pressure. The participants could answer to these questions by typing in the numerical values (number of servings, weight, minutes etc.) using the phone key pad. During the recruitment meeting at the beginning of the study, participants were instructed on how to estimate the size of one portion for the relevant food groups, and they were advised to measure their current blood pressure and weight, and to write down all necessary values prior to calling the CAPSYS system. Depending on the data provided by the patient, he or she received immediate customized feedback after each answer concerning his or her progress and advice on how to proceed reducing the corresponding risk factor or motivation on maintaining some observed healthy behaviour respectively. The phone call was closed with a concluding statement summing up the feedback issued before by the system. The questions and feedback were automatically generated by CAPSYS and were issued to the patients in spoken natural language by means of a text-to-speech software (TTS). The system was implemented to run fully automatically and could be accessed at any time of day. In case of systolic blood pressure higher than 140 mmHg, the patient was advised to rest for a while and then repeat the measurement. The system also advised the patients to contact a doctor in case of constantly high blood pressure. Patients with values above 140 mmHg were also particularly marked in the web interface when the treating physician accessed the system. Apart from that, all participants have been informed that there were no direct alerts in cases of high blood pressure provided within the study. The treating neurologist could access the data provided by his patients through a web interface, offering both a numerical as well as an adaptable graphical representation of the patient data for a certain period of time. The use of CAPSYS within the study has not replaced or reduced the existing information exchange between the neurologist and the corresponding referring physicians, which was proceeded according to the practices of the Luxembourgish healthcare system. Technically, CAPSYS offers the possibility for further physicians, e.g. the treating general practitioner, to access and monitor the data gathered from their patients. However, in the scope of this study, this option was deliberately discarded in order to avoid potential bias caused by the general practitioner accessing and interpreting the CAPSYS data. 
In contrast to the IC group, SC participants received only the usual care including blood analyses, blood pressure controls and individual advice on healthy lifestyle during the outpatient treatment given by the neurologist, by the general practitioner and by other physicians. Information material concerning a healthy lifestyle was provided to both groups.\n\nBODY.METHODS.DATA COLLECTION:\nDemographic and medical information of each participant was recorded by the study personnel in a predefined form, including body weight and height, medical history, current medication, smoking habits, blood pressure and heart rate as well as recent blood test results (HDL, LDL, HbA1c, glycaemia and triglycerides). Furthermore, participants were asked to fill in three questionnaires exploring their nutritional habits, physical activity and quality of life. The latter was measured using the standardised EQ-5D-5L instrument provided by the EuroQol Group [11]. Six months after enrolment, participants were asked to fill in the same questionnaires again, and current medical data (weight, blood pressure, heart rate, blood test results, smoking habits and medication) were again recorded by the study personnel. In addition, IC participants were asked to share their experiences with using CAPSYS and to provide their opinions about the system in a predefined questionnaire, which encompasses the well-established System Usability Scale (SUS) [12] and some further more specific questions concerning the usage of CAPSYS and its particular features. There was also space left for open-ended comments and suggestions at the end of the questionnaire.\n\nBODY.METHODS.DEPENDENT MEASURES:\nThe impact of the intervention was assessed based on the changes of cerebro-cardiovascular risk parameters during the six-month trial period, whereby changes in the IC group were compared to those in the control group (SC). Changes in systolic blood pressure, HDL, LDL, HbA1c, glycaemia, triglycerides and BMI were considered as primary dependent measures (see Table 3). Changes in self-reported weekly consumption of fruits and vegetables, whole grain food and sweets as well as changes in self-reported weekly duration of physical activity and quality of life were analysed as secondary measures. Due to the low number of smokers among the study participants (SC: 2, IC: 6), smoking habits were not evaluated.\nTable 3Overview of baseline values and evaluation of primary dependent measures in SC and IC groupsSCICBaselineEvaluationBaselineEvaluation\nParametric data\n\nMean (±SD)\n\nT-test\n\nMean (±SD)\n\nT-test\nSystolic BP [mmHg]139.6 (±21)mean of diff. =0.19145 (±19.4)mean of diff. =−9p =0.96\np\n= 0.03*\n95 % CI = [–7.5, 7.88]95 % CI = [–17.29, –0.71]Cohen's d =0.42BMI [ kg/m2]27 (±4.3)mean of diff. =0.0328 (±4.3)mean of diff. 
=−0.18p =0.9p =0.4395 % CI = [–0.49, 0.55]95 % CI = [–0.65, 0.28]\nNon-parametric data\n\nMedian [IQR]\n\nWilcoxon\n\nMedian [IQR]\n\nWilcoxon\nHDL [ mg/dl]57 [45, 51]pseudo-median of diff =1.553 [47, 74]pseudo-median of diff =−1.5p =0.53p =0.4495 % CI = [–2.5, 4.5]95 % CI = [–4.5, 2]LDL [ mg/dl]96 [71, 114]pseudo-median of diff =−195 [76, 117]pseudo-median of diff =−7.9p =0.76\np\n= 0.04*\n95 % CI = [–10.5, 6]95 % CI = [–18.5, –0.5]r =0.24HbA1c [ %]5.8 [5.6, 5.9]pseudo-median of diff =05.9 [5.5, 6.25]pseudo-median of diff =0.1p =0.92p =0.3795 % CI = [–0.25, 0.2]95 % CI = [–0.1, 0.25]Glycaemia [ mg/dl]98 [89, 106.75]pseudo-median of diff =0.5102 [92.5, 114.5]pseudo-median of diff =2.5p =0.78p =0.1995 % CI = [–5.5, 5]95 % CI = [–2, 6]Triglycerides [ mg/dl]95 [73, 134.25]pseudo-median of diff =−6105 [78, 132.5]pseudo-median of diff =−12.5p =0.39\np\n= 0.04*\n95 % CI = [–21, 10]95 % CI = [–26, –0.5]r =0.25(statistically significant values are marked in bold) Statistical analyses were carried out on an intention-to-treat basis using RStudio (Version 0.98.978) with a significance level of 0.05. Parametric data (paired and unpaired samples) were evaluated with the Student's t-test approach, and the corresponding effect size was calculated with Cohen's d. Non-parametric data were evaluated using the Wilcoxon signed-rank test (for paired samples) or the Mann-Whitney U test (for unpaired samples) respectively, and effect size was calculated as \\documentclass[12pt]{minimal}\n\t\t\t\t\\usepackage{amsmath}\n\t\t\t\t\\usepackage{wasysym} \n\t\t\t\t\\usepackage{amsfonts} \n\t\t\t\t\\usepackage{amssymb} \n\t\t\t\t\\usepackage{amsbsy}\n\t\t\t\t\\usepackage{mathrsfs}\n\t\t\t\t\\usepackage{upgreek}\n\t\t\t\t\\setlength{\\oddsidemargin}{-69pt}\n\t\t\t\t\\begin{document}$r = \\frac {Z}{\\sqrt {N}}$\\end{document}r=ZN. For non-parametric paired samples, the pseudo-median is computed as the Hodges-Lehmann estimate.\n\nBODY.RESULTS.EFFICACY OF THE INTERVENTION:\nFinal questionnaires on nutritional habits, physical activity and quality of life could be collected from 68 participants (SC: 36; IC: 32) at the end of the study. This results in an overall dropout rate of 28 % (SC: 22 %; IC: 33 %). Further participant data, such as blood test values, could be retrieved from medical records. Out of the 48 recruited IC participants, 11 (23 %) did not use CAPSYS at all. The remaining 37 IC subjects (77 %) called the system up to 55 times during the study period (mean = 23.32; SD = 14.1; median = 24). Within-subjects analyses of primary dependent measures showed the following statistically significant changes between baseline and post-intervention: \nDecrease of systolic blood pressure in IC group (two-sided paired-samples t-test: mean of the differences = –9 mmHg; p = 0.03; 95 % CI = [–17.29, –0.71]; Cohen's d = 0.42: large effect size) (see Fig. 1), with no corresponding significant difference in SC group (p = 0.96);\nFig. 1Systolic blood pressure. Significant decrease of systolic blood pressure in IC groupDecrease of low-density lipoprotein (LDL) levels in IC group (Wilcoxon signed-rank test: pseudo-median of the differences = –7.9 mg/dl; p = 0.04; 95 % CI = [–18.5, –0.5]; r = 0.24: medium effect size), with no corresponding significant difference in SC group (p = 0.76) (see Fig. 2);\nFig. 2LDL. 
Significant decrease of LDL levels in IC groupDecrease of triglyceride levels in IC group (Wilcoxon signed-rank test: pseudo-median of the differences = –12.5 mg/dl; p = 0.04; 95 % CI = [–26, –0.5]; r = 0.25: medium effect size), with no corresponding significant difference in SC group (p = 0.39) (see Fig. 3);\nFig. 3Triglycerides. Significant decrease of triglyceride levels in IC groupNo statistically significant changes could be found for any of the other primary measures (BMI, HDL, HbA1c and glycaemia). In this context, it needs to be stated that, at baseline, there were no statistical differences between both groups concerning the proportions of patients taking antihypertensives (SC: 36 [78 %]; IC: 43 [90 %]; p = 0.22), antidiabetic drugs (SC: 5 [11 %]; IC: 8 [17 %]; p = 0.61) or cholesterol-lowering medication (SC: 39 [85 %]; IC: 44 [92 %]; p = 0.47). Concerning the secondary dependent measures, we could observe the following statistically significant results: \nIncrease of self-reported fruit and vegetable consumption in IC group (Wilcoxon signed-rank test: pseudo-median of the differences = 5.4 servings/week; p = 0.04; 95 % CI = [0.5, 10.5]; r = 0.26: medium effect size), with no corresponding significant difference in SC group (p = 0.67);Decrease of self-reported sweets consumption in IC group (Wilcoxon signed-rank test: pseudo-median of the differences = –2 servings/week; p = 0.04; 95 % CI = [–4, –0.00001]; r = 0.26: medium effect size), with no corresponding significant difference in SC group (p = 0.06);Difference between SC and IC groups concerning the change of self-reported whole-grain food consumption (Mann-Whitney U test: difference in location = 3.26 servings/week; p = 0.04; 95 % CI = [0.00006, 7]; r = 0.24: medium effect size);No statistically significant changes for self-reported physical activity and quality of life.\n\nBODY.RESULTS.USABILITY AND USER ACCEPTANCE OF THE INTERVENTION:\nAssessments of the intervention based on the System Usability Scale (SUS) were provided by 28 participants of the IC group, 2 of which were declared invalid, as these participants have demonstrably never used CAPSYS during the trial period, and consequently, they were considered unable to assess the intervention. The average SUS score for CAPSYS was 80.1 (SD = 16.22; median = 78.75; min = 37.5; max = 100; 95 % CI = [73.54, 86.65]). According to Sauro and Lewis, this average score corresponds to an \"A–\" school grade and a percentile of 85–89, meaning that CAPSYS scored better than 85 to 89 % of all systems considered in the Sauro and Lewis study (see [13], chapter 8, page 204, table 8.6). Using the simplified regression equation proposed by Lewis [14], the Likelihood to Recommend for each CAPSYS user was derived from the corresponding SUS scores as LTR = SUS/10. With these values, we could identify 10 promoters (38 %; LTR ≥ 9) and 4 detractors (23 %; LTR ≤ 6) among the 26 participants who provided valid SUS scores, which results in a positive Net Promoter Score (NPS) of 23 [15], meaning that there are 23 % more users who would recommend CAPSYS (promoters) than users who would argue against using the system (detractors). Beside the standardised SUS questionnaire, IC participants were also asked to asses the voice of the text-to-speech module applied in CAPSYS and the automatically generated utterances issued by the system. 
The results concerning these questions are presented in Table 4.\nTable 4Evaluation of CAPSYS's voice and utterancesScaleResultsVoice1: pleasant – 5: unpleasantmean =1.77;median =1.5;SD =0.991: difficult to understand – 5: easy to understandmean =4.28;median =5;SD =1.141: natural – 5: unnaturalmean =2.15;median =2;SD =1.121: too fast – 5: too slowmean =3.2;median =3;SD =0.5Utterances1: diversified – 5: unvariedmean =3.28;median =4;SD =1.241: inappropriate – 5: appropriatemean =3.46;median =3;SD =1.071: too short – 5: too longmean =3.2;median =3;SD =0.5 Concerning the frequency of use, the majority of the 26 IC participants who answered to the usability questionnaire indicated that they would prefer using CAPSYS once (11; 42 %) or twice (9; 35 %) per week. None of the respondents was willing to use the system more than twice a week, 2 (8 %) preferred to use it less than once a week and 4 (15 %) expressed that they would not use it at all. The approach of connecting to CAPSYS by calling the system whenever they wish was favoured by most of the respondents (18; 69 %) over the potential alternatives of being called by the system (3; 12 %) or receiving reminders to call the system (2; 8 %). Although CAPSYS does not offer speech recognition, we asked users if they prefer to have such an input option instead or in combination with the implemented keypad input. In this context, using the phone keypad was identified as the most preferred way of communication with such a system (20; 77 %) as compared to speech input (1; 4 %) or a combination of both (4; 15 %). The majority of the respondents were interested in having insight into the data history gathered by CAPSYS, either online (9; 35 %) or in a printed version (11; 42 %). Concerning a potential alternative interface for CAPSYS beside the phone interface, 10 (38 %) participants were interested in a corresponding website and 4 (15 %) fancied the idea of having a CAPSYS smartphone app. Regarding the question if the use of CAPSYS has motivated them to a healthier lifestyle, 5 (19 %) of the respondents stated that through the intervention they started eating healthier, 3 (12 %) reported to have become physically more active and 12 (46 %) claimed to have achieved both healthier eating and increased physical activity. This means that, in total, 20 (77 %; 95 % CI = [59 %, 95 %]) of the respondents expressed an improvement of their health-related lifestyle (improved nutrition or increased physical activity or both) through the use of CAPSYS.\n\nBODY.DISCUSSION:\nThe positive effects that could be observed in the IC group concerning systolic blood pressure, LDL and triglyceride levels as well as the self-reported improvements of nutritional habits can be regarded as an indicator that a computerised phone-based intervention, such as CAPSYS, can support the reduction of cerebro-cardiovascular risk when offered in addition to usual care. An average decrease in systolic blood pressure of 9 mmHg among CI participants can be regarded as clinically significant [16]. The same also holds for the observed average decreases of 7.9 mg/dl for LDL [17] and 12.5 mg/dl for triglycerides [18]. Though, the relatively wide 95 % confidence intervals retrieved in the statistical analysis indicate that the cohort size was probably too small to be able to demonstrate strong clinical effects. The evaluation of BMI, HDL, HbA1c and glycaemia values showed no significant changes in the two groups. 
Our results are to some extent supported by the recently published Australian CLIP study, which found that telephone coaching performed by human health care professionals can result in a significant reduction of LDL and total cholesterol values in the intervention group [19]. A systematic review by Goode et al. also underlines the efficacy of telephone-based lifestyle coaching. In 20 out of 26 studies, telephone support by health professionals have led to significant improvements concerning physical exercises and healthy nutrition [20]. However, personal telephone coaching by health experts requires intensive use of human resources and is associated with enormous costs, which limits its widespread distribution to larger patient groups. CAPSYS is a system that supports prevention initiatives by supporting individuals in daily life settings. It can be assumed that a large percentage of the total target group is not willing to perform any lifestyle changes at all, no matter which kind of support they will receive. So, it has to be taken into account that our study participants are a selected subgroup of the target population with a potentially higher motivation to change risk factors and lifestyle. However, this selection bias in general concerns all prevention initiatives, and the randomisation process should ensure that the general motivation is balanced across both study arms. Compared to therapy studies, participants in prevention studies mostly do not see any direct measurable advantages in their actual health status. Missing short-term improvements are therefore often a reason for higher dropout rates. However, compared to other studies in secondary prevention, the moderate and balanced dropout rates of both CAPSYS arms should not have significantly affected the study results. On the other hand, self-reported consumptions of fruits, vegetables or sweets have a high potential for reporting bias. Therefore, in our study, this kind of information is only considered as secondary depended measures. The improvements of nutritional habits in the IC group (5 servings of fruits and vegetables more per week; 2 servings of sweets less per week) are significant, but have to be treated with caution as they are based on self-reported values and thus less reliable. As the dropout rates were not significantly different for both groups (SC: 22 %; IC: 33 %; p = 0.31), it can be supposed that the withdrawal of IC patients from the clinical trial was not substantially influenced by the intervention itself, but it was due to common reasons, like lack of time, forgetting appointments or just lack of motivation. Furthermore, it remains open how the observed positive effects will evolve after the study period and if they actually are of a long-term nature. The usability evaluation revealed that the CAPSYS approach was well accepted by the study participants. In particular, the large proportion of respondents (77 %) expressing a positive change of health-related behaviour due to the use of CAPSYS is remarkable. One might argue that this result is strongly biased due to the fact that users who have a positive experience with a system are more likely to finish a study and provide corresponding feedback than users with negative experiences. However, even if we consider all dropout IC participants as non-beneficiaries, the success rate concerning a positive health behaviour change would still be 42 % (95 % CI = [27 %, 57 %]). 
Unfortunately, our cohort was not large enough to perform subgroup analyses in order to evaluate the characteristics of beneficiaries and non-beneficiaries. A follow-up study focussing on such aspects might help to identify appropriate target groups for which a system like CAPSYS might be most beneficial and most accepted. During informal interviews at the end of the study, many of the IC participants explained that they often could not find the time or simply forgot to call CAPSYS. Taking this into account, the implementation of a call reminder functionality seems beneficial, even though in the feedback questionnaire, the majority of the respondents did not favour the concepts of being called or reminded by the system. Another important issue concerning risk factor reduction in the stroke prevention context is the topic of medication adherence. In order to support users in regularly taking their antihypertensive or cholesterol-lowering medication, CAPSYS could be extended to incorporate prompts in its automatically generated feedback messages asking the users if they have taken their prescribed medication according to their individual medication plan. Concerning the technical implementation of CAPSYS, it could be observed that, on average, the artificial voice applied in CAPSYS was perceived as rather pleasant, easy to understand and natural, and the speech rate was perceived as appropriate. The length of the utterances was, on average, perceived as adequate, but the feedback text might also be shortened to some extent. The results also show that there is some space for improvement concerning the diversity and adequacy of the automatically generated feedback. The offered keypad interface for data input was highly accepted by the users, and the need for an alternative input method, such as speech, was detected to be low. Unsurprisingly, only few users were currently interested in a mobile app version of CAPSYS. However, with increasing spread of smartphones, a mobile solution will certainly gain in importance in the future. What is already important for most users is to have access to their own self-monitored data, even if due to technical reasons they might only be able to access them in a printed form.\n\nBODY.CONCLUSION:\nIn summary, the present study provided encouraging results indicating that a computerised phone-based lifestyle coaching system, such as CAPSYS, can support the usual treatment in reducing cerebro-cardiovascular risk factors and that such an approach is in general well accepted by the affected patients. In a next step, CAPSYS could be extended to a disease management tool, involving health professionals from different disciplines, such as neurologists, cardiologists, dieticians, physiotherapists etc., in the treatment of patients. One aspect that could be explored in future research is if the motivational effect of CAPSYS could be strengthened through an increased involvement of the treating physician and through incorporation of social interaction.\n\n**Question:** Compared to Standard care what was the result of CAPSYS system on LDL values?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decreased
|
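The methods quoted in the rows above describe the trial's statistical comparisons only in prose (paired t-tests with Cohen's d for the parametric measures; Wilcoxon signed-rank tests with Hodges-Lehmann pseudo-medians of the differences for the non-parametric ones). The short Python sketch below illustrates those calculations on synthetic placeholder data; it is not the study's analysis code (the trial reports using RStudio), and every variable name and numeric value in it is hypothetical.

```python
# Minimal sketch (synthetic data, not the study's analysis code) of the
# paired comparisons described in the quoted methods: a paired t-test with
# Cohen's d for parametric measures, and a Wilcoxon signed-rank test with a
# Hodges-Lehmann pseudo-median of the differences for non-parametric ones.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 32  # hypothetical number of paired observations

# Hypothetical baseline / six-month systolic blood pressure (parametric case)
bp_baseline = rng.normal(145, 19, size=n)
bp_followup = bp_baseline + rng.normal(-9, 15, size=n)

diff = bp_followup - bp_baseline
t_stat, t_p = stats.ttest_rel(bp_followup, bp_baseline)
cohens_d = diff.mean() / diff.std(ddof=1)  # paired-samples effect size
print(f"paired t-test: p = {t_p:.3f}, mean diff = {diff.mean():.1f} mmHg, |d| = {abs(cohens_d):.2f}")

# Hypothetical baseline / six-month LDL values (non-parametric case)
ldl_baseline = rng.normal(96, 25, size=n)
ldl_followup = ldl_baseline + rng.normal(-8, 20, size=n)

d = ldl_followup - ldl_baseline
w_stat, w_p = stats.wilcoxon(d)

# Hodges-Lehmann estimate: median of all pairwise Walsh averages of the diffs
walsh = (d[:, None] + d[None, :]) / 2.0
pseudo_median = np.median(walsh[np.triu_indices(n)])
print(f"Wilcoxon signed-rank: p = {w_p:.3f}, pseudo-median diff = {pseudo_median:.1f} mg/dl")
```

The scipy-based version is purely illustrative; the same computations map onto R's `t.test(..., paired = TRUE)` and `wilcox.test(..., conf.int = TRUE)`, the latter of which reports the Hodges-Lehmann pseudo-median as its point estimate.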