An Atypical Cause of Gastrointestinal Bleeding
Nicolai Wennike, MRCP(UK); Tim Battcock, MBChB, FRCP; Andrew J. Bell, MA, MB, FRCP, FRCPath

Background
A 53-year-old man who was diagnosed with multiple myeloma (IgAκ) 18 months ago is admitted to the hospital via the emergency department (ED) with a 1-week history of melena, hematemesis, and lethargy. There is no associated weight loss, abdominal pain, dysphagia, or history of upper gastrointestinal (GI) hemorrhage. The patient has no risk factors for peptic ulcer disease, does not drink alcohol or smoke, and is not regularly taking any medications (including no recent nonsteroidal anti-inflammatory drugs [NSAIDs] or steroid use). He has no allergies of note, and his family history and social history are unremarkable. Other than multiple myeloma, which resulted in spinal cord compression that required radiotherapy (with full resolution of symptoms), the patient has no significant past medical history. He has not needed chemotherapy to date. On direct questioning, he does not describe any symptoms suggestive of active multiple myeloma and organ involvement.

On presentation, the patient appears clinically well, with no evidence of anemia, jaundice, lymphadenopathy, or peripheral signs of GI disease. He is hemodynamically stable, with a pulse of 90 bpm, blood pressure of 150/70 mm Hg (with no postural blood pressure drop), and a urine output of approximately 30 mL/hr. On examination, there is no evidence of active GI bleeding, his abdomen is soft and without any peritonitis or organomegaly, and a rectal examination shows evidence of melena, with no masses and a normal-sized prostate. His respiratory examination is unremarkable, with a clear chest and no evidence of aspiration pneumonia. The cardiac and neurologic examinations reveal nothing of significance.

The initial laboratory examinations show a hemoglobin of 85 g/L (8.5 g/dL); a low mean corpuscular volume (79 fL), with an iron deficiency picture; a normal international normalized ratio of 1.0; and mild dehydration, with urea nitrogen 10.1 mmol/L (28.29 mg/dL), creatinine 160 μmol/L (1.81 mg/dL), sodium 136 mmol/L (136 mEq/L), and potassium 3.9 mmol/L (3.9 mEq/L). Liver tests show a normal screen, with alanine aminotransferase 30 U/L, albumin 40 g/L (4 g/dL), alkaline phosphatase 50 U/L, and bilirubin 12 μmol/L (0.70 mg/dL). The patient is treated with intravenous fluid and 2 units of blood. He remains hemodynamically stable and is subsequently able to undergo an esophagogastroduodenoscopy (see Figure 1).
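
For readers moving between the SI and conventional units quoted above, the arithmetic is a simple division or multiplication by a standard conversion factor. The short Python sketch below is purely illustrative (the conversion factors are standard laboratory reference values, not part of the case report) and reproduces the bracketed figures.

    # Illustrative SI-to-conventional unit conversions for the values above.
    # Conversion factors are standard reference values (assumed, not from the case).
    def creatinine_umol_to_mg_dl(umol_per_l):
        return umol_per_l / 88.4        # 160 umol/L -> ~1.81 mg/dL

    def bilirubin_umol_to_mg_dl(umol_per_l):
        return umol_per_l / 17.1        # 12 umol/L -> ~0.70 mg/dL

    def urea_nitrogen_mmol_to_mg_dl(mmol_per_l):
        return mmol_per_l * 2.8         # 10.1 mmol/L -> ~28.3 mg/dL

    print(round(creatinine_umol_to_mg_dl(160), 2))      # 1.81
    print(round(bilirubin_umol_to_mg_dl(12), 2))        # 0.7
    print(round(urea_nitrogen_mmol_to_mg_dl(10.1), 1))  # 28.3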

Discussion
Biopsies of the polyps taken at the time of the endoscopy showed evidence of multiple myeloma, type IgAκ. Multiple myeloma is a debilitating malignancy that is part of a spectrum of diseases ranging from monoclonal gammopathy of unknown significance (MGUS) to plasma cell leukemia. First documented in 1848, multiple myeloma is a disease characterized by a clonal proliferation of malignant B cells in the bone marrow (in which the predominant cell type is plasma cells) that results in an overabundance of monoclonal paraprotein. It is predominantly a disease of the elderly (median age: 60 years), with an incidence of 9.5 per 100,000 population and a slight male predominance.

It is the second most common hematologic cancer (10%), representing 1% of all cases of cancer; despite new advances, multiple myeloma still carries a poor prognosis, with a median survival of 2-3 years. The pathophysiology of multiple myeloma is that of a chromosomal translocation between the immunoglobulin heavy-chain gene (on the 14th chromosome, locus 14q32) and an oncogene (often 11q13, 4p16.3, 6p21, 16q23, or 20q11). This mutation results in dysregulation of the oncogene, which is thought to be an important initiating event in the pathogenesis of myeloma. The result of this mutation is proliferation of a plasma cell clone and genomic instability that leads to further mutations and translocations. The chromosome 14 abnormality is observed in about 50% of all myeloma cases; the other 50% of cases result from a deletion of (parts of) the 13th chromosome. The resulting plasma cells produce cytokines (especially interleukin [IL]-6) that cause osteopenia and create an environment for malignant cells to thrive.


Management of Severe COPD Reviewed
Laurie Barclay, MD, Hien T. Nghiem, MD

April 20, 2010 - Various strategies and recommendations to treat patients with severe chronic obstructive pulmonary disease (COPD) are provided in a clinical review published in the April 15 issue of the New England Journal of Medicine.

"The sentinel clinical feature of severe ...COPD is dyspnea on exertion," writes Dennis E. Niewoehner, MD, from the Pulmonary Section, Veterans Affairs Medical Center in Minneapolis, Minnesota. "Its onset is usually insidious, and it may progress to severe disability over a period of years or decades. Other common symptoms include cough, sputum production, wheezing, and chest congestion." The typical clinical manifestations of advanced COPD result from severe airflow obstruction, which can be confirmed by spirometry. Although physical findings may include a barrel-shaped chest, inspiratory retraction of the lower ribs (Hoover's sign), a prolonged expiratory phase, and use of the accessory muscles of respiration, these findings are sometimes absent even in patients with severe COPD.

Failure to confirm COPD with spirometry often leads to misdiagnosis. However, spirometry is a poor guide for decision making regarding treatment continuation or modification in an individual patient. Spirometric evidence of airflow obstruction is defined as a ratio of the postbronchodilator forced expiratory volume in 1 second (FEV1) to forced vital capacity (FVC) of less than 0.70.
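
To make the spirometric definition concrete, the brief Python sketch below encodes the stated criterion; the helper function and example volumes are hypothetical illustrations, not part of the review.

    # Illustrative check of the criterion described above:
    # airflow obstruction if postbronchodilator FEV1/FVC < 0.70.
    def has_airflow_obstruction(fev1_litres, fvc_litres):
        return (fev1_litres / fvc_litres) < 0.70

    # Example (hypothetical values): FEV1 1.2 L, FVC 3.0 L -> ratio 0.40 -> obstruction.
    print(has_airflow_obstruction(1.2, 3.0))  # True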

Overall severity of COPD can be classified based on FEV1 percentage of the predicted normal value, as well as on clinical criteria, such as the degree of breathlessness caused by specific tasks and the frequency of exacerbations. Exacerbations often require medical visits and hospitalizations, causing a dramatic increase in healthcare costs. The relative risk for treatment failure (defined as no resolution or clinical deterioration) is lowered by approximately 50% when antibiotics are used for COPD exacerbations. Antibiotics are most effective in patients who have cough productive of purulent sputum.

Complications of severe COPD include pulmonary hypertension and cor pulmonale resulting from chronic hypoxemia and hypercapnia. Severe COPD is also associated with an elevated risk for cardiovascular disease, osteoporosis, lung cancer, depression, and other systemic diseases.

Management Strategies
Management should include patient education during the initial visit, which should focus on the signs and symptoms of a severe exacerbation and the need for prompt recognition and treatment. The most important aspect of management is smoking cessation, which should be addressed at every visit, as long as the patient continues smoking.

Pharmacotherapy may include an inhaled long-acting β2-agonist, an inhaled long-acting anticholinergic agent, and/or an inhaled corticosteroid. The long-acting β2-agonists salmeterol and formoterol offer at least 12 hours of sustained bronchodilation, whereas the inhaled long-acting anticholinergic agent tiotropium is effective for at least 24 hours.

Drugs from 2 of these 3 classes should be combined for patients with severe, exacerbation-prone COPD. Because they lower the relative risk for a severe exacerbation by 15% to 20%, these medications should be continued even if they do not provide symptomatic relief. Adverse events of long-acting bronchodilators are typically mild.

For rescue use, a short-acting bronchodilator should be given. Albuterol or another short-acting β2-adrenergic agonist and ipratropium bromide, a short-acting anticholinergic agent, may be used alone or combined. Patients should be instructed regarding proper inhaler technique. The faster onset of action of albuterol vs ipratropium bromide may give patients more rapid relief. Long-term oxygen therapy should be prescribed and used for 18 hours or more each day if arterial oxygen saturation is 88% or lower at rest in a stable clinical state.


The Curious Case of a Child Who Looked Like a Spotted Dog
Ronald M. Cyr, MD

Case Report
In the early part of August 1841, I was requested to attend the wife of an innkeeper, in labour with her eighth child. She was a strong, healthy woman, aged about forty-seven. The os uteri was dilated to the size of half a crown [the crown was a silver coin measuring 39 mm in diameter and worth 5 shillings (£1 = 20 shillings)]; membranes unruptured; vertex presentation. The labour being tedious, it was necessary to rupture the membranes, which, from their toughness, and not yielding to the fingernail, was effected by a quill. There was much mental excitement during the greater part of the labour.

Fearing cerebral congestion, and from the rigid state of the os uteri and the perineum, it was almost decided that venesection [phlebotomy] should be had recourse to. Though repeatedly told all was right, she persisted in a contrary opinion. As her pains increased, so did her ideas, that her child was like the spotted dog by which she had been frightened in the kitchen; as it was always before her eyes, night and day. (These were the words of the patient.) She had scarcely uttered those words, when, by a powerful contraction of the uterus, a fine full-grown female child was expelled.

Before the child was taken from under the bedclothes, the patient distinctly said these words in the presence of the nurse and a second attendant "My child is marked like Troughton's dog (the spotted), and at the back of the neck where the black one held it." On bringing the child to the light, such was the fact; only three or four spots about the size of a sixpence on the face, the rest of the body beautifully marked with black spots varying from the size of a pea to that of a sixpence, with the exception of the back of the neck, which had a brown black appearance covered with hairs, extending about two inches and a half across the neck and shoulders, and one inch and a half down the back. It appears from the patient's statement, that about the period of her third month of pregnancy, she was crossing the kitchen with a pint of beer, when a black dog and a spotted terrier, then lying under the table, began to fight close to her feet; and in the fright turning round, she saw the black dog seize the other by the back of the neck: a chillness came over her, and she felt ill all the day.

What is singular, her last two children were born marked from mental impressions made (as she believed) about the third month of each pregnancy; therefore, she was more convinced that she was to have a spotted child this time. The child is living, and very much admired. The spotted dog frequently passes my house; many persons call at the inn for a pint of beer as an excuse to see the rare spotted lass.


Hormone Therapy for Menopause Reviewed CME/CE
Laurie Barclay, MD, Penny Murata, MD

April 8, 2010 - Women must be informed of the potential benefits and risks of all treatment options for menopausal symptoms and concerns and should receive individualized care, according to a review of the role of perimenopausal hormone therapy published in the April issue of Obstetrics & Gynecology.

"With the first publication of the results of the Women's Health Initiative (WHI) trial in 2002, the use of HT [hormone therapy] declined dramatically," write Jan L. Shifren, MD, and Isaac Schiff, MD, from Harvard Medical School and Massachusetts General Hospital in Boston. "Major health concerns of menopausal women include vasomotor symptoms, urogenital atrophy, osteoporosis, cardiovascular disease, cancer, cognition, and mood.... Given recent findings, specifically regarding the effect of the timing of HT initiation on coronary heart disease [CHD] risk, it seems appropriate to reassess the clinician's approach to menopause in the wake of the recent reanalysis of the WHI."

Many therapeutic options are currently available for management of quality of life and health concerns in menopausal women. Treatment of vasomotor hot flushes and associated symptoms is the main indication for hormone therapy, which is still the most effective treatment of these symptoms and is currently the only US Food and Drug Administration-approved option. For healthy women with troublesome vasomotor symptoms who begin hormone therapy at the time of menopause, the benefits of hormone therapy generally outweigh the risks.

However, hormone therapy is associated with a heightened risk for coronary heart disease. Based on recent analyses, this increased risk is seen primarily in older women and in those who reached menopause several years previously. Hormone therapy should not be used to prevent heart disease, based on these analyses. However, this evidence does offer reassurance that hormone therapy can be used safely in otherwise healthy women at the menopausal transition to manage hot flushes and night sweats.

Although hormone therapy may help prevent and treat osteoporosis, it is seldom used for this indication alone, particularly if other effective options are well tolerated. Short-term treatment with hormone therapy is preferred to long-term treatment, in part because of the increased risk for breast cancer associated with extended use. The lowest effective estrogen dose should be given for the shortest duration required because risks for hormone therapy increase with advancing age, time since menopause, and duration of use.

Low-dose, local estrogen therapy is recommended vs systemic hormone therapy when only vaginal symptoms are present. Alternatives to hormone therapy should be recommended for women with or at increased risk for disorders that are contraindications to hormone therapy use. These include breast or endometrial cancer, cardiovascular disease, thromboembolic disorders, and active hepatic or gallbladder disease.

In addition to estrogen therapy, progestin alone, and combination estrogen-progestin therapy, there are several nonhormonal options for the treatment of vasomotor symptoms. Lifestyle interventions include reducing body temperature, maintaining a healthy weight, stopping smoking, practicing relaxation response techniques, and receiving acupuncture. Although efficacy greater than placebo is unproven, nonprescription medications that are sometimes used for treatment of vasomotor symptoms include isoflavone supplements, soy products, black cohosh, and vitamin E.

There are several nonhormonal prescription medications sometimes used off-label for treatment of vasomotor symptoms, but they are not approved by the Food and Drug Administration for this purpose. These drugs, and their accompanying potential adverse effects, include the following:
  • Clonidine, 0.1-mg weekly transdermal patch, with potential adverse effects including dry mouth, insomnia, and drowsiness.
  • Paroxetine (10-20 mg/day; controlled release, 12.5-25 mg/day), which may cause headache, nausea, insomnia, drowsiness, or sexual dysfunction.
  • Venlafaxine (extended release, 37.5-75 mg/day), which is associated with dry mouth, nausea, constipation, and sleeplessness.
  • Gabapentin (300 mg/day to 300 mg 3 times daily), with possible adverse effects of somnolence, fatigue, dizziness, rash, palpitations, and peripheral edema.

"Women must be informed of the potential benefits and risks of all therapeutic options, and care should be individualized, based on a woman's medical history, needs, and preferences," the review authors write. "For women experiencing an early menopause, especially before the age of 45 years, the benefits of using HT until the average age of natural menopause likely will significantly outweigh risks. The large body of evidence on the overall safety of oral contraceptives in younger women should be reassuring for those experiencing an early menopause, especially given the much lower estrogen and progestin doses provided by HT formulations."

Dr. Shifren serves as a scientific advisory board member for the New England Research Institutes. She has been a research study consultant for Eli Lilly & Co and Boehringer Ingelheim and has received research support from Procter & Gamble Pharmaceuticals.


Physical activity in pregnancy: a qualitative study of the beliefs of overweight and obese pregnant women
Zoe Weir, Judith Bush, Stephen C Robson, Catherine McParlin, Judith Rankin, Ruth Bell

Abstract
Background
Whilst there has been increasing research interest in interventions which promote physical activity during pregnancy, few studies have yielded detailed insights into the views and experiences of overweight and obese pregnant women themselves. The qualitative study described in this paper aimed to: (i) explore the views and experiences of overweight and obese pregnant women; and (ii) inform interventions which could promote the adoption of physical activity during pregnancy.

Methods
The study was framed by a combined Subtle Realism and Theory of Planned Behaviour (TPB) approach. This enabled us to examine the hypothetical pathway between beliefs and physical activity intentions within the context of day to day life. The study sample for the qualitative study was chosen by stratified, purposive sampling from a previous study of physical activity measurements in pregnancy.

Research participants for the current study were recruited on the basis of Body Mass Index (BMI) at booking and parity. Semi-structured, in-depth interviews were conducted with 14 overweight and obese pregnant women. Data analysis was undertaken using a Framework Approach and was informed by TPB.

Results
Healthy eating was often viewed as being of greater importance for the health of mother and baby than participation in physical activity. A commonly cited motivator for maintaining physical activity during pregnancy was that it helped to reduce pregnancy-related weight gain. However, participants often described how they would wait until the postnatal period to try and lose weight. A wide range of barriers to physical activity during pregnancy were highlighted, including both internal (physical and psychological) and external (work, family, time and environmental) factors. The study participants also lacked access to consistent information, advice and support on the benefits of physical activity during pregnancy.

Conclusions
Interventions to encourage recommended levels of physical activity in pregnancy should be accompanied by accessible and consistent information about the positive effects for mother and baby. More research is required to examine how to overcome barriers to physical activity and to understand which interventions could be most effective for overweight/obese pregnant women. Midwives should be encouraged to do more to promote activity in pregnancy.


Walking Protects Women Against Stroke: WHS Long-Term Follow-Up
Pam Harrison, Hien T. Nghiem, MD

April 13, 2010 - Women who walk 2 or more hours a week, especially at a brisk pace, are significantly less likely to experience any type of stroke than women who do not walk, according to long-term follow-up findings from the Women's Health Study (WHS).

Findings were published online April 6 and will appear in the June issue of Stroke. Jacob Sattelmair, MSc, Harvard School of Public Health, Boston, Massachusetts, found that during an average follow-up of 11.9 years, walking time and walking pace were inversely related, either significantly or with borderline significance, to total, ischemic, and hemorrhagic stroke risk among 39,315 healthy US women 45 years and older who participated in the WHS.

Specifically, women who walked 2 hours or more per week had a 30% lower risk for any stroke than women who did not walk, whereas women whose usual walking pace was brisk (> 4.8 km/hour) had a 37% lower risk for any stroke compared with women who did not walk.

Women who walked 2 hours or more per week also had a 57% lower risk for hemorrhagic stroke compared with women who did not walk, whereas women whose usual walking pace exceeded 4.8 km/hour had a 68% lower risk for hemorrhagic stroke than women who did not walk.
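
To relate these percentages to the underlying effect estimates, an "X% lower risk" corresponds to a relative risk (or hazard ratio) of 1 minus X/100 versus the non-walking reference group. The small Python sketch below is illustrative only and is not the study's own analysis.

    # Illustrative conversion of the quoted risk reductions into relative risks
    # versus women who did not walk (not the investigators' code).
    def relative_risk_from_reduction(percent_lower):
        return 1.0 - percent_lower / 100.0

    for pct in (30, 37, 57, 68):  # figures quoted above
        print(pct, "% lower risk -> relative risk", round(relative_risk_from_reduction(pct), 2))
    # 30 -> 0.7, 37 -> 0.63, 57 -> 0.43, 68 -> 0.32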

Interestingly, vigorous physical activity was not related to stroke risk in the same study. There was an inverse association between total leisure-time physical activity and risk for total and ischemic stroke, but the association was of borderline significance. Nevertheless, women who were most active in leisure-time activities were 17% less likely to have any type of stroke than the least active women.

"This was a cohort of female health professionals, predominantly white, so they may not be representative of all middle-aged women in the US, but there really is no obvious reason to suggest that findings would necessarily be different in other populations," Mr. Sattelmair told Medscape Neurology. "I think the overall take-home in terms of stroke prevention is that regular physical activity is essential to minimize the risk of cardiovascular disease."

Observational Study
On scheduled completion of the WHS in March 2004, women participating were invited to continue in a follow-up observational study; 88% of the WHS cohort who were still alive did. At baseline, women were asked to estimate the average time spent on 8 groups of recreational activities during the past year: walking or hiking; jogging; running; bicycling; aerobic exercise; aerobic dance; use of exercise machines; tennis, squash, or racquetball; lap swimming; and lower-intensity exercise, including yoga.

"They also reported their usual walking pace," the investigators added. The cohort was stratified into those who do not walk regularly and those who walk at a pace of less than 3.2 km/hour, 3.2 to 4.7 km/hour (considered to be an average pace), 4.8 to 6.3 km/hour (considered to be brisk pace), or 6.4 or more km/hour (a striding pace). During follow-up, 579 total strokes occurred: 473 ischemic strokes, 102 hemorrhagic strokes, and 4 strokes of unknown type.

"There was no overall linear trend of decreased risk for total stroke across categories of vigorous activity...and findings for ischemic stroke again mirrored those for total stroke," the investigators observe. Neither age nor body mass index modified the relationship between physical activity and stroke risk.


Single Dose of Aspirin Effective in Relieving Migraine Pain
Lisa Nainggolan, Charles P. Vega, MD

April 19, 2010 - A single 1000-mg dose of aspirin is an effective treatment of acute migraine headaches for more than half of people who take it, and the addition of 10 mg of metoclopramide may reduce nausea, according to the findings of a literature review published online April 14 in the Cochrane Database of Systematic Reviews.

"Aspirin plus metoclopramide would seem to be a good first-line therapy for acute migraine attacks in this population," write Varo Kirthi, MD, and colleagues, with the Pain Research and the Nuffield Department of Anaesthetics at the John Radcliffe Hospital, in Oxford, United Kingdom.

The researchers searched Cochrane CENTRAL, MEDLINE, EMBASE, and the Oxford Pain Relief Database for studies through March 10, 2010. The 13 selected studies, including 4222 participants, were randomized, double-blind, placebo-controlled, or active-controlled; evaluated the use of aspirin to treat a single migraine headache episode; and included at least 10 participants per treatment group. In addition, studies compared aspirin 900 mg or 1000 mg (alone or in combination) and metoclopramide 10 mg vs placebo or other active comparators (typically sumatriptan 50 mg or 100 mg).

Compared with placebo, aspirin reduced associated symptoms of nausea, vomiting, photophobia, and phonophobia. A single 1000-mg dose of aspirin reduced pain from moderate or severe to no pain by 2 hours in 24% of people vs 11% taking placebo. Severe or moderate pain was reduced to no worse than mild pain by 2 hours in 52% taking aspirin vs 32% taking placebo. Headache relief at 2 hours was sustained for 24 hours more often with aspirin vs placebo.
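
Those response rates also translate into absolute risk differences and numbers needed to treat (NNT). The short Python calculation below is an illustration of that arithmetic, not a figure taken from the review itself.

    # Illustrative NNT calculation from the response rates quoted above.
    def number_needed_to_treat(active_rate, placebo_rate):
        return 1.0 / (active_rate - placebo_rate)

    # Pain-free at 2 hours: 24% with aspirin vs 11% with placebo.
    print(round(number_needed_to_treat(0.24, 0.11), 1))  # 7.7, i.e. about 8
    # No worse than mild pain at 2 hours: 52% vs 32%.
    print(round(number_needed_to_treat(0.52, 0.32), 1))  # 5.0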

In addition, metoclopramide, when combined with aspirin, significantly reduced nausea (P < .00006) and vomiting (P = .002) vs aspirin alone, although it had minimal effect on pain. Fewer participants taking aspirin needed rescue medication vs those taking placebo. Adverse events were reported more often with aspirin vs placebo but were mostly mild and transient.

The review also found that aspirin alone was comparable to the prescription medication sumatriptan 50 mg for 2-hour pain-free relief and headache relief, whereas sumatriptan 100 mg was superior to aspirin plus metoclopramide for 2-hour pain-free, but not headache, relief; no data comparing sumatriptan with aspirin for 24-hour headache relief were available.

"Aspirin plus metoclopramide will be a reasonable therapy for acute migraine attacks, but for many it will be insufficiently effective," noted study author R. Andrew Moore, DSc, in a written release. "We are presently working on reviews of other OTC [over-the-counter] medicines for migraines, to provide consumers with the best available evidence on treatments that dont need a prescription," he said.


Largest Study to Date Links Chocolate to Lower BP and CV Risk
Lisa Nainggolan, Charles P. Vega, MD

April 8, 2010 - The largest observational study so far to examine the association between chocolate consumption and risk of cardiovascular disease has found that those who ate the most chocolate (around 7.5 g per day) had a 39% lower risk of myocardial infarction (MI) and stroke than individuals who ate almost no chocolate (1.7 g per day).

Lead author Dr Brian Buijsse (German Institute of Human Nutrition, Nuthetal, Germany) told heartwire: "This shows that habitual consumption of chocolate is related to a lower risk of heart disease and stroke that is partly explained by blood-pressure reduction. The risk reduction is stronger for stroke than for MI, which is logical because it appears that chocolate and cocoa have a pronounced effect on BP [blood pressure], and BP is a higher risk factor for stroke than for MI." Buijsse and colleagues report their findings online March 31, 2010, in the European Heart Journal.

However, Buijsse cautions that only small amounts of chocolate were associated with the benefits and it is too early to give recommendations on chocolate consumption: "Maybe it's a boring message, but it's a little too early to come up with recommendations, because chocolate contains so many calories and sugar, and obesity is already an epidemic. We have to be careful." However, he added that if people did want to treat themselves, they would be better off choosing small amounts of chocolate, preferably dark chocolate, over other sweet snacks. "We know it is the cocoa content in chocolate that is important, so the higher the cocoa content, the better."

Dr Steffen Desch (University of Leipzig, Heart Center, Germany), who was not involved with this study but who has performed research on the effects of chocolate on blood pressure, told heartwire: "This is an interesting study that adds to the growing body of evidence that flavanol-rich chocolate might be associated with health benefits. Several epidemiological studies (including the Zutphen Elderly Study, by the same first author) and even more physiological trials have been published before."

"What is missing now is a large-scale randomized trial of flavanol-rich chocolate versus control. The most reasonable end point would probably be the change in blood pressure between groups." However, Desch added, "the major problems in designing such a study are the lack of funding and finding an appropriate control substance. To the best of my knowledge, there is no commercially available flavanol-free chocolate that offers the distinct bitter taste and dark color inherent to cocoa-rich chocolate."

Biggest Chocolate Consumers Had Lowest Blood Pressure
Buijsse and colleagues followed 19,357 people, aged between 35 and 65, who were participants in the Potsdam arm of the European Prospective Investigation into Cancer (EPIC). They received medical checks, including blood pressure and height and weight measurements, at the start of the study (1994-1998), and they also answered questions about their diet, lifestyle, and health, including how frequently they ate 50-g bars of chocolate.

The research was conducted before the health benefits of chocolate and cocoa were recognized, so no differentiation was made between milk, dark, and white chocolate in the study. But in a subset analysis of 1568 participants later asked to recall their chocolate intake over a 24-hour period, 57% ate milk chocolate, 24% dark chocolate, and 2% white chocolate.

Participants were divided into quartiles according to their level of chocolate consumption. Those in the top quartile, eating around 7.5 g of chocolate a day, had blood pressure that was about 1 mm Hg (systolic) and 0.9 mm Hg (diastolic) lower than those in the bottom quartile. In follow-up questionnaires, sent out every two or three years until December 2006, the participants were asked whether they had had a heart attack or stroke, information that was subsequently verified by medical records from general physicians or hospitals. Death certificates from those who had died were also used to identify MIs and strokes.

"Our hypothesis was that because chocolate appears to have a pronounced effect on blood pressure, chocolate consumption would lower the risk of strokes and heart attacks, with a stronger effect being seen for stroke, explained Buijsse.


Skin prick testing in patients using beta-blockers: a retrospective analysis
Irene N Fung, Harold L Kim

Abstract
Rationale: The use of beta-blockers is a relative contraindication in allergen skin testing, yet there is a paucity of literature on adverse events in this circumstance. We examined a population of patients who were taking beta-blockers when skin tested, to look for any adverse effects.

Methods: Charts from 2004-2008 in a single allergy clinic were reviewed for any patients taking a beta-blocker when skin tested. Data were examined for skin test reactivity, type of skin test, concomitant asthma diagnosis, allergens tested, and adverse events.

Results: One hundred and ninety-one patients were taking beta-blockers when skin testing occurred. Seventy-two patients had positive skin tests. No tests resulted in an adverse event.

Conclusions: These data demonstrate the relative safety of administering skin prick tests to patients on beta-blocker treatment. Larger prospective studies are needed to substantiate the findings of this study.

Introduction
Beta antagonists, commonly known as beta-blockers, are a widely prescribed class of medications. Beta-blockers are used in the treatment of congestive heart failure, coronary heart disease, cardiac arrhythmia, hypertension, tremor, glaucoma, and migraine headache. Importantly, beta-blockers significantly reduce both morbidity and mortality rates in congestive heart failure, in acute coronary syndrome, and after myocardial infarction.

However, beta-blockade may place atopic subjects at an increased risk of an anaphylactic reaction. Case reports suggest that when systemic allergic reactions occur secondary to immunotherapy, drugs, foods, and insect stings, they may be of greater severity in patients taking beta-blockers. Due to the potential of beta-blockers to amplify the effects of anaphylaxis, these drugs are relatively contraindicated during allergy skin testing. The American Academy of Allergy Asthma & Immunology (AAAAI) outlines this in its position statement, stating that "Systemic reactions to skin testing are rare. Nevertheless, special precautions, when these are appropriate, should be taken when the patient who needs sensitivity testing for IgE-mediated disease cannot stop treatment with a beta-blocking agent."

However, in our literature review on the topic, no case reports or prospective studies report adverse events in patients on beta-blockers who underwent skin testing. This retrospective study investigates whether there is any increased risk of anaphylaxis in patients who were allergy skin tested while they continued on a beta-blocker medication.


The anti-inflammatory effects of levocetirizine - are they clinically relevant or just an interesting additional effect?
Garry M Walsh

Abstract
Levocetirizine, the R-enantiomer of cetirizine dihydrochloride, has pharmacodynamically and pharmacokinetically favourable characteristics, including rapid onset of action, high bioavailability, high affinity for and occupancy of the H1-receptor, limited distribution, and minimal hepatic metabolism, together with minimal untoward effects. Several well-conducted randomised clinical trials have demonstrated the effectiveness of levocetirizine for the treatment of allergic rhinitis and chronic idiopathic urticaria in adults and children. In addition to the treatment of the immediate short-term manifestations of allergic disease, there appears to be a growing trend for the use of levocetirizine as long-term therapy. In addition to being a potent antihistamine, levocetirizine has several documented anti-inflammatory effects, observed at clinically relevant concentrations, that may enhance its therapeutic benefit. This review will consider the potential or otherwise of the reported anti-inflammatory effects of levocetirizine to enhance its effectiveness in the treatment of allergic disease.

Introduction
The effects of histamine are exerted through three well-defined classical G protein-coupled histamine receptor subtypes termed H1R, H2R, and H3R, and the more recently described H4R. Histamine signalling through H1R is responsible for the majority of the immediate manifestations of allergic disease. Levocetirizine (Xyzal) is the single R-isomer of the racemic mixture piperazine H1R-antagonist cetirizine dihydrochloride, in a once-daily 5 mg formulation. The parent compound cetirizine (Zyrtec), a once-daily 10 mg formulation, is also an effective treatment for allergic disease and is the most widely used second-generation antihistamine worldwide. Levocetirizine is a selective, potent, oral histamine H1R antagonist that is licensed in Europe as tablets and oral solution for use in adults and children over 2 years of age for the symptomatic treatment of allergic rhinitis (including persistent allergic rhinitis) and chronic idiopathic urticaria.

More recently, levocetirizine tablets under the trade name Xyzal have been approved by the Food and Drug Administration for use in adults and children over 6 years of age in the United States.

Efficacy and safety
Levocetirizine is a potent antihistamine as demonstrated by its ability to inhibit cutaneous histamine-induced itching and the wheal and flare reaction. The histamine-induced wheal and flare model in human skin is a widely used, reproducible and standardized methodology that gives an objective measure of the effectiveness of antihistamines in human subjects, together with any differences in onset and duration of action. The majority of these studies found levocetirizine to be the most potent of the antihistamines tested, including the parent compound cetirizine. Large, well-designed controlled clinical trials have demonstrated the efficacy of levocetirizine in adults with allergic rhinitis and chronic idiopathic urticaria, while well-conducted studies have demonstrated levocetirizine to be safe and effective in young children with atopic rhinitis or chronic urticaria.
Levocetirizine appears to have significant effects on nasal blockage. The positive effects on nasal congestion are important findings, as many antihistamines are ineffective in this regard. Indeed, nasal congestion is thought to be caused not primarily by histamine but by other mast cell-derived mediators, including prostaglandin D2 and leukotrienes acting in concert. The positive effect of levocetirizine on this important symptom of allergic rhinitis is likely due to its additional anti-inflammatory properties (see below).

In terms of its pharmacological profile, levocetirizine exhibits rapid absorption and high bioavailability, giving a fast onset and long duration of antihistaminic effect. These observed effects are mirrored by calculations of histamine H1 receptor occupancy that show a rapid and long-lasting presence of levocetirizine at its site of action. In terms of safety, levocetirizine exhibits a low potential for drug interactions together with a lack of effect on cognition, psychomotor function and the cardiovascular system. Indeed, a recent study examined the sedative potential of a comprehensive battery of first-, second- and newer-generation antihistamines (including fexofenadine, desloratadine and levocetirizine) by calculating a proportional impairment ratio for each drug based on studies that used standardised objective methodology and psychometric tests. Levocetirizine had the lowest proportional impairment ratio of all the antihistamines reviewed, followed by fexofenadine and desloratadine respectively.


Treating rhinitis in the older population: special considerations
Raymond G Slavin

Abstract
Rhinitis in the elderly is a common but often neglected condition. Structural changes in the nose associated with aging predispose the elderly to rhinitis. There are a number of specific factors that affect medical treatment of the elderly, including polypharmacy, cognitive dysfunction, changes in body composition, impairment of liver and renal function, and the cost of medications in the face of limited resources. Rhinitis in the elderly can be placed in several categories, and treatment should be appropriate for each condition. The most important aim is to moisten the nasal mucosa, since the nose of the elderly is so dry. Great caution should be used in treatment with first-generation antihistamines and decongestants. Medications generally well tolerated by the elderly are second-generation antihistamines, intranasal anti-inflammatory agents, leukotriene modifiers and ipratropium nasal spray.

Rhinitis is a common and bothersome condition in the elderly. Despite its importance, little attention is paid to it in the general medical literature. In the most recently published, highly regarded geriatric text, rhinitis is not included in the index whereas rhinophyma is. The number of Americans older than 65 years of age will increase from 35 million to 86 million by the year 2050. While the exact number of elderly patients with rhinitis is not known, it is believed that 40% of the general population experiences nasal symptoms. It would be safe to say that the many changes that occur in the connective tissue and vasculature of the nose predispose aging individuals to chronic rhinitis, making the percentage of the elderly with nasal symptoms significantly higher than that of the general population.

The elderly have a generalized decrease in body water content, and this, along with degeneration of mucus-secreting glands, reduces the effectiveness of the mucociliary system, resulting in symptoms of nasal stuffiness. In addition, a decrease in nasal blood flow leads to atrophy and drying of the nasal mucous membrane and increased mucus viscosity. Structural changes in the nose with age include atrophy of the collagen fibers and loss of elastic fibers in the dermis. Weakening of the upper and lower nasal cartilage, retraction of the nasal columella, and downward rotation of the nasal tip contribute to an increase in nasal airway resistance.

This article will deal with the special considerations of treating rhinitis in the older population. Appendix 1 lists the specific factors that may affect general medical treatment in the elderly. The elderly patient is frequently being treated for a variety of medical conditions with a number of medications. The more medications that are prescribed, the less likely the patient is to comply. Aside from complying with directions for a large number of medications, the elderly patient frequently has cognitive dysfunction with a resultant decrease in memory.

A number of changes in body composition associated with growing older may affect the distribution of particular medications. These changes include decreases in muscle mass, fat, and body water. Medications metabolized through the liver and kidney may be affected by decreased function of these organ systems. Finally, many elderly patients have limited financial resources and may simply not be able to afford the cost of the prescribed medications.


Introduction of oral vitamin D supplementation and the rise of the allergy pandemic
Matthias Wjst

Abstract
The history of the allergy pandemic is well documented, enabling us to put the vitamin D hypothesis into its historical context. The purpose of this study is to compare rickets prevalence, vitamin D supply, and allergy prevalence at 50-year intervals by means of a retrospective analysis of the literature since 1880. English cities in 1880 were characterized by an extremely high rickets prevalence, the beginning of commercial cod liver oil production, and the near absence of any allergic diseases. By 1930 hay fever prevalence had risen to about 3% in English-speaking countries, where cod liver oil was preferentially used for the treatment of rickets. In 1980 vitamin D was used nation-wide in all industrialized countries as a supplement to industrial baby food, thus eradicating nearly all cases of rickets. At the same time the allergy prevalence reached an all-time high, affecting about 30% of the population.

Time trends are therefore compatible with the vitamin D hypothesis although direct conclusions cannot be drawn. It is interesting, however, to note that there are at least two earlier research papers linking synthesized vitamin D intake and allergy (Reed 1930 and Selye 1962) published prior to the modern vitamin D hypothesis first proposed in 1999.

The vitamin D allergy hypothesis attributes the initial sensitization against allergens during the newborn period to immunological side effects of vitamin D supplements used for rickets prevention. The increasing interest in the vitamin D hypothesis is understandable because all other hypotheses about the origin of the allergy epidemic have largely failed to provide any clear answers. Moreover, none of the current hypotheses have ever been tested for compatibility with the historical development of the allergy pandemic.

It may therefore be interesting to examine historical data on vitamin D intake and prevalence of allergy. The chosen method was a systematic analysis of articles published in PubMed since 1950, combined with a full-text search of all issues of Science and Nature since 1869. Furthermore, current Google Books content was searched, in addition to a manual search of textbooks, for the keywords vitamin D (and chemical analogues) and allergy between 1920 and 1950 (see also acknowledgments).

Allergic manifestations were so rare in 1880 that today they would be considered an "orphan disease". This may reflect a recognition bias in a community that was understandably preoccupied with more pressing, life-threatening conditions such as cholera, tuberculosis, typhoid and measles. Nevertheless, allergic symptoms were clearly described at that time. The few studies on allergic diseases from the 19th century all rely on a limited number of cases. The British doctor Harrison Blackley wrote in his 1873 book "Hay Fever: Its causes, treatment, and effective prevention": "Even in this country, where the disorder probably had its commencement and where it is still more common than in any other part of Europe, there are medical men to be found who know very little about it; and on the Continent there are still some to be found who have never even heard of the disease". The origins of the disease are vague.

The first formal description of hay fever is usually ascribed to John Bostock, who presented his own case in 1819 to the London Medico-Chirurgical Society. Another description was made in 1859 when the German professor Philipp Phoebus from Giessen published the first large allergy study, which was based on 158 cases. The sample consisted of patients from many hospitals because allergy was such a rare disease. In 1876 the American physician George Beard, a contemporary of Blackley, assembled only 100 patients. At the end of the 19th century, allergy prevalence may therefore be estimated at 0.1% in England, as well as in the United States of America. In continental Europe, it was not until 1906 that the term "Allergie" was introduced by Clemens von Pirquet.


The role of Probiotics in allergic diseases
Sonia Michail

Abstract
Allergic disorders are very common in the pediatric age group. While the exact etiology is unclear, evidence is mounting to incriminate environmental factors and an aberrant gut microbiota with a shift of the Th1/Th2 balance towards a Th2 response. Probiotics have been shown to modulate the immune system back to a Th1 response. Several in vitro studies suggest a role for probiotics in treating allergic disorders. Human trials demonstrate a limited benefit for the use of probiotics in atopic dermatitis in a preventive as well as a therapeutic capacity. Data supporting their use in allergic rhinitis are less robust. Currently, there is no role for probiotic therapy in the treatment of bronchial asthma. Future studies will be critical in determining the exact role of probiotics in allergic disorders.


Introduction
Currently, an estimated 20% of the population worldwide is suffering from some form of allergic disorder with a prevalence that continues to rise. For example, the prevalence of childhood asthma in the USA increased by 50% from 1980 to 2000. Atopic diseases involve Th2 responses to allergens. These clinical disorders are characterized by immediate hypersensitivity. Although the exact etiology of allergic diseases remains ambiguous, many investigators have proposed that environmental exposures may be major trigger factors in the development of allergic diseases.

The rise in prevalence of allergic diseases has been seen mostly in industrialized countries, which led investigators to formulate the hygiene hypothesis in an attempt to explain the basis of the disease. This hypothesis holds that reduced family size and fewer childhood infections have lowered our exposure to microbes, which play a crucial role in the maturation of the host immune system during the first years of life. In addition to environmental factors, the intestinal flora may be a contributor to allergic disease due to its substantial effect on mucosal immunity. Allergic responses are thought to arise if there is an absence of microbial exposure while the immune system is still developing. Exposure to microbial flora early in life allows for a change in the Th1/Th2 balance, favoring a Th1 cell response. Several reports suggest that the make-up of the intestinal microflora can be different in individuals with allergic disorders and in those who reside in industrialized countries where the prevalence of allergy is higher. For example, children from an industrialized country like Sweden harbor fewer Lactobacilli and Bifidobacteria (and more Staphylococcus aureus and Clostridia) in their bowels in comparison to children who live in countries like Estonia where allergic disorders are not as common.

The concept that children with allergic disorders harbor a different profile of microflora has been supported by several other studies. Perhaps the most convincing of these is the KOALA study, which examined the flora of 957 infants in the Netherlands. The study revealed that C. difficile colonization at one month of age was associated with an increased likelihood of eczema, recurrent wheezing, and atopic dermatitis. E. coli colonization was associated with eczema rather than recurrent wheezing or atopic dermatitis. No association with bifidobacteria, B. fragilis, or lactobacilli colonization was observed.

While this concept has been validated in several other studies, there are a few reports that do not show a significant difference in microflora composition. A recent study comparing microflora composition of 324 European infants showed no association between food sensitization or atopic dermatitis and the intestinal bacteria. In general, however, most studies suggest that an association exists.



Diagnostic evaluation of food-related allergic diseases
John Eckman, Sarbjit S Saini and Robert G Hamilton

Abstract
Food allergy is a serious and potentially life-threatening problem for an estimated 6% of children and 3.7% of adults. This review examines the diagnostic process that begins with a patient's history and physical examination. If the suspicion of IgE-mediated food allergy is compelling based on the history, skin and serology tests are routinely performed to provide confirmation for the presence of food-specific IgE antibody. In selected cases, a provocation challenge may be required as a definitive or gold standard reference test for confirmation of IgE-mediated reactions to food. Variables that influence the accuracy of each of the diagnostic algorithm phases are discussed. The clinical significance of food allergen-specific IgE antibody cross-reactivity and of IgE antibody epitope mapping of food allergens is reviewed. The advantages and limitations of the various diagnostic procedures are examined with an emphasis on future trends in technology and reagents.


Introduction
Approximately 6% of children and 3.7% of adults experience IgE-mediated allergic symptoms following the ingestion of food. This contrasts with the approximately 20% of the population that alters their diet for a perceived adverse reaction to food. The allergist has the challenge of accurately identifying immunologically and non-immunologically mediated reactions in the setting of this perception, using information provided by the patient's history, skin and serology testing for food-specific IgE, and food challenges.

A number of general issues must be considered when reviewing studies on the diagnosis of food allergy. These considerations include the characteristics of the patient population in individual studies, the instrumentation and interpretation of allergen-specific IgE skin and serology testing, and variations in food challenge protocols. This review examines the diagnostic process that begins with a patient's history and physical examination. We will overview considerations involved in skin testing and then focus on specific IgE testing, which has become of paramount importance in both diagnosing and following the natural history of food allergy. We highlight potential problems with the "gold standard" of food allergy diagnosis, the double-blinded, placebo-controlled food challenge.

We then review the importance of considering cross-reactivity in the interpretation of skin testing and specific-IgE testing while discussing new technologies that may help decipher the degree of cross-reactivity. Finally, we mention the experimental studies of food-allergen epitope mapping in predicting the natural history of milk and egg allergy.


Diseases of the salivary glands in infants and adolescents
Maik Ellies, Rainer Laskawi

Abstract
Background: Diseases of the salivary glands are rare in infants and children (with the exception of diseases such as parotitis epidemica and cytomegaly), and the therapeutic regimen differs from that in adults. It is therefore all the more important to gain exact and extensive insight into general and special aspects of pathological changes of the salivary glands in these age groups. The etiology and pathogenesis of these entities are still not fully known for the age group in question, so that general rules for treatment, based on clinical experience, cannot be given, particularly in view of the small number of cases of the different diseases. Swellings of the salivary glands may be caused by acute and chronic inflammatory processes, by autoimmune diseases, by duct obstruction due to sialolithiasis, and by tumors of varying degrees of malignancy. Clinical examination and diagnosis must also differentiate between salivary gland cysts and inflammation or tumors.

Conclusion: Salivary gland diseases are rare in childhood and adolescence, and their pattern of incidence differs very much from that of adults. Acute and chronic sialadenitis not responding to conservative treatment requires an appropriate surgical approach. Although salivary gland tumors are rare overall, the proportion of malignant parotid tumors is higher in juvenile patients, a fact that has to be considered in diagnosis and therapy.

Introduction
Diseases of the salivary glands are rare in infants and children (with the exception of diseases such as parotitis epidemica and cytomegaly) and the therapeutic regimen differs from that in adults. It is therefore all the more important to gain exact and extensive insight into general and special aspects of pathological changes of the salivary glands in these age groups. Previous studies have dealt with the clinical distribution pattern of the various pathological entities in infants and older children.

According to these studies, important pathologies in these age groups are acute and chronic sialadenitis (with special regard to chronic recurrent parotitis) and secondary inflammation associated with sialolithiasis. The etiology and pathogenesis of these entities in young patients, however, are not yet sufficiently understood, so that therapeutic strategies based on extensive clinical experience cannot be defined, particularly in view of the small number of patients in the relevant age groups. The acute forms of sialadenitis are mainly caused by viral or bacterial infections. The predominant cause of parotid swelling in infancy is parotitis epidemica.

This disease has its peak incidence between the ages of 2 and 14. Acute inflammation of the parotid gland, with evidence of Staphylococcus aureus, is often seen in neonates and in children with an underlying systemic disease accompanied by fever, dehydration, immunosuppression and general morbidity. Acute inflammation of the submandibular gland, as opposed to that of the parotid, is usually due to a congenital anomaly of a salivary duct or an excretory duct obstruction. Reports on sialolithiasis in infants and adolescents, however, are very scarce and are mostly presented as rarities in clinical case reports. For chronic sialadenitis the predominant etiological factors are secretion disorders and immunological reactions. Chronic recurrent parotitis, whose pathogenesis has still not been completely elucidated, is, next to mumps, the most frequent sialadenitis in infancy.

Neoplastic changes are very rare in children and adolescents compared with salivary gland inflammations. Their annual incidence in all juvenile age groups is 1 to 2 tumor cases per 100,000 persons. According to Eneroth, salivary gland tumors make up 0.3% of all human tumors, and less than 10% of all juvenile head and neck tumors are located in the salivary glands. Only 1% of all head and neck tumors originate in the salivary glands, regardless of patient age. This low incidence makes it difficult to establish a generally applicable therapeutic regimen, a task made no easier by the fact that no more than 5% of all salivary gland tumors are found in the age group up to 16 years. As a consequence, therapies very often lean on experience gained over the last decades from long-term studies of the treatment of adult patients. Primary (dysgenetic) and secondary (acquired) salivary gland cysts and other malformations of the salivary glands have to be distinguished early and unequivocally from benign and, above all, malignant lesions by histopathological examination.


Non-allergic rhinitis: a case report and review
Cyrus H Nozad, L Madison Michael, D Betty Lew, Christie F Michael

Abstract
Rhinitis is characterized by rhinorrhea, sneezing, nasal congestion, nasal itch and/or postnasal drip. Often the first step in arriving at a diagnosis is to exclude or diagnose sensitivity to inhalant allergens. Non-allergic rhinitis (NAR) comprises multiple distinct conditions that may even co-exist with allergic rhinitis (AR). They may differ in their presentation and treatment. In addition, the pathogenesis of NAR is not clearly elucidated and is likely varied. There are many conditions that can have presentations similar to those of NAR or AR, including nasal polyps, anatomical/mechanical factors, autoimmune diseases, metabolic conditions, genetic conditions and immunodeficiency.

Here we present a case of a rare condition initially diagnosed and treated as typical allergic rhinitis vs. vasomotor rhinitis, but found to be something much more serious. This case illustrates the importance of maintaining an appropriate differential diagnosis for a complaint routinely seen as mundane. The case presentation is followed by a review of the potential causes and pathogenesis of NAR.

Introduction
The term rhinitis can be used to describe many distinct entities with varying pathogeneses, despite similar presentations. Generally, rhinitis is considered allergic if significant inhalant allergy is diagnosed and is considered non-allergic when symptomatology is perennial or periodic and not IgE mediated. Thus non-allergic rhinitis (NAR) comprises a mixed bag of conditions ranging from vasomotor rhinitis (VMR) to hormonally induced rhinitis.

Overall, rhinitis results in significant cost to the world population. In 2002, the direct and indirect costs of allergic rhinitis (AR) were estimated at $7.3 billion and $4.28 billion, respectively. Given that an estimated 1 in 3 patients with rhinitis is diagnosed with NAR, amounting to 19 million people in the United States alone, it is reasonable to conclude that NAR also imposes a significant economic burden.

NAR is a condition primarily seen in adulthood with 70% of cases developing after the age of 20. There is a greater prevalence among females compared to males, and the overall prevalence of NAR in industrialized countries has ranged from 20-40%. The following case presentation is an example of a patient with typical NAR symptoms who fits the epidemiological profile, but who presented atypically, failed to respond to standard therapy and was subsequently found to have a much more serious underlying condition.


Acute mastoiditis: A one-year study in the Pediatric Hospital of Cairo University
Mosaad Abdel-Aziz, Hassan El-Hoshy

Abstract
Background: Acute mastoiditis is a serious complication of acute otitis media, especially in the pediatric age group. This study reports the authors' experience in the treatment of children admitted with acute mastoiditis to the Pediatric Hospital of Cairo University throughout 2007 and aims to evaluate our current management of this serious disease.

Methods: Nineteen children were included in this study, 11 females and 8 males, whose ages ranged from 9 months to 11 years. All children were treated with intravenous antibiotics on initial admission; myringotomy was considered for cases that did not respond to medical treatment within 48 hours, while cortical mastoidectomy (with myringotomy) was reserved for cases that presented initially with a subperiosteal abscess (with or without a postauricular fistula), cases with intracranial complications, and cases that showed no response to myringotomy after 48 hours. Patients were followed up for at least 1 year.

Results: Medical management alone was sufficient in 5 cases (26%); all of them had an erythematous, tender mastoid on first presentation. Seven cases (37%) needed myringotomy; 2 of them showed no response and required cortical mastoidectomy, while the other 5 responded well, except for 1 case that developed a postauricular subperiosteal abscess 2 months later, necessitating cortical mastoidectomy, with no evidence of recurrence by the end of the follow-up period. Seven cases (37%) presented with subperiosteal abscess and needed cortical mastoidectomy with myringotomy; they showed no recurrence by the end of the study.

Conclusion: Conservative management is effective in the treatment of uncomplicated acute mastoiditis, but myringotomy should be considered if there is no response within 48 hours. Cortical mastoidectomy should be used in conjunction with medical management in the treatment of complicated cases.

Background
Acute mastoiditis is a serious complication of acute otitis media (AOM). It is more common in the pediatric age group, as most patients are younger than 4 years; this higher incidence in the younger age group reflects the peak age for AOM. Its incidence has decreased since the advent of antibiotic therapy. Some recent literature has indicated an increase in disease incidence in recent years, especially in countries with less antibiotic prescription, while other reports found no increased incidence despite national guidelines restricting antibiotic prescription. The disease may cause significant and even life-threatening complications beyond the tympanomastoid system, including subperiosteal abscess, Bezold's abscess, facial paralysis, suppurative labyrinthitis, meningitis, epidural and subdural abscess, brain abscess, lateral sinus thrombophlebitis, and otitic hydrocephalus.

The treatment of acute mastoiditis is variable, ranging from conservative management in the form of parenteral antibiotic therapy, to myringotomy (with or without ventilation tube placement), to more aggressive intervention in the form of mastoidectomy. This study reports the authors' experience in the treatment of children admitted with acute mastoiditis to the Pediatric Hospital of Cairo University throughout 2007 and evaluates our current management of this serious disease in the pediatric population.


Evaluation of young smokers and non-smokers with Electrogustometry and Contact Endoscopy
Pavlidis Pavlos, Nikolaidis Vasilios, Anogeianaki Antonia, Koutsonikolas Dimitrios, Kekes Georgios and Anogianakis Georgios

Abstract
Background:
Smoking induces changes in taste functionality under conditions of chronic exposure. The objective of this study was to evaluate taste sensitivity in young smokers and non-smokers and to identify any differences in the shape, density, and vascularisation of the fungiform papillae (fPap) of their tongue.

Methods: Sixty-two male subjects who served in the Greek military forces were randomly chosen for this study; 34 were non-smokers and 28 were smokers. Smokers were chosen on the basis of their habit of holding the cigarette at the centre of their lips. Taste thresholds were measured with Electrogustometry (EGM). The morphology and density of the fungiform papillae (fPap) at the tip of the tongue were examined with Contact Endoscopy (CE).


Results: A statistically significant difference (p < 0.05) was found between the taste thresholds of the two groups, although not all smokers presented with elevated taste thresholds: six of them (21%) had taste thresholds similar to those of non-smokers. Differences in the shape and vessels of the fungiform papillae between the groups were also detected. Fewer and flatter fPap were found in 22 smokers (79%).

Conclusion: The majority of smokers showed elevated taste thresholds in comparison to non-smokers. Smoking is an important factor that can lead to decreased taste sensitivity. The combination of methods such as EGM and CE can provide useful information about the vascularisation of taste buds and their functional ability.

Background
Complete loss of taste is rather uncommon because the presence of four major afferent routes for taste provides substantial redundancy in the sensory communication of taste and a substantial back-up system in case of failure of any single nerve. There are two categories of taste measurement: whole-mouth and regional tests. A preliminary evaluation of a patient suffering from taste disorders can be performed with the use of colourless solutions of sweet, bitter, sour, and salty stimuli. More sophisticated is regional chemogustometry, whereby chemicals are applied to part of the tongue using a piece of filter paper or a cotton swab. Regional chemogustometry can also be performed using closed chambers cemented to the tongue.

The simplest regional test for the evaluation of taste is EGM, which was introduced into the clinical assessment of taste sensitivity during the 1950s. Compared to tests based on chemical solutions, EGM is an efficient clinical tool used in the evaluation of taste disorders caused by different factors such as middle-ear surgery, Bell's palsy, tumors, and tonsillectomy. The increasing application of this method is due to its ease of use, the short time required, and its quantitative character. CE is a diagnostic technique suitable for head and neck screening. It was developed for observing cell architecture at the epithelial surface. The first application of CE was in gynecology. The quality of the images and magnifications obtained with endoscopes led to the application of CE in otolaryngology. CE allows for both in vivo and in situ observation of pathology in the superficial layer of the tongue, nasal mucosa, vocal cords (in laryngomicrosurgery), and nasopharynx.

The effects of smoking on taste sensitivity and olfaction have been studied since the early 1960s. However, to date, only a few experimental studies provide histological data about the effects of smoke on the size and shape of the tongue papillae. The aim of this study is to investigate whether smokers and non-smokers differ in EGM thresholds on the anterior and posterior tongue and soft palate, and whether any observed difference in EGM thresholds on the anterior tongue of smokers vs. non-smokers can be attributed to a difference in the density or morphology of fungiform papillae at that site.


Homeopathic treatment of patients with chronic sinusitis: A prospective observational study with 8 years follow-up
Claudia M Witt, Rainer Lüdtke, and Stefan N Willich

Abstract
Background:
An evaluation of homeopathic treatment and outcomes in patients who had suffered from sinusitis for at least 12 weeks, in a usual care situation.

Methods: Subgroup analysis including all patients with chronic sinusitis (ICD-9: 473.9; duration of at least 12 weeks) from a large prospective multicentre observational study population. Consecutive patients presenting for homeopathic treatment were followed up for 2 years, and complaint severity, health-related quality of life (QoL), and medication use were regularly recorded. We also present patient-reported health status 8 years after the initial treatment.

Results: The study included 134 adults (mean age 39.8 ± 10.4 years; 76.1% women) treated by 62 physicians. Patients had suffered from chronic sinusitis for 10.7 ± 9.8 years. Almost all patients (97.0%) had previously been treated with conventional medicine. For sinusitis, the effect size (effect divided by the standard deviation at baseline) of complaint severity was 1.58 (95% CI 1.77; 1.40), 2.15 (2.38; 1.92), and 2.43 (2.68; 2.18) at 3, 12, and 24 months, respectively. QoL improved accordingly, with SF-36 physical component score changes of 0.27 (0.15; 0.39), 0.35 (0.19; 0.52), and 0.44 (0.23; 0.65) and mental component score changes of 0.66 (0.49; 0.84), 0.71 (0.50; 0.92), and 0.65 (0.39; 0.92) at these time points. The effects were still present after 8 years, with an SF-36 physical component score change of 0.38 (0.10; 0.65) and mental component score change of 0.74 (0.49; 1.00).
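
For clarity, the effect size quoted above is simply the mean change in complaint severity divided by the standard deviation at baseline. In symbols (the notation below is ours, not the authors', and assumes severity scores decrease as patients improve):

% Effect size at follow-up time t: mean change from baseline divided by the baseline SD.
\[
ES_t \;=\; \frac{\bar{x}_0 - \bar{x}_t}{SD_0}
\]
% \bar{x}_0 = mean complaint severity at baseline, \bar{x}_t = mean at follow-up time t, SD_0 = baseline standard deviation.
% The reported values ES_{3} = 1.58, ES_{12} = 2.15, and ES_{24} = 2.43 correspond to t = 3, 12, and 24 months.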

Conclusion: This observational study showed relevant improvements that persisted for 8 years in patients seeking homeopathic treatment because of sinusitis. The extent to which the observed effects are due to the life-style regulation and placebo or context effects associated with the treatment needs clarification in future explanatory studies.

Background
Chronic sinusitis is generally accepted to be a common illness incurring considerable costs, despite limited epidemiological data. It is defined as an inflammation of the nasal mucosa and paranasal sinuses lasting at least 12 weeks, which may cause nasal blockage or congestion, mucous discharge, facial pain or pressure, and/or impaired smell. Polyps, which may or may not be present, are increasingly recognized as part of the sinusitis pathology. Several factors have been found to contribute to the disease, namely insufficient ciliary motility, allergy and asthma, bacterial infection, and, more rarely, morphological anomalies, immune deficiencies, and Samter's triad (salicylate sensitivity, asthma, nasal polyps). While the roles of fungi and of hormonal changes during pregnancy are unclear, chronic sinusitis may also be an early symptom of systemic disease.

Standard treatment recommendations are to suppress the inflammatory process with corticosteroids; antibiotics may also be necessary to combat opportunistic infections, and possible underlying diseases may require their own specific medication. Saline douching can provide some symptomatic relief. Surgical intervention has been found to be as effective as medical treatment but should be reserved for refractory cases. Some complementary and alternative medical (CAM) treatments might be helpful as adjuvants. Homeopaths appear to be consulted more frequently by patients with acute and chronic sinusitis (13% of the homeopathy group vs. 7% of the conventional group in an observational comparison study), but to date no research has looked into the effects of homeopathy for chronic sinusitis.

Homeopathy is practised in many regions of the world, especially in high-income countries, where it is the most popular treatment form among traditional, complementary, or alternative medical therapies. Homeopathic prescribing accounts for concomitant symptoms in addition to the predominant pathology; therefore, the same main diagnosis may be treated with different remedies in different patients ('individualisation'). The prescribed drugs ('remedies') are under constant debate. They are produced by alternating steps of diluting and agitating a starting substance ('potentiating'). After several repetitions, dilutions beyond Avogadro's number are reached, and the probability approaches zero that even a single molecule of the starting substance remains present in the drug. Such 'high potencies' are often used; however, their effects are the subject of scientific controversy.

Apparently, the inconsistent results seen in meta-analyses of placebo-controlled trials pooling a great variety of diseases and ailments might be a consequence of trial selection. We analyzed, with respect to diagnosis, the data from our prospective observational study, which globally evaluated the details and effects of homeopathic treatment in a usual care situation (3,981 patients over 8 years). This paper presents the 134 adults who consulted a homeopathic physician because of chronic sinusitis.


Correlating the site of tympanic membrane perforation with hearing loss
Titus S Ibekwe, Onyekwere G Nwaorgu and Taiwo G Ijaduola

Abstract
Background:
It is recognized that the size of a tympanic membrane (TM) perforation is proportional to the magnitude of hearing loss; however, there is no clear consensus on the effect of the location (site) of the perforation on the hearing loss. Hence, this study was set up to investigate the relationship between the location of the perforation on the TM and hearing loss.

Methods: A cross-sectional prospective study of consecutive adult patients with perforated TMs conducted in the ENT clinic of University College Hospital, Ibadan, between January 1st, 2005 and July 31st, 2006. Instruments used for data collection and processing included questionnaires, video- and micro-otoscopy, a pure tone audiometer, and the Image J and SPSS software packages.


Results: Sixty-two patients (22 males, 40 females), aged 16-75 years (mean = 35.4 +/- 4), with 77 perforated ear drums were studied; 15 (24.2%) had bilateral TM perforations, 21 (33.9%) right unilateral, and 26 (41.9%) left unilateral. The locations of the TM perforations were central in 60 (77.9%), antero-inferior in 6 (9.6%), postero-inferior in 4 (5.2%), antero-superior in 4 (5.2%), and postero-superior in 3 (3.9%), with sizes ranging from 1.51% to 89.05% and corresponding hearing levels of 30 dB to 80 dB. Fifty-nine percent had pure conductive hearing loss and the rest mixed. Hearing loss (dBHL) increased with the size of the perforation (P = 0.01, r = 0.05). The correlation of the location of the perforation with the magnitude of hearing loss was (P = 0.244, r = 0.273) for acute TM perforations and (P = 0.047, r = 0.31) for chronic perforations.

Conclusion: The location of perforation on the tympanic membrane (TM) has no effect on the magnitude of hearing loss in acute TM perforations while it is significant in chronic ones.

Background
Apart from conducting sound waves across the middle ear, the tympanic membrane also serves a protective function for the middle ear cleft and round window niche. An intact tympanic membrane protects the middle ear cleft from infection and shields the round window from direct sound waves, which is referred to as the 'round window baffle'. This shield is necessary to create a phase differential so that the sound wave does not impact on the oval and round windows simultaneously, which would dampen the flow of sound energy being transmitted in a single direction from the oval window through the perilymph. It has been found that the enhanced ratio of the surface area of the tympanic membrane to that of the oval window increases the sound pressure by about 27 decibels (dB), whereas the lever action of the ossicles contributes about 3 dB.
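
As a rough arithmetic check on those figures, pressure gains can be expressed on the decibel scale. The area ratio and ossicular lever ratio used below are illustrative values chosen to be consistent with the approximately 27 dB and 3 dB quoted above; they are not measurements from this study.

% Decibel gain for a pressure ratio r: G = 20 log10(r).
\[
G_{\text{area}} = 20\log_{10}\!\left(\frac{A_{\text{TM}}}{A_{\text{OW}}}\right) \approx 20\log_{10}(22) \approx 27\ \text{dB}
\]
\[
G_{\text{lever}} \approx 20\log_{10}(1.3) \approx 2\text{--}3\ \text{dB},
\qquad
G_{\text{total}} = G_{\text{area}} + G_{\text{lever}} \approx 30\ \text{dB}
\]
% A_TM = effective vibrating area of the tympanic membrane; A_OW = area of the oval window (stapes footplate).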

A perforation on the tympanic membrane reduces the surface area of the membrane available for sound pressure transmission and allows sound to pass directly into the middle ear. As a result, the pressure gradient between the 'inner' and 'outer' surfaces of the membrane virtually becomes insignificant. The effectiveness with which the tympanic membrane transmits motion to the ossicular chain is thus impaired along with the level of hearing. It has been established that the larger the perforation on the tympanic membrane, the greater the decibel loss in sound perception. A total absence of the tympanic membrane would lead to a loss in the transformer action of the middle ear. The location of the perforation is believed by some schools of thought to have a significant effect on the magnitude of hearing loss.
For instance, posterior quadrant perforations are believed to be worse than anterior ones because of the direct exposure of the round window to sound waves, and perforations at or near the site of tympanic membrane attachment to the manubrium are believed to have more severe effects than those of comparable size at other sites. However, some workers believe that there is no significant effect associated with the location of the perforation. This divergence of opinion informed the undertaking of this study, which set out to investigate the relationship between the location of the perforation on the TM and the magnitude of conductive hearing loss, with a view to contributing to the body of knowledge on this subject.


Treatment of Acne Scarring
M. Alam, MD, MSCI; J. S. Dover, MD, FRCPC, FRCP

Abstract
Acne scarring is common but surprisingly difficult to treat. Scars can involve textural change in the superficial and deep dermis, and can also be associated with erythema, and less often, pigmentary change. In general, treatment of acne scarring is a multistep procedure. First, examination of the patient is necessary to classify the subtypes of scarring that are present. Then, the patient's primary concerns are elicited, and the patient is offered a menu of procedures that may address the various components of the scarring process. It is important to emphasize to the patient that acne scarring can be improved but never entirely reversed.

Classification of Acne Scars
There are several classifications of acne scars. A recent, comprehensive, and functional scheme was proposed, whereby scars are classified as rolling, ice-pick, shallow box-car, and deep box-car. Rolling scars are gently undulating, appearing like hills and valleys without sharp borders. Ice-pick scars, also known as pitted scars, appear as round, deep depressions culminating in a pinpoint base; in cross-section, they are shaped like a "V." Box-car scars have a flat, "U"-shaped base. Broader than ice-pick scars, they are round, polygonal, or linear at the skin surface. Shallow box-car scars terminate in the shallow to mid dermis, and deep box-car scars penetrate to the reticular dermis.

Treatment Modalities for Textural Change
Among the therapeutic tools for treatment of acne scarring are resurfacing methods, fillers, and other dermal remodeling techniques. These methods can be adapted to treat specific scar types.

Resurfacing
Resurfacing options include:
  • Ablative resurfacing with carbon dioxide or erbium:yttrium aluminum garnet (Er:YAG) laser, medium-depth to deep chemical peel, dermabrasion, or plasma.
  • Nonablative and partially ablative resurfacing with fractional laser or infrared laser (1,320 nm neodymium:YAG [Nd:YAG], 1,450 nm diode, or 1,540 nm erbium:glass).

Ablative Resurfacing
Ablative resurfacing entails removal of the epidermis and partial thickness dermis, and is considered by most as the gold standard for pitted scars and some box-car scars. While ablative resurfacing is most effective if it is deep, thereby removing as much as possible of the depressed scar, it cannot be so deep as to destroy the base of the hair follicles; such destruction could impede skin regrowth, and induce scar formation at the treated site. Carbon dioxide resurfacing is the most effective but also most operator-dependent method for deep ablative resurfacing.

Dermabrasion is possibly even more effective, but this is another procedure that is very technique dependent. Deep phenol (Baker-Gordon) peels, also highly effective, have fallen out of favor because of the associated cardiac risk and the frequency of porcelain-white postinflammatory hypopigmentation. Definitive ablative resurfacing results in 2 weeks of patient downtime, during which period re-epithelialization occurs. More superficial resurfacing with the Er:YAG laser or plasma can provide recovery within 1 week, but deeper acne scars may be less improved.

Nonablative Resurfacing
Nonablative resurfacing with laser and lights warms the dermis and can provide modest improvement of acne scarring by stimulating collagen remodeling. All subtypes of acne scars can be improved by nonablative therapy. Among the lasers used for this indication are devices originally developed for other uses, such as pulsed-dye lasers, intense pulsed light devices, and Q-switched Nd:YAG lasers. However, more recently nonablative devices have been optimized to specifically target textural irregularities. For example, a series of treatments with infrared lasers can significantly improve uneven contour associated with acne scarring. These treatments are typically uncomfortable and may require oral and/or topical analgesics.

Similarly, fractional resurfacing is quite effective in the treatment of acne scarring. Fractional resurfacing is a minimally ablative technique that creates microscopic zones of dermal injury in a grid-like pattern. Because only a small proportion of the skin surface is treated at one time, and since the stratum corneum is not perforated, recovery is quick. However, a series of treatments is needed.


Topical Treatments for Melasma and Postinflammatory Hyperpigmentation
C.B. Lynde; J.N. Kraft, MD; C.W. Lynde, MD, FRCPC

Abstract
Hyperpigmentation disorders of the skin are common and can be the source of significant psychosocial distress for patients. The most common of these disorders are melasma and postinflammatory hyperpigmentation. Sunscreen use and minimizing sun exposure are crucial in all cases. Topical applications are the mainstay of treatment and include phenols, retinoids, corticosteroids, and their combinations.

Introduction
Hyperpigmentation of the skin is a very common problem, with many patients seeking therapies to improve their cosmetic appearance. It is the result of an increase in cutaneous melanin deposition either by increased melanin synthesis or, less commonly, by a greater number of melanocytes. The amount of color change depends on the location of the melanin deposition. Epidermal involvement appears as brown discoloration whereas dermal deposition appears as blue-grey.

Mixed epidermal and dermal depositions appear as brown-grey discolorations. The use of a Wood's lamp can often be very beneficial in determining the location of melanin deposition showing enhancement of color contrast in lesional skin for the epidermal type, but not the dermal types. The mixed type has enhancement in some areas of lesional skin, but not in other areas. Whether the melanin is deposited in the epidermis or dermis is important therapeutically because dermal hyperpigmentation is much more challenging to treat.

The most common pigmentation disorders for which patients seek treatment are melasma and postinflammatory hyperpigmentation (PIH). These conditions may have a major impact because disfiguring facial lesions can significantly affect a person's psychological and social wellbeing, contributing to lower productivity, social functioning, and self-esteem.

Melasma
Melasma is a common acquired pigmentary disorder that occurs mainly in women (more than 90% of cases) of all racial and ethnic groups, but particularly affects those with Fitzpatrick skin types IV-VI. While the cause of melasma is unknown, factors include: a genetic predisposition, ultraviolet light exposure, and estrogen exposure. Estrogen is thought to induce melasma as it often develops during pregnancy, with use of oral contraceptives, and with hormone replacement therapy (HRT) in postmenopausal women. Melasma in pregnancy usually clears within a few months of delivery.

Discontinuation of oral contraceptives or HRT, in combination with adequate sun protection, may also result in melasma clearance, although there is a paucity of literature with regard to HRT and the clearance of this condition. Melasma presents as brown to grey macules and patches, with serrated, irregular, and geographic borders. The pigmented patches are usually sharply demarcated and symmetrical. Melasma has a predilection for sun-exposed areas. The three major patterns of distribution are: centrofacial (cheeks, forehead, upper lip, nose, and chin) (66% of cases), malar (cheeks and nose) (20% of cases) and mandibular (rami of the mandible) (15% of cases). See Table 1 for the differential diagnosis.


Practical Management Strategies for Diaper Dermatitis
S. Humphrey, MD; J. N. Bergman, MD, FRCPC; S. Au, MD, FRCPC

Abstract
Common diaper dermatitis is an irritant contact diaper dermatitis (IDD) created by the combined influence of moisture, warmth, urine, feces, friction, and secondary infection. It is difficult to completely eradicate these predisposing factors in a diapered child. Thus, IDD presents an ongoing therapeutic challenge for parents, family physicians, pediatricians, and dermatologists. This article focuses on practical management strategies for IDD.

Introduction
IDD is a common inflammatory eruption of the skin in the diaper area created by the presence of moisture, warmth, urine, feces, and friction, and is seen in 25% of children wearing diapers.

Pathogenesis
Four key factors contribute to the development of IDD:

  • Wetness: Wet diapers result in excessive hydration and maceration of the stratum corneum, leading to impaired barrier function, enhanced epidermal penetration by irritants and microbes, and susceptibility to frictional trauma.
  • Friction: IDD is most commonly distributed in areas with the greatest skin-to-diaper contact. Mechanical trauma disrupts the macerated stratum corneum, exacerbating barrier dysfunction.
  • Urine and feces: The interaction of urine and feces is key to the pathogenesis of IDD. Bacterial ureases in the stool degrade the urea found in urine, releasing ammonia and increasing the local pH (the net reaction is sketched after this list). Fecal lipases and proteases are activated by the increased pH and cause skin irritation and disruption of the epidermal barrier. Ammonia does not irritate intact skin; it is thought to mediate irritation by contributing to the high local pH.
  • Microorganisms: Candida albicans (C. albicans) and, less commonly, Staphylococcus aureus (S. aureus) infections are associated with IDD. The warm, humid, high-pH environment in the diaper provides an ideal milieu for microbial proliferation. Innate antimicrobial microflora cannot survive in a high-pH environment. There is a positive correlation between the clinical severity of IDD and the presence and level of C. albicans in the diaper, mouth, and anus of infants.
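
The pH shift described in the urine-and-feces item follows from the urease-catalysed hydrolysis of urea; the net reactions below are standard chemistry, included only to make that step explicit.

% Urease hydrolyses urea to ammonia and carbon dioxide; ammonia then alkalinises the skin surface.
\[
\mathrm{CO(NH_2)_2 + H_2O \;\xrightarrow{\text{urease}}\; 2\,NH_3 + CO_2}
\]
\[
\mathrm{NH_3 + H_2O \;\rightleftharpoons\; NH_4^+ + OH^-}
\]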


Clinical Features
IDD initially presents with localized asymptomatic erythema, and can progress to widespread painful, confluent erythema with maceration, erosions, and frank ulceration. IDD commonly spares the skin folds, and affects the convex skin surfaces in close contact with the diaper including the buttocks, genitalia, lower abdomen, and upper thighs. IDD complicated by Candida presents with beefy red intertriginous plaques and satellite papules and pustules in the diaper area. IDD complicated by S. aureus appears impetiginized, with erosions, honey-colored crust, and lymphadenopathy.

Granuloma gluteale infantum and Jacquet erosive diaper dermatitis are distinctive, severe variants of IDD. Granuloma gluteale infantum presents in the setting of IDD with violaceous papules and nodules on the buttocks and in the groin. The pathogenesis of granuloma gluteale infantum is not clear. Potential risk factors include treatment with topical steroids, candida infection, and occlusive plastic diaper covers. Granuloma gluteale infantum follows a self-limited course, resolving in weeks to months, often with residual scarring. The presence of punched-out erosions or ulcerations with heaped-up borders characterizes Jacquet erosive diaper dermatitis. This uncommon and severe presentation of IDD typically occurs in the context of frequent liquid stools, poor hygiene, infrequent diaper changes, or occlusive plastic diapers.

It is imperative to consider other conditions that may occur in the diaper area. Several excellent references are available that outline the differential diagnosis of IDD. Please see Table 1 for a review of the clinical features of relevant diaper dermatoses.
