Part 2: Evidence evaluation and management of conflicts of interest
2015 International Consensus on Cardiopulmonary Resuscitation and Emergency Cardiovascular Care Science with Treatment Recommendations
Peter T. Morley, Eddy Lang, Richard Aickin, John E. Billi, Brian Eigel, Jose Maria E. Ferrer,
Judith C. Finn, Lana M. Gent, Russell E. Griffin, Mary Fran Hazinski, Ian K. Maconochie,
William H. Montgomery, Laurie J. Morrison, Vinay M. Nadkarni, Nikolaos I. Nikolaou,
Jerry P. Nolan, Gavin D. Perkins, Michael R. Sayre, Andrew H. Travers, Jonathan Wyllie, David A. Zideman
Keywords: Cardiac arrest; Conflict of interest; Evidence evaluation
"Facts are stubborn things; and whatever may be our wishes, our inclinations, or the dictates of our passions, they cannot alter the state of facts and evidence." —John Adams, second President of the United States

The international resuscitation community, under the guidance of the International Liaison Committee on Resuscitation (ILCOR), has continued its process to identify and summarize the published resuscitation science in the documents known as the ILCOR Consensus on Science with Treatment Recommendations (CoSTR). The accompanying articles represent the culmination of many years' work, in which a total of 250 evidence reviewers from 39 countries completed 165 systematic reviews on resuscitation-related questions.

Process before 2015

The processes previously used by ILCOR in the development of their CoSTR were specifically tailored to the complex needs of resuscitation science. At the time that the evidence evaluation was undertaken for the 2010 publication, there were still no other processes that could deal with the complexity of the literature we need to evaluate: from randomized controlled trials to case series, and from mathematical models to animal studies. The 2010 evidence evaluation process required completion of an electronic worksheet that included a table summarizing the evidence addressing individual questions. It included 3 options for the direction of support (supportive, neutral, and opposing), 5 Levels of Evidence, and a quality assessment of the individual studies (good, fair, or poor).

Improvements for the 2015 process

When developing the process to be adopted for the 2015 CoSTR, ILCOR made a commitment to use the best available methodological tools to conduct its evaluation of the published resuscitation literature. To this end, ILCOR agreed to perform systematic reviews based on the recommendations of the Institute of Medicine of the National Academies, and to use the methodological approach proposed by the Grading of Recommendations, Assessment, Development and Evaluation (GRADE) Working Group.

The European Resuscitation Council requests that this document be cited as follows: Peter T. Morley, Eddy Lang, Richard Aickin, John E. Billi, Brian Eigel, Jose Maria E. Ferrer, Judith C. Finn, Lana M. Gent, Russell E. Griffin, Mary Fran Hazinski, Ian K. Maconochie, William H. Montgomery, Laurie J. Morrison, Vinay M. Nadkarni, Nikolaos I. Nikolaou, Jerry P. Nolan, Gavin D. Perkins, Michael R. Sayre, Andrew H. Travers, Jonathan Wyllie, David A. Zideman. Part 2: Evidence evaluation and management of conflicts of interest. 2015 International Consensus on Cardiopulmonary Resuscitation and Emergency Cardiovascular Care Science with Treatment Recommendations.

This article has been copublished in "Circulation".

∗ Corresponding author. E-mail address: (P.T. Morley).

0300-9572/© 2015 European Resuscitation Council, American Heart Association, Inc., and International Liaison Committee on Resuscitation. Published by Elsevier Ireland Ltd. All rights reserved.

P.T. Morley et al. / Resuscitation 95 (2015) e33–e41

Summary outline of the evidence evaluation process for the ILCOR 2015 CoSTR.
• Task forces select, prioritize, and refine questions (using PICO format)
• Task forces allocate level of importance to individual outcomes
• Task forces allocate PICO question to task force question owner and 2 evidence reviewers
• Task force works with information specialists to develop and fine-tune search strategies (for PubMed, Embase, and Cochrane)
• Public invited to comment on PICO question wording, as well as the proposed search strategies
• Revised search strategies used to search databases (PubMed, Embase, and Cochrane)
• The articles identified by the search are screened by the evidence reviewers using inclusion and exclusion criteria
• Evidence reviewers agree on final list of studies to include
• Evidence reviewers agree on assessment of bias for individual studies
• GRADE evidence profile table created
• Draft consensus on science statements and treatment recommendations created
• Public invited to comment on draft consensus on science and treatment recommendations
• Detailed iterative review of consensus on science and treatment recommendations to create final version
• Peer review of final CoSTR document

CoSTR indicates Consensus on Science with Treatment Recommendations; GRADE, Grading of Recommendations, Assessment, Development, and Evaluation; ILCOR, International Liaison Committee on Resuscitation; and PICO, Population, Intervention, Comparator, Outcome.

In addition, ILCOR leveraged technologic innovations, with the support of science and technology specialists at the American Heart Association, to build a Web-based information system that would support the creation of scientific statements and recommendations that adhere to the GRADE methodology. An online platform known as the Scientific Evaluation and Evidence Review System (SEERS: www.ilcor.org/seers) was developed to guide the task forces and their individual evidence reviewers, and it enabled those responsible for tasks to better monitor progress in real time and receive assignments as indicated by the progression in work flow. One key feature of the SEERS system is the ability to open all components of the process to the public for comments and suggestions. SEERS functions as the repository of all the information and reviews processed since 2012 by the task forces and evidence reviewers, and of discussions at the C2015 Conference. It remains the home for the 15 GRADE tutorials and 13 GRADE "ask the expert" seminars, as well as housing the training videos produced by AHA staff.

The GRADE process

Why introduce the GRADE process?
The methodological approach proposed by the GRADE Working Group has been developed over the past decade by key health professionals, researchers, and guideline developers in an attempt to provide a consistent and transparent process for use in guideline development. GRADE provides guidance for the rating of quality of evidence and the grading of strength of recommendations in health care. It is now widely used in guideline development processes throughout the world, including by organizations such as the Cochrane Collaboration, the World Health Organization, the National Institute for Health and Care Excellence (NICE), the Scottish Intercollegiate Guidelines Network (SIGN), and the American Thoracic Society. The GRADE approach has been refined to the point that it is now able to incorporate the variety of studies that make up the body of resuscitation science.

What is different about the GRADE process?

The GRADE process outlines a systematic and explicit consideration of study design, study quality, consistency, and directness of evidence to be used in judgments about the quality of evidence for each outcome of each specific question. The GRADE process is, therefore, much more outcome-centric than our previous processes. GRADE considers evidence as a function of the totality of data that informs a prioritized outcome across studies, as opposed to information evaluated at the level of the individual study. The GRADE approach facilitates appropriate consideration of each outcome when grading overall quality of evidence and strength of recommendations, and it reduces the likelihood of mislabeling the overall quality of evidence when evidence for a critical outcome is lacking.

The 2015 ILCOR evidence evaluation process

The 2015 ILCOR evidence evaluation followed a complex but systematic process. In general, the steps followed are consistent with those outlined by the Institute of Medicine. During the development of this process, a transition was made to a more complete online process, using a combination of existing and newly developed tools. The steps in the evidence review process are summarized in the accompanying table.

Task forces, task force question owners, evidence reviewers, and evidence evaluation/GRADE/methodology experts

Seven task forces evaluated the resuscitation literature: Acute Coronary Syndromes; Advanced Life Support; Basic Life Support; Education, Implementation, and Teams; First Aid; Neonatal Resuscitation; and Pediatric Life Support. Each task force appointed Task Force Question Owners and Evidence Reviewers to oversee the evidence evaluation process for each question. The task forces were supported by online resources, as well as by telephone, face-to-face, and Web-based educational sessions provided by a GRADE methodologist and an evidence evaluation expert, with advice from a specifically formed ILCOR Methods Group.

Components of the 2015 ILCOR systematic reviews

The evidence evaluation follows a standard format. The key components of this format are described in detail below.

Agree on PICO-formatted question and prioritizing outcomes

Each task force identified the potential questions to be addressed on the basis of known knowledge gaps, priorities as part of previous recommendations, current issues raised by individual resuscitation councils, the known published literature, and areas of controversy. The task forces were then required to prioritize these questions for formal review and to develop agreed-upon wording by using the PICO (population, intervention, comparator, outcome) format.

As part of the PICO question development, the GRADE process required designation of up to 7 key outcomes for each PICO question. The task force then allocated a score for each outcome on a scale from 1 to 9: critical outcomes were scored 7 to 9, important outcomes were scored 4 to 6, and those of limited importance were scored 1 to 3. The types of outcomes used (and their possible relevant importance score) included neurologically intact survival (e.g., critical 9), discharge from hospital alive (e.g., critical 8), and return of spontaneous circulation (e.g., important 6).

The explicit preference of this process was that if evidence was lacking for a key outcome, this was acknowledged rather than excluding that outcome.
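The 1-to-9 scoring just described maps mechanically onto the three importance bands; a minimal sketch (the function name is illustrative, and the example outcomes and scores are those given in the text):

```python
def importance_band(score):
    """Map a GRADE outcome-importance score (1-9) to its band."""
    if not 1 <= score <= 9:
        raise ValueError("GRADE importance scores run from 1 to 9")
    if score >= 7:
        return "critical"
    if score >= 4:
        return "important"
    return "limited importance"

# The example outcomes and scores given in the text.
outcomes = {
    "neurologically intact survival": 9,    # critical
    "discharge from hospital alive": 8,     # critical
    "return of spontaneous circulation": 6, # important
}
bands = {name: importance_band(s) for name, s in outcomes.items()}
```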
Develop search strategy

Detailed strategies to search the published literature were developed in conjunction with information specialists. Initial draft search strategies were developed for each of 3 databases: PubMed (National Library of Medicine, Washington, DC), Embase (Elsevier B.V., Amsterdam, The Netherlands), and the Cochrane Library (The Cochrane Collaboration, Oxford, England). These strategies were developed to optimize the sensitivity and specificity of the search and then refined on the basis of feedback from the resuscitation community and public comment. The articles identified by the final search strategies were combined into a single database for more detailed analysis by the evidence reviewers.

Identify articles for inclusion and exclusion

Each evidence reviewer used the SEERS online process to screen the identified articles for further review. The initial screening, based on formal inclusion and exclusion criteria, was performed by using each article's title and abstract, and then a review of the full text of the article was performed if needed. Specific inclusion and exclusion criteria varied according to the individual PICO questions, but generic criteria included such items as a requirement for the study to be published in the peer-reviewed literature (not just in abstract form) and to specifically address the individual components of the PICO question. The evidence reviewers were also asked to check for studies that may have been missed in the initial search, by reviewing the references of the identified studies and by performing a forward search on key studies (e.g., by the use of "cited by" searches).

Bias assessment of individual studies

The Cochrane Collaboration's tool was used for assessing the risk of bias for randomised controlled trials. The GRADE tool was used to assess the risk of bias of observational studies (for both therapy and prognosis questions) (Fig. 1). The Quality Assessment of Diagnostic Accuracy Studies (QUADAS)-2 tool was used for assessing risk of bias in studies of diagnostic accuracy. Where there were significant differences in the risks of bias for different outcomes, evidence reviewers were instructed to create a separate row in the table for each outcome.

Bias assessment tools.

Randomized controlled trials
• Was the method used to generate the allocation sequence described in sufficient detail to allow an assessment of whether it should produce comparable groups?
• Was the method used to conceal the allocation sequence described in sufficient detail to determine whether intervention allocations could have been foreseen in advance of, or during, enrollment?
• Were measures used to blind study participants and personnel from knowledge of which intervention a participant received?
• Was the intended blinding effective?
• Were measures used to blind outcome assessors from knowledge of which intervention a participant received?
• Was the intended blinding effective?
• Were the outcome data complete for each main outcome, including attrition and exclusions from the analysis?
• Did the study report appropriate outcomes (ie, to avoid selective outcome reporting)?
• Was the study otherwise free of important sources of bias not already reported previously?

Observational studies
• Were appropriate eligibility criteria developed and applied to both the cohort of interest and the comparison cohort?
• Was confounding adequately controlled for?
• Was measurement of exposure and outcome appropriate and consistently applied to both the cohort of interest and the comparison cohort?
• Was follow-up complete?

Individual studies can be allocated an overall "low" risk of bias if most or all key criteria listed above are met and any violations are not crucial. Individual studies that have a crucial limitation in 1 criterion, or some limitations in multiple criteria, sufficient to lower the confidence in the estimate of effect, are considered at "moderate" risk of bias. Individual studies that have a crucial limitation in 1 or more criteria, sufficient to substantially lower the confidence in the estimate of effect, are considered at "high" risk of bias.

The two (or more) individual evidence reviewers for each question created a reconciled (agreed) risk of bias assessment for each of the included studies, which was recorded electronically.

GRADE evidence profile tables

The GRADE Working Group has developed validated evidence tables known as evidence profile tables. These tables incorporate information on the quality of evidence for each outcome-dedicated row and provide information on effect size and precision, and they can provide information about varying effects across a variety of baseline risks. The evaluation of the evidence supporting each outcome incorporates the information from study design and the five core GRADE domains: risk of bias, imprecision, indirectness, inconsistency, and other considerations (e.g., publication bias). An overall assessment is then made of the quality of evidence to support each outcome (high, moderate, low, or very low). The completion of these evidence profile tables was facilitated by online access to the Guideline Development Tool.

GRADE evidence profile tables: Study design. The methodological type of study is used by the GRADE process as the starting point for the estimate of overall risk of bias. The rating for each type of study varies according to the type of question being asked. For PICO questions related to therapeutic interventions, evidence supported by RCTs starts as high-quality evidence (⊕⊕⊕⊕). Evidence supported by observational studies starts as low-quality evidence (⊕⊕). For PICO questions related to diagnostic accuracy, evidence supported by valid diagnostic accuracy studies (cross-sectional or cohort studies in patients with diagnostic uncertainty and direct comparison with an appropriate reference standard) starts as high-quality evidence (⊕⊕⊕⊕). The overwhelming majority of outcomes for the PICO questions were associated with very low quality of evidence (⊕).

GRADE evidence profile tables: Core domains.

Risk of bias. The overall risk of bias for each study relevant to each key outcome was allocated in the bias assessment of individual studies process. In the evidence profile table, a summary assessment is required across the included studies for each outcome. The 3 possible categories are as follows:

• No serious limitations: most information is from studies at low risk of bias.
• Serious limitations: most information is from studies at moderate risk of bias.
• Very serious limitations: most information is from studies at high risk of bias.
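Across this and the following core domains, GRADE's rating arithmetic is simple: start from the study-design level, subtract one level per domain with serious limitations (two for very serious), and, for rigorous observational evidence only, add any rating-up levels after all down-rating. A sketch under those assumptions (function and key names are illustrative, not part of the ILCOR process):

```python
GRADE_LEVELS = {1: "very low", 2: "low", 3: "moderate", 4: "high"}
PENALTY = {"none": 0, "serious": 1, "very serious": 2}

def grade_quality(design, limitations, rating_up=0):
    """Sketch of GRADE's rating arithmetic for therapy questions.

    design      -- "rct" (starts high) or "observational" (starts low)
    limitations -- dict mapping a domain ("risk of bias", "inconsistency",
                   "indirectness", "imprecision", "publication bias") to
                   "none", "serious", or "very serious"
    rating_up   -- 0-2 levels for rigorous observational evidence
                   (large effect, dose-response gradient, confounding
                   direction); applied only after all down-rating
    """
    level = 4 if design == "rct" else 2       # starting point by design
    level -= sum(PENALTY[v] for v in limitations.values())
    if design == "observational":
        level += rating_up                    # rating up: observational only
    return GRADE_LEVELS[max(1, min(4, level))]

# RCT evidence with serious risk of bias and serious imprecision:
# 4 - 1 - 1 = 2 -> "low"
print(grade_quality("rct", {"risk of bias": "serious",
                            "imprecision": "serious"}))
```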
Fig. 1. Example of bias assessment tables (RCTs and non-RCTs).
Evidence across studies may be ranked down for risk of bias by either one level, for serious limitations, or two levels, for very serious limitations.

Inconsistency. Inconsistency is a concept that considers the extent to which the findings of studies that look at the same outcomes agree with each other in a consistent way. Variability in the magnitude of effect may be because of differences in PICO or other differences in study design. Reviewers were asked to document limitations when (1) point estimates varied widely across studies, (2) confidence intervals (CIs) showed minimal or no overlap (ie, studies appear to have different effects), or (3) statistical tests of heterogeneity were suggestive of significant heterogeneity. Reviewers were asked to assess the studies that report that outcome as having:

• No serious inconsistency.
• Serious inconsistency.
• Very serious inconsistency.

Evidence across studies may be ranked down for inconsistency (by either 1 level [for serious limitations] or 2 levels [for very serious limitations]).

Indirectness of evidence. The GRADE process describes direct evidence as "research that directly compares the interventions in which we are interested, delivered to the populations in which we are interested, and measures the outcomes important to patients." Concerns about directness therefore arise when there are differences in the Population (e.g., patients in cardiac arrest versus not in cardiac arrest), Intervention (e.g., different techniques to induce therapeutic hypothermia), Comparison (e.g., conventional CPR using 2010 guidelines versus conventional CPR using 2000 guidelines), or Outcomes (e.g., return of spontaneous circulation versus termination of ventricular fibrillation for 5 s).
Fig. 2. Example of GRADE evidence profile table completed by using the guideline development tool.
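Evidence profile rows like those in Fig. 2 pair a relative effect with an absolute one, each with a 95% CI. Both can be derived from two-arm event counts; a minimal sketch (the helper name, the counts, and the normal-approximation CI are illustrative — the actual reviews used dedicated tools):

```python
import math

def effect_estimates(events_i, n_i, events_c, n_c):
    """Relative and absolute effects for a binary outcome.

    Returns (RR, (lo, hi), absolute difference in percentage points),
    with a 95% CI from the usual log-normal approximation for the RR.
    Illustrative helper only.
    """
    p_i, p_c = events_i / n_i, events_c / n_c
    rr = p_i / p_c
    # Standard error of log(RR) for independent binomial arms.
    se_log_rr = math.sqrt(1/events_i - 1/n_i + 1/events_c - 1/n_c)
    lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
    hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
    abs_diff = (p_i - p_c) * 100  # percentage points
    return rr, (lo, hi), abs_diff

# Illustrative counts: 12/100 events vs 15/100 events.
rr, (lo, hi), abs_diff = effect_estimates(12, 100, 15, 100)
# RR = 0.8 with a CI spanning 1.0, so evidence like this fails to
# exclude either important benefit or important harm.
```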
Concerns about directness also arise where there are no head-to-head comparisons between interventions. Important differences in outcome measures include time frame (e.g., hospital discharge vs 6-month survival) or other surrogate outcomes (e.g., hospital admission vs neurologically intact survival). Usually, data that rely on surrogate outcomes would result in an allocation of serious or very serious limitations. Limitations in more than one type of directness may suggest a need to rate the studies as having very serious limitations. In general, allocating limitations as serious or very serious should be considered only where there is a compelling reason to think that the biology in the population of interest is so different that the magnitude of effect will differ substantially (e.g., cardiac arrest victim vs stroke victim). Evidence from animal studies, manikins, or other models would generally be rated as having very serious limitations (but this would be dependent on the key outcomes listed). Again, reviewers were asked to assess the studies that report that outcome as having:

• No serious indirectness.
• Serious indirectness.
• Very serious indirectness.

Any of these concerns may result in a rating down of the quality of evidence for directness (by either one level [serious limitations] or two levels [very serious limitations]).

Precision and imprecision. The assessment of precision and imprecision is complex. The CI around a result enables us to assess the range in which the true effect lies. If the CIs were not sufficiently narrow (such as overlap with a clinical decision threshold, e.g., a 1% absolute difference in survival to hospital discharge), the quality would be rated as having serious limitations (or very serious limitations if the CI is very wide). Another way of describing this is where the recommendation would be altered if the upper boundary of the CI or the lower boundary of the CI represented the true effect. Factors that may further influence this decision include the importance of the outcome, the adverse effects, the burden to the patient, the resources required, and the difficulty of introducing a technique into practice. If the total number of patients included in the evidence for each outcome being evaluated does not exceed the number of patients generated by a conventional sample size calculation for a single adequately powered trial, evidence reviewers were advised to consider rating down for imprecision. This "optimal information size" can be estimated using calculators and tables. Even if the optimal information size is met, and the CI overlaps no effect (ie, the CI includes a relative risk [RR] of 1.0), evidence reviewers were instructed to rate down the quality of the evidence for imprecision if the CI fails to exclude important benefit or important harm (e.g., a 25% relative increase or reduction). Reviewers were asked to assess the studies that reported that outcome as having:

• No serious imprecision.
• Serious imprecision.
• Very serious imprecision.

If problems with precision were detected, the quality of evidence for precision was rated down (by either one level [for serious limitations] or two levels [for very serious limitations]).

Publication bias. Unidentified studies may yield systematically different estimates of the beneficial effects of an intervention. Studies with positive results are much more likely to be published (odds ratio, 3.9). Erroneous conclusions can result from early review (missing studies with delayed publication [even more likely with negative studies]), restricting the search to English-language journals, or not including grey literature (e.g., clinical trial registers, abstracts, theses). Discrepancies between meta-analyses of small studies and subsequent large RCTs occur in approximately 20% of cases, in part due to publication bias. Reviewers should allocate "strongly suspected" when the evidence consists of a number of small studies, especially if these are industry sponsored or if the investigators share another conflict of interest. The risk of publication bias in observational studies is probably larger than in RCTs (particularly with small studies, data collected automatically, or data collected for a previous study). The use of graphical or statistical testing for publication bias may be useful but has limitations, and is not routinely recommended. Additional information about unpublished trials can be found in clinical trial registry databases. GRADE suggests that the rating for publication bias across studies should be allocated:

• undetected, or
• strongly suspected.

If publication bias is strongly suspected, the quality of evidence is rated down by one level.

Rating up the quality of observational studies. The GRADE group recommends that methodologically rigorous observational studies may have their quality rated up where there is a large magnitude of effect, where there is a dose–response gradient, or when all plausible confounders or biases would reduce the demonstrated effect. Obviously, consideration of rating down the quality of evidence (risk of bias, imprecision, inconsistency, indirectness, and publication bias) must precede considerations for rating up the quality. Only a very small number of the systematic reviews identified evidence that met these criteria.

Magnitude of effect. A large-magnitude effect would be considered justification to increase the rating by 1 level (e.g., from low to moderate) if the RR was 2 to 5, or 0.2 to 0.5, with no plausible confounders. The reviewer would be more likely to rate up if the above size of effect occurred rapidly and out of keeping with the prior gradient of change; in these situations, it would usually be supported by indirect or lower levels of evidence. If the above criteria are all met, and the RR is very large (e.g., greater than 5–10) or very low (RR less than 0.2), rating up by 2 levels (from low to high) could be considered.

Dose–response gradient. A dose–response gradient, such as increased effect with an increased dose, decreased time to intervention, or increased intensity or duration of an educational intervention, increases the confidence in the findings of observational studies. In this setting, rating up the quality of evidence by 1 level could be considered.

Issues around confounding. If all plausible prognostic factors are accurately measured in observational studies, and if all the observed residual confounders and biases would diminish the observed effect, then the effect estimate would be strengthened. In this setting, rating up the quality of evidence by 1 level could be considered.

GRADE evidence profile tables: Estimate of effect. We asked evidence reviewers to complete the effect size column for each row in the evidence profile tables with an estimate for both relative and absolute effects. For example, binary outcomes required the RR (or odds ratio) of the intervention compared with control, with 95% CIs, and the absolute effect of intervention minus control as an absolute percentage, with 95% CIs. It is the absolute differences that allow accurate assessment of the size of any clinical effect.

There was significant discussion about the exact principles to be employed to determine whether a meta-analysis of data should be performed. There are statistical concerns about the simple combining of results from trials.
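The concern about simply combining trial results can be made concrete: summing raw event counts across trials ignores per-trial weighting, so differences in baseline risk and arm sizes between trials can masquerade as treatment effect. A sketch with deliberately imbalanced, purely illustrative numbers:

```python
def rr(events_i, n_i, events_c, n_c):
    """Risk ratio of intervention vs control from raw counts."""
    return (events_i / n_i) / (events_c / n_c)

# Two hypothetical trials; numbers chosen to expose the distortion.
trial1 = (9, 30, 10, 30)     # high-risk setting, small:  RR = 0.90
trial2 = (10, 1000, 2, 100)  # low-risk setting, unequal arms: RR = 0.50

per_trial = [rr(*t) for t in (trial1, trial2)]

# "Simply pooled": sum raw counts across trials, then take one RR.
pooled = rr(9 + 10, 30 + 1000, 10 + 2, 30 + 100)

# The naive pooled RR (about 0.20) falls outside the 0.50-0.90 range of
# the per-trial estimates: pooling without weighting lets between-trial
# differences drive the apparent effect.
```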
There are also significant concerns about performing a meta-analysis when it would not be appropriate. If several RCTs or observational studies were identified that published results for outcomes considered critical or important, and these studies were closely matched to the PICO question, the evidence reviewers were encouraged to complete an Assessing the Methodological Quality of Systematic Reviews (AMSTAR) checklist to ensure that the appropriate principles for performance of the meta-analysis were followed. In scenarios where it was thought that the data should not be combined into a meta-analysis, the authors were instructed to list the outcomes for each study, or, if a simple mathematical combination of data was performed, this would be accompanied by a statement noting that the data were simply pooled (combined without being weighted).

Guideline development tool

The GRADE process takes a very comprehensive approach to the determination of the direction and strength of any recommendations. During the conduct of the systematic reviews, an updated online tool developed by the GRADE Working Group became available for use. This online Guideline Development Tool was used to help assess the overall balance between benefits and risks or harms for each option, including consideration of dimensions such as patient values and preferences and resource implications. The ILCOR task forces were encouraged to use this tool to assist in their deliberations.

Creation of consensus on science statements

The completed evidence profile tables were then used to create a written summary of evidence for each outcome: the consensus on science statements. The structure of the new 2015 consensus on science statement was developed as a means of providing an explicit narrative to communicate the evidence synthesis and quality judgments found in the evidence profiles. These statements are supported by a categorization of the overall quality of the evidence (high, moderate, low, or very low) and include reasons for their downgrading or upgrading. The recommended standard consensus on science format was as follows:

For the important outcome of Z (e.g., return of spontaneous circulation), we have identified very-low-quality evidence (downgraded for risk of bias and imprecision) from 2 observational studies (#1, #2) enrolling 421 patients showing no benefit (RR, 0.81; 95% CI, 0.33–2.01).

Creation of agreed treatment recommendations

Consensus-based treatment recommendations were then created whenever possible. These recommendations were accompanied by an overall assessment of the evidence and a statement from the task force about the values and preferences that underlie the recommendations. These are supported by a categorization of the overall quality of the evidence (high, moderate, low, or very low) and strength of recommendation (strong or weak). The recommended standard treatment recommendation format was as follows:

We suggest/recommend for/against X in comparison with Y for out-of-hospital cardiac arrest (weak/strong recommendation, very low/low/moderate/high quality of evidence).

The GRADE process encourages organizations to commit to making a recommendation by using "we recommend" for strong recommendations and "we suggest" for weak recommendations, in either a positive or negative direction (ie, "suggest/recommend," "for/against"). In the unusual circumstances in which task forces chose not to make recommendations, they were encouraged to specify whether this was because they had very low confidence in effect estimates (very limited data), because they felt that the balance between desirable and undesirable consequences was so close that they could not make a recommendation (data exist, but no clear benefits), or because the two management options had very different undesirable consequences (and local values and preferences would decide which direction to take).

Values and preferences and task force insights

The task forces were encouraged to provide a values and preferences statement whenever a treatment recommendation was made. This is an overarching term that includes perspectives, beliefs, expectations, and goals for health and life, as well as the processes used in considering the potential benefits, harms, costs, limitations, and inconvenience of the management options in relation to one another. Task forces were encouraged to provide additional explanatory comments whenever possible to help readers gain more insight into the perspectives of the discussion.

Developing consensus

Each task force used regular audio conferencing and webinars, where the systematic reviews were electronically presented for discussion and feedback. Additional face-to-face meetings were held at least once each year to provide opportunities to learn about the process and to facilitate collaboration between the seven task forces. Consensus was obtained through detailed discussion and feedback provided by the ILCOR task force members, the GRADE and evidence evaluation experts, the ILCOR methods group, the public, and the individual international resuscitation councils.

Public consultation

To ensure as broad input as possible during the evidence evaluation process, public comment was sought at two main points. Initial feedback was sought about the specific wording of the PICO questions and the initial search strategies. Subsequent feedback was sought after creation of the initial draft consensus on science statements and treatment recommendations. A total of 492 comments were received. At each of these points in the process, the public comments were made available to the evidence reviewers and task forces for their consideration.

Lower levels of evidence

In many resuscitation scenarios, there are no RCTs or even good observational studies, so there is a need to explore other population groups. The GRADE process is very explicit about the allocation of quality of evidence to support the individual outcomes. Extrapolation of data from other patient groups (e.g., adult versus pediatric, cardiac arrest versus shock), mathematical models, and animal studies means that this evidence, irrespective of methodological quality, would be downgraded for at least serious indirectness. This usually resulted in a very low quality of evidence, and many task forces found this initially challenging.

Diagnostic and prognostic questions

The GRADE process has been developed specifically to deal with questions that address alternative management strategies. It has been modified to enable consideration of questions that relate to diagnostic accuracy, but it was not developed to address questions about risk or prognosis. A few diagnostic questions were addressed in the 2015 process.
P.T. Morley et al. / Resuscitation 95 (2015) e33–e41
to when a particular diagnostic strategy is used or not used (i.e.,
conflicts in assigning task force question owners and evidence
actually an intervention question).
reviewer roles for individual PICO questions. Individuals were reas-
The first of a series of GRADE articles about studies addressing
signed when potential conflicts surfaced. Participants, co-chairs,
prognosis has been published only unfortunately,
and staff raised COI questions and issues throughout the pro-
these details were not available to the evidence reviewers for this
cess and referred them to the COI co-chairs if they could not be
process. A couple of approaches to prognosis were used, including
resolved within their group. The COI co-chairs kept a complete log
the use of existing observational study bias assessment tools or a
of all COI-related issues and their resolutions. None of the issues
modification of these.
required serious intervention, such as replacement of any leader
roles. As a result of commercial relationships, however, several
PICO questions were reassigned to evidence reviewers or ques-
tion owners without potential conflicts. As in 2010, the phone
There were several situations when task forces were keen to
number for the COI hotline was broadly disseminated throughout
use a strong recommendation when the quality of evidence did not
the 2015 Consensus Conference for anonymous reporting; no calls
support this. This is not unexpected given the few published RCTs
were received.
and good observational studies available in the resuscitation liter-
As in 2010, the dual-screen projection method was used for all
ature. Task forces were made aware of the importance of clarifying
sessions at the 2015 Consensus Conference. One screen displayed
their rationale when they wished to make such discordant recom-
the presenter's COI disclosures continuously throughout his or her
mendations. They were encouraged to use standardized wording
presentation. The same was true for all questions or comments from
(e.g., "Intervention may reduce mortality in a life-threatening situ-
participants or task force members: whenever they spoke, their
ation, and adverse events not prohibitive" or "A very high value is
relationships were displayed on one screen, so that all participants
placed on an uncertain but potentially life-preserving benefit").
could see potential conflicts in real time, while slides were pro-
In keeping with this approach, the number of discordant recom-
jected on the second screen. Individuals also abstained from voting
mendations in ILCOR was limited in the 2015 process, as were the
on any issue for which they had a conflict. Such abstentions, along
number of strong recommendations.
with any other issues that arose, were recorded on a COI attesta-
tion completed by the COI monitor for each session. As in 2010, the
Management of conflicts of interest throughout the CoSTR
COI system ran smoothly and did not impede the progress of the
To ensure the integrity of the evidence evaluation and consen-
sus on science development process, ILCOR followed its rigorous
conflict of interest (COI) management policies at all times. A full
The process for evaluating the resuscitation science has evolved
description of these policies and their implementation can be found
considerably over the past 2 decades. The current process, which
in Part 4 of the 2010 CoSTR.persons involved in any part of
incorporates the use of the GRADE methodology, culminated in the
the process disclosed all commercial relationships and other poten-
2015 CoSTR publication, which in turn will inform the international
tial conflicts, and in total, the AHA processed more than 1000 COI
resuscitation councils' guideline development processes. Over the
declarations. These disclosures were taken into account in assign-
next few years, the process will continue to evolve as ILCOR moves
ment of task force co-chairs and members, writing group co-chairs,
toward a more continuous evaluation of the resuscitation science.
and other leadership roles. Relationships were also screened for
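The GRADE quality categorization used throughout the evidence reviews (RCTs start as high-quality evidence, observational studies as low; each serious concern such as risk of bias, imprecision, or indirectness moves the rating down one level, clamped at the ends of the four-level scale) and the "we recommend"/"we suggest" wording convention can be sketched as a short routine. This is a hedged illustration only: the function names and the encoding of the levels are invented here and are not part of GRADE or ILCOR tooling.

```python
# Illustrative sketch (not an official GRADE tool): evidence starts "high"
# for RCTs and "low" for observational studies, then moves down or up the
# four-level scale for each downgrade/upgrade, clamped at the scale ends.

LEVELS = ["very low", "low", "moderate", "high"]

def rate_quality(study_design, downgrades=0, upgrades=0):
    """Return the GRADE-style overall quality of evidence for one outcome."""
    start = 3 if study_design == "rct" else 1  # high vs. low starting point
    idx = max(0, min(3, start - downgrades + upgrades))
    return LEVELS[idx]

def recommendation_wording(strength):
    """Standard wording: 'we recommend' (strong) or 'we suggest' (weak)."""
    return {"strong": "we recommend", "weak": "we suggest"}[strength]

# Two observational studies downgraded for risk of bias and imprecision:
print(rate_quality("observational", downgrades=2))  # -> very low
# Extrapolated or animal data: downgraded at least for serious indirectness:
print(rate_quality("observational", downgrades=1))  # -> very low
print(recommendation_wording("weak"))               # -> we suggest
```

The clamping at the bottom of the scale mirrors why extrapolated and animal data, which already start at low, so often end up rated very low.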
2015 CoSTR Part 2: Evidence evaluation: writing group disclosures

[Table: writing group member disclosures, including employment, speakers' bureau/honoraria, and expert witness relationships; individual entries not reproduced.]

This table represents the relationships of writing group members that may be perceived as actual or reasonably perceived conflicts of interest as reported on the Disclosure Questionnaire, which all members of the writing group are required to complete and submit. A relationship is considered to be "significant" if (a) the person receives $10,000 or more during any 12-month period, or 5% or more of the person's gross income; or (b) the person owns 5% or more of the voting stock or share of the entity, or owns $10,000 or more of the fair market value of the entity. A relationship is considered to be "modest" if it is less than "significant" under the preceding definition.
* Modest.
† Significant.
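The "significant"/"modest" definitions in the note above amount to a simple threshold rule. The following is a minimal sketch of that rule; the function and parameter names are invented for illustration and are not part of the AHA disclosure process.

```python
def classify_relationship(annual_income_usd, share_of_gross_income,
                          voting_stock_fraction, fair_market_value_usd):
    """Classify a disclosed relationship per the thresholds above:
    'significant' if income is $10,000 or more in any 12-month period or
    5% or more of gross income, or if ownership is 5% or more of voting
    stock or $10,000 or more of fair market value; otherwise 'modest'."""
    significant = (
        annual_income_usd >= 10_000
        or share_of_gross_income >= 0.05
        or voting_stock_fraction >= 0.05
        or fair_market_value_usd >= 10_000
    )
    return "significant" if significant else "modest"

print(classify_relationship(12_000, 0.02, 0.0, 0.0))   # -> significant
print(classify_relationship(2_000, 0.01, 0.0, 5_000))  # -> modest
```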
Acknowledgments

The writing group gratefully acknowledges the leadership and contributions of the late Professor Ian Jacobs, PhD, as both ILCOR Co-Chair and inaugural Chair of the ILCOR Methods Group. Ian is greatly missed by the international resuscitation community.