Converting categorical risk estimates into continuous effects in environmental health systematic reviews and meta-analyses
Introduction
In epidemiological research, systematic reviews and meta-analyses are fundamental methodological tools that enable researchers to identify, critically appraise, and synthesize findings from independent studies in order to answer a specific research question. By aggregating data across diverse populations, settings, and study designs, these approaches increase statistical power, improve estimates of effect size, and help to resolve inconsistencies in the literature. Moreover, they provide a transparent and reproducible framework for evidence evaluation, which is particularly valuable in guiding public health policy and clinical practice, especially when individual studies yield conflicting or inconclusive results.1
In recent years, the importance of meta-analysis has grown significantly within environmental epidemiology, a discipline largely based on observational study designs and characterized by substantial methodological heterogeneity, varying exposure metrics, and the inherent complexity of interactions between multiple environmental factors and human health outcomes. Environmental epidemiology assesses the health impacts of exposures to air pollutants, water contaminants, chemical and physical agents, and climate-related stressors. These exposures are typically multifactorial, variable over time and space, and are evaluated using heterogeneous methods, units of measurement, and classification schemes, factors that make it difficult to directly compare and combine results from different studies.2
A particularly critical issue in this domain is the uncertainty associated with exposure assessment. In environmental epidemiology, exposures are assessed using various approaches, including environmental monitoring networks, dispersion models, remote sensing, geographic proxies, or self-reported data, each with its own limitations and measurement errors. Exposure levels are often reported using different units of measurement or categorized into study-specific – and sometimes arbitrary – exposure intervals. This lack of standardization complicates the comparison across studies and hinders the aggregation of effect estimates, potentially introducing systematic bias into meta-analytic results. Such variability may obscure true exposure-response relationships and limit the ability to draw reliable and generalizable conclusions from the available evidence.
Given these multiple sources of heterogeneity and uncertainty, synthesizing the available evidence through systematic reviews in this context becomes particularly challenging, especially when the goal is to distinguish between harmful and non-harmful exposures in order to inform evidence-based decision making.
In particular, meta-analytical techniques in environmental health research need to properly account for several methodological aspects such as the standardization of heterogeneous exposure definitions, the harmonization of risk estimates derived from different exposure categorizations, the evaluation and quantification of between-study heterogeneity, and the systematic errors typical of observational studies (i.e., confounding, selection bias, the potential misclassification in exposure and outcome measurement).
A key application of meta-analysis in environmental health is its use in Health Impact Assessments (HIA), which estimate the potential health effects of environmental interventions. In air pollution epidemiology, quantitative meta-analyses provide continuous risk estimates (e.g., beta coefficients per unit of pollutant), which are essential for disease burden calculations and predictive modelling.
These continuous estimates allow for more flexible and accurate assessments tailored to specific populations and contexts, while also improving the transparency, reproducibility, and credibility of public health recommendations.
In a scenario where environmental exposures are increasingly complex, interconnected, and influenced by socioeconomic, climatic, and behavioural factors, the ability to rigorously synthesize heterogeneous evidence becomes a priority. In this context, advanced meta-analytical methodologies capable of harmonizing different exposure categorizations and transforming categorical effect estimates into continuous ones are crucial for generating flexible, policy-relevant, and locally applicable evidence.
Several approaches have been proposed to address these challenges, including methods for setting midpoints of exposure categories,3,4 procedures for estimating continuous trends from categorical data,5 assumptions of linear or non-linear exposure-response relationships, considerations about the independence of category-specific risks, and the application of bootstrap techniques to estimate confidence intervals.6 While these methods differ in their operational assumptions, most can be considered valid within specific analytical contexts and depending on the exposure-outcome relationship under investigation. Notably, previous studies have suggested that the choice between linear and non-linear assumptions for exposure-response functions does not substantially bias the overall meta-analytic estimates.3
A structured, transparent, and replicable approach is here proposed to transform categorical risk estimates, typically reported for predefined exposure intervals, into continuous effect estimates per unit of exposure. This method enables the inclusion of studies with heterogeneous exposure categories in quantitative meta-analyses, thereby enhancing comparability and allowing for more comprehensive evidence synthesis.
To illustrate the proposed method, a detailed application to the study by Mataloni et al. (2016)7, which investigated health effects associated with residential exposure to municipal waste landfills in the Lazio region, is provided. This dataset offers a unique advantage, because exposure to hydrogen sulphide (H2S) was reported both as categorical intervals and as a continuous variable, allowing us to demonstrate the conversion process in a real-world scenario. The method has already been applied in a recent meta-analysis on health effects of waste treatment facilities, where it enabled the inclusion of multiple studies with heterogeneous exposure categorizations.8 Here, the procedure is presented in detail using the data from Mataloni et al. to highlight its practical utility in harmonizing diverse exposure metrics and its contribution to strengthening the methodological framework for quantitative evidence synthesis in environmental epidemiology and public health.
Methods
Studies included in meta-analyses often use different exposure categories to quantify relative risk or other effect measures (e.g., odds ratio, hazard ratio), making direct comparison of results challenging. To address this issue, a method is proposed in which a continuous risk estimate is calculated for each study and outcome, expressed per unit increase in the exposure of interest (e.g., 1 ng/m³ of H2S). A common approach is to assume that the true effect measure is the same across all exposure categories and that the observed variations are solely due to within-category sampling error (random error). This means that the health outcome of one category is not influenced by the outcome of another category, since all categories derive from the same population. Another assumption is that the exposure-outcome effect increases linearly within each category. This implies that all values within a category are associated with the same increase in risk (e.g., relative risk).
Step 1. Determination of midpoints and conversion of categorical estimates into continuous beta coefficients
Under the assumption of a constant linear effect within each category, the midpoint is an appropriate summary measure of the interval for deriving a representative average estimate of the effect. To estimate a unitary category-specific effect (for example, a relative risk or a hazard ratio), the original effect estimate for the category can be normalized by dividing it by the midpoint value of that category. The first step in obtaining a continuous risk estimate for each study and outcome, expressed per unit increase in exposure, is therefore to derive midpoint values by averaging the lower and upper limits of the category-specific exposure levels, typically reported as a range.
If the lower limit of the reference exposure category is not provided by the authors, it should be set to a predefined minimum considered plausible according to the limit of detection (LOD) of the exposure instrument (e.g., 0). The upper limit of the highest exposure category is set to the maximum exposure value reported in the original study. In the case of an open-ended upper category, the upper limit can be estimated by adding a plausible width to the lower limit of this category, for example three-quarters (¾) of the width of the previous category or its total width, as suggested by other authors.4 In the end, a single midpoint exposure value is calculated for each exposure category, which is then used to calculate the category-specific unit effect.
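As an illustration of this first step, the midpoint derivation, including the handling of an open-ended top category, can be sketched as follows. The supplementary script accompanying the paper is in R; this is a Python sketch with hypothetical category bounds, and the ¾-width rule is one of the options named above:

```python
def midpoints(bounds, open_fraction=0.75):
    """Return category midpoints from a list of (lower, upper) bounds.

    The upper bound of the last category may be None (open-ended); it is
    then set to its lower bound plus `open_fraction` times the width of
    the previous category (the 3/4-width rule suggested in Step 1)."""
    bounds = list(bounds)
    lo, hi = bounds[-1]
    if hi is None:
        prev_lo, prev_hi = bounds[-2]
        hi = lo + open_fraction * (prev_hi - prev_lo)
        bounds[-1] = (lo, hi)
    return [(l + u) / 2.0 for l, u in bounds]

# Hypothetical exposure categories (reference first): 0-2, 2-4, 4-8, >8
print(midpoints([(0, 2), (2, 4), (4, 8), (8, None)]))  # → [1.0, 3.0, 6.0, 9.5]
```

With `open_fraction=1.0` the same helper reproduces the total-width rule instead.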
The next step is to calculate continuous beta coefficients from the categorical beta coefficients of each exposure level in the original study. The original beta coefficients represent the estimated change in risk for each exposure level compared with the reference one, under the above-mentioned assumption that risk increases by a constant linear amount within each category. They need to be converted into unitary category-specific beta coefficients expressing the increase in risk for each unit increase in exposure (e.g., 1 ng/m³ of H2S). This process may first require applying a logarithmic transformation to the risk estimates reported in the original study, if expressed in exponential form (e.g., relative risk, hazard ratio), to obtain beta coefficients for each non-reference category (i > 1). Each categorical beta coefficient is then normalized to the beta coefficient per unit by dividing the category-specific log-transformed risk by the exposure range between category i and the reference category (as defined by the difference in their midpoint values), as shown in Equation 1.
Under the assumption of a fixed effect across exposure categories, there is only one level of sampling: all categories are sampled from a population with effect size μ, and the variance σ² is the within-category variance, which depends primarily on the sample size of each study:

βi ~ N(μ, σ²)
The category-specific observed effects (beta coefficients) are determined by the common effect μ plus the within-study error εi, i.e., βi = μ + εi. The per-unit beta coefficients for each exposure category are then given by:
βi per unit = ln(RRi) / (Ēi − Ē1)   (Equation 1)

where:
Ēi is the midpoint exposure for category i > 1;
Ē1 is the midpoint exposure for the reference category;
ln(RRi) is the natural logarithm of the risk estimate in exponential form (e.g., relative risk) for exposure category i (where i > 1) retrieved from the original publication.
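In code, Equation 1 reduces to a one-line normalization. The numbers below are purely hypothetical, not taken from any study:

```python
import math

def beta_per_unit(rr_i, mid_i, mid_ref):
    # Equation 1: normalize the log risk by the exposure contrast
    # between category i and the reference category.
    return math.log(rr_i) / (mid_i - mid_ref)

# Hypothetical: RR = 1.20 for a category with midpoint 6, reference midpoint 1
b = beta_per_unit(1.20, 6.0, 1.0)  # ln(1.20)/5 ≈ 0.0365 per unit of exposure
```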
Step 2. Calculation of study-specific continuous beta coefficients
The second step is to provide a single unitary beta coefficient within a study for a specific exposure-health outcome association, by assuming that category-specific risk estimates originate from the same underlying population distribution, with observed differences arising solely from within-category random variability. This implies that the health outcomes across categories are realizations of the same random variable. Under this assumption, to summarize the effect across all non-reference exposure categories within a study, a single continuous beta coefficient is calculated as a weighted average of the category-specific beta coefficients. The weight of each category is defined by the proportion of the number of events observed in that category (ci) relative to the total number of events across all non-reference categories, as shown in Equation 2. The resulting coefficient represents a study-specific continuous effect estimate, expressed per unit increase in exposure, for the given study and health outcome under consideration.
The weighted average of the category-specific effects provides a consistent estimator of the common mean μ of the normally distributed βi (Equation 1).
β per unit = Σi>1 (ci / Σj>1 cj) · βi per unit   (Equation 2)

where:
βi per unit is the category-specific continuous beta coefficient defined in Step 1;
ci is the number of events of the specific health outcome within exposure category i;
i > 1, where 1 denotes the reference category.
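Equation 2 is an event-weighted average of the per-unit category betas; a minimal sketch with hypothetical inputs:

```python
def beta_study(betas_per_unit, events):
    """Equation 2: average the per-unit category betas, weighting each
    by its share of events across the non-reference categories."""
    total = sum(events)
    return sum(b * c / total for b, c in zip(betas_per_unit, events))

# Hypothetical per-unit betas and event counts for three non-reference categories
b = beta_study([0.10, 0.20, 0.30], [10, 20, 10])  # weighted average ≈ 0.20
```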
Step 3. Calculation of study-specific variance
To calculate the standard error and confidence interval for the study-specific continuous β coefficient, the overall variance of this continuous effect estimate needs to be calculated, as shown in Equations 3.a and 3.b. First, the variance of the original log-transformed risk estimates is retrieved directly from the study, if available, or otherwise calculated from the reported confidence intervals. Then, for each non-reference exposure category, a category-specific variance associated with the corresponding continuous βi per unit is derived using the formula for the variance of a random variable multiplied by a constant, i.e., Var(aX) = a²·Var(X), where the constant a is computed by multiplying the category-specific weight by the inverse of the difference between the category midpoint and that of the reference category (Equation 3.a). Under the assumption that category-specific risk estimates originate from independent samples, the covariance terms between category-specific risks, 2Cov(βi, βj) for i, j > 1 and i ≠ j, can be assumed to be zero. The variance of the study-specific unitary beta coefficient is then given by the formula for the sum of independent random variables, where the variance of their sum is the sum of their individual variances (Equation 3.b). This variance is subsequently used to calculate the standard error and to derive the 95% confidence intervals for the continuous study-specific per-unit risk estimate.
Var(wi · βi per unit) = [wi / (Ēi − Ē1)]² · Var[ln(RRi)]   (Equation 3.a)

Var(β per unit) = Σi>1 [wi / (Ēi − Ē1)]² · Var[ln(RRi)]   (Equation 3.b)

where:
Var[ln(RRi)] is the variance for each exposure category i (where i > 1) retrieved from the original publication;
Ēi is the midpoint exposure for category i > 1;
Ē1 is the midpoint exposure for the reference category;
ci are the events within exposure category i > 1, and wi = ci / Σj>1 cj is the corresponding weight.
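Step 3 can be sketched in the same style. The category variances and bounds below are hypothetical, and the CI-to-variance helper assumes a 95% Wald interval on the log scale, as is standard for ratio measures:

```python
import math

Z95 = 1.96

def var_from_ci(low, up):
    # Variance of ln(RR) reconstructed from a 95% CI on the ratio scale.
    return ((math.log(up) - math.log(low)) / (2 * Z95)) ** 2

def var_study(var_lnrr, mids, mid_ref, events):
    """Equations 3.a-3.b: sum of independent category contributions,
    each scaled by (weight / exposure contrast) squared."""
    total = sum(events)
    return sum(v * (c / total / (m - mid_ref)) ** 2
               for v, m, c in zip(var_lnrr, mids, events))

# Hypothetical single non-reference category: CI 0.95-1.52, midpoint 6, reference 1
v = var_study([var_from_ci(0.95, 1.52)], mids=[6.0], mid_ref=1.0, events=[30])
se = math.sqrt(v)
```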
Step 4. Assessment of Estimate Precision Using Relative Standard Error (RSE)
After deriving the study-specific (or outcome-specific within the same study) continuous effect estimate and its standard error, it is essential to evaluate the reliability of the estimate. For this purpose, the Relative Standard Error (RSE) is calculated; it is defined as the ratio between the standard error (SE) of the estimate and the estimate itself:9,10
RSE = SE(β) / |β|   (Equation 4)

where:
β is the study-specific continuous beta coefficient per unit exposure derived in Step 2;
SE(β) is the standard error of the estimate.
The absolute value ensures that the ratio is positive regardless of the sign of the effect.
The RSE expresses the magnitude of uncertainty relative to the estimate itself. Lower values indicate higher precision, while higher values suggest instability. This diagnostic step is critical before pooling estimates in meta-analysis, as it prevents undue influence of highly uncertain results.
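As a sketch, the RSE check and the flagging step can be written as follows; the threshold value is illustrative only, since, as discussed below, it should be chosen specifically for the data and health topic at hand:

```python
def rse(beta, se):
    # Equation 4: relative standard error of the per-unit estimate.
    return se / abs(beta)

def flag_unreliable(beta, se, threshold=1.0):
    # Illustrative rule: flag estimates whose RSE exceeds the chosen
    # threshold for possible exclusion in sensitivity analyses.
    return rse(beta, se) > threshold

print(rse(-0.05, 0.10))  # → 2.0 (absolute value makes the sign irrelevant)
```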
Practical example: Application of the method to a sample dataset from the published paper “Morbidity and mortality of people who live close to municipal waste landfills: a multisite cohort study”
The authors of this paper recently conducted a systematic review with meta-analysis to investigate the health effects associated with residential exposure to municipal solid waste incinerators.8 The aim was to produce a quantitative estimate of the risk of health effects linked to such exposure. One of the main challenges faced in the meta-analysis was the considerable heterogeneity among the included studies, especially regarding exposure assessment. The studies considered a wide variety of pollutants, and even among those focusing on the same pollutant, the exposure categories used for the effect estimates (e.g., relative risks) varied widely in definition and classification.
A minimum of three studies examining the same health outcome and exposure to the same type of pollutant was required in order to conduct the meta-analysis. This requirement significantly limited the number of eligible studies. Additionally, among the few studies that met the inclusion criteria, the different exposure categories employed in each study prevented direct comparison of effect estimates, making meta-analysis unfeasible.
To overcome this limitation, the proposed method was applied in cases where the same exposure (e.g., PM10) was examined across studies for a given health outcome, but exposure metrics used to calculate relative risk varied. This approach allowed us to convert categorical risk estimates into continuous per-unit risk estimates for each study, enabling their inclusion in a pooled quantitative meta-analysis.
To illustrate the proposed approach, the method was applied to the cohort study conducted by Mataloni et al. (2016)7, which investigated the health effects associated with residential exposure to municipal waste landfills in the Lazio Region (Italy). This study enrolled over 240,000 individuals living within 5 km of nine landfills and assessed exposure using a dispersion model for hydrogen sulphide (H2S), considered a tracer of landfill emissions. Health outcomes were analysed through Cox proportional hazards models. A key feature of this dataset is that risk estimates were reported both for categorical exposure intervals (quartiles of H2S concentration) and for continuous exposure, expressed per 1 ng/m³ increase in H2S. This dual reporting allowed us to demonstrate the conversion process from categorical to continuous effect estimates in a real-world scenario. To illustrate the conversion process, the focus was on four cause-specific mortality outcomes of interest: natural causes, all cancers, cardiovascular diseases, and respiratory diseases.
The R script used for this analysis, along with an Excel file containing the relevant formulas and fictitious example data, is provided in the Supplementary Materials available online.
The original publication reported hazard ratios (HRs) and 95% confidence intervals (95%CI) for each outcome across four exposure categories (quartiles of H2S concentration) reported in Table 1.
To harmonize these categorical estimates and derive a continuous effect per unit increase in H2S (1 ng/m³), the following steps were applied.
As a first step, midpoints (indicated as midpoints in the R and Excel files) for each category of exposure were calculated.
Subsequently, continuous per-unit beta coefficients (beta_cat) were calculated from the HRs for the 25th-50th percentile, 50th-75th percentile, and >75th percentile exposure categories, as described in Equation 1. First, as reported in Step 1 of the method, each exposure category was assigned a value representing the inverse of the exposure contrast (i.e., the difference between the midpoint of that category and the midpoint of the reference category), referred to as mid_diff in the R script and Excel file. The continuous beta coefficient for each exposure category was then derived by multiplying the log-transformed hazard ratio estimates retrieved from the original study (log_transform) by the corresponding mid_diff. Results are shown in Table 2.
As detailed in Step 2 (section Calculation of Study-Specific Continuous Beta Coefficients), a single study-specific continuous beta coefficient (beta_study in R and Excel) was then derived by calculating the weighted average of the category-specific beta coefficients. The weight for each category was determined by the proportion of events in that category relative to the total number of events across all non-reference categories, as reported in Equation 2 (weight in R and Excel). Subsequently, the risk estimate for each outcome, expressed per 1 ng/m³ increase in H2S, was derived by exponentiating the resulting beta coefficient. Results are shown in Table 3.
The study-specific variance (var_study) was calculated to derive the standard error and confidence interval for the obtained risk estimate. Following the methods described in Step 3, the standard error for each exposure category (se_cat) was calculated from the confidence intervals reported in the original study (resulting values are shown in Table 2). Then, following Equations 3.a and 3.b, the variance of the study-specific per-unit risk was calculated as the weighted sum of the original category-specific variances (se_cat²), each multiplied by the squared product of the inverse of mid_diff and the weight of the corresponding category, i.e., (weight/mid_diff)², where the weights are defined by the number of events observed in that category (ci) relative to the total number of events in the non-reference categories, as shown in Equation 2. The standard error of the study-specific continuous beta coefficient (se_study) was then derived as the square root of this variance and used to calculate the 95% confidence intervals for the continuous per-unit study-specific risk estimate (low95_study; up95_study).
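The sequence just described can be sketched end to end in a short script. All input numbers below are fictitious, in the spirit of the example file in the Supplementary Materials (which uses R and Excel; a Python sketch reusing the same variable names is given here):

```python
import math

Z = 1.96
# Fictitious inputs (NOT the Mataloni et al. estimates)
mids   = [1.0, 3.0, 6.0, 9.5]       # category midpoints, reference first
hr     = [1.10, 1.20, 1.35]         # HRs for the three non-reference categories
ci_low = [0.95, 1.00, 1.10]         # lower 95% CI bounds
ci_up  = [1.27, 1.44, 1.66]         # upper 95% CI bounds
events = [40, 35, 25]               # events per non-reference category

mid_diff = [m - mids[0] for m in mids[1:]]                      # exposure contrasts
beta_cat = [math.log(h) / d for h, d in zip(hr, mid_diff)]      # Equation 1
se_cat   = [(math.log(u) - math.log(l)) / (2 * Z)
            for l, u in zip(ci_low, ci_up)]                     # SE from 95% CIs
w        = [c / sum(events) for c in events]                    # event weights

beta_study = sum(wi * b for wi, b in zip(w, beta_cat))          # Equation 2
var_study  = sum((wi / d) ** 2 * s ** 2
                 for wi, d, s in zip(w, mid_diff, se_cat))      # Equations 3.a-3.b
se_study   = math.sqrt(var_study)

hr_per_unit = math.exp(beta_study)                              # per 1 unit increase
low95, up95 = (math.exp(beta_study - Z * se_study),
               math.exp(beta_study + Z * se_study))
rse = se_study / abs(beta_study)                                # Equation 4
```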
Finally, to evaluate the robustness of the study-specific pooled effect estimate, the RSE (Equation 4) was calculated as the ratio of the standard error to the absolute value of the beta coefficient. Results are presented in Table 3.
This example illustrates the practical application of the proposed method for converting categorical risk estimates, typically reported for broad or study-specific exposure intervals, into a standardized continuous effect estimate. By doing so, it becomes possible to harmonize data across studies that use differing exposure definitions, units, or thresholds. This not only enhances the comparability of effect estimates, but also allows for their inclusion in a unified meta-analytic framework. As a result, the method addresses a common source of heterogeneity in environmental epidemiology and improves the robustness, precision, and interpretability of pooled risk estimates derived from diverse studies. There is no consensus on the RSE threshold below which the variability of an estimate can be considered acceptable. In general, the RSE should be as small as possible (e.g., <1), and the threshold should be identified specifically for the data and health topic considered. In this example, a value between 2 and 3 could be appropriate, because the strata-specific estimates were affected by large variability and, consequently, the pooled estimate may be only partially reliable. For this reason, estimates with RSE exceeding predefined thresholds should be flagged as unreliable and considered for exclusion in sensitivity analyses. Incorporating this criterion helps ensure that meta-analytical results are based on robust and informative estimates, reducing the risk of bias due to imprecise data.
Comparison with Existing Approaches and Methodological Refinements
To evaluate the performance of the proposed method, a comparison was conducted with two approaches for converting categorical risk estimates into continuous effects: the method by Greenland & Longnecker (1992)5 and the approach described by Hartemink et al. (2006)3. Both methods were applied to the same dataset from Mataloni et al. (2016)7, considering the same four outcomes of interest previously mentioned: mortality from natural causes, all cancers, cardiovascular diseases, and respiratory diseases.
The method presented in this article assumes not only linearity in effects within exposure categories, but also that the effect changes in a linear and constant manner across categories. This implies that the difference in effect between adjacent categories is assumed to follow a constant linear trend, without allowing for potential non-linear changes in risk. When pooling the category-specific effects, the variance of original estimates is used to obtain the variance of pooled effects, by assuming independence between category-specific effect estimates. Exposure categories are represented by midpoints, and weights are based on the observed number of cases (details provided previously).
The Greenland & Longnecker approach corrects for the correlation between category-specific estimates, which arises because all category-specific estimates share the same reference group. By reconstructing an empirically derived variance-covariance matrix, this method provides more accurate variance estimates and slightly more efficient confidence intervals.
The Hartemink method applies a simplified procedure based on assigning expected exposure values (e.g., from a fitted Gamma distribution) and performing a weighted linear regression without accounting for covariance or uncertainty propagation. As a result, confidence intervals tend to be unrealistically narrow.
Figure 1 shows the HRs per 1 ng/m³ increase in H2S, with 95% confidence intervals, for the original study (Mataloni 2016)7, the proposed method, Hartemink (2006)3, and Greenland & Longnecker (1992)5.
Across all outcomes, the beta coefficient estimates obtained with the three methods were similar (confidence intervals overlapped), indicating that the choice of method does not substantially affect the central trend when the exposure-response relationship is approximately linear. However, differences in precision were observed. Hartemink's method produced the narrowest confidence intervals, reflecting the absence of variance propagation from category-specific estimates. This limitation may lead to overconfidence in the results. The proposed method and the Greenland & Longnecker approach yielded wider and more realistic confidence intervals, as both incorporate variability from the original effect estimates. Greenland's method provided slightly more precise estimates and values closer to those reported by Mataloni, likely due to its explicit modelling of covariance. The difference between the estimates obtained with the proposed method and those from Greenland & Longnecker was not statistically significant, as confirmed by a formal test based on the z-score distribution.
The comparison highlights that, while all three methods converge on similar pooled effect estimates, the way in which the original variance of the category-specific effect estimates is propagated is critical for reliable inference. Methods that ignore covariance or category-level uncertainty, such as Hartemink, risk overstating precision. Greenland & Longnecker remains a reference approach for meta-analyses requiring maximum efficiency, whereas the method presented here offers a computationally efficient and robust alternative that is more readily usable by researchers without high-level statistical training.
In environmental epidemiology, exposure categories are often defined inconsistently across studies, and the highest category frequently lacks an upper limit. To address this issue, in “Step 1. Determination of Midpoints and Conversion of Categorical Estimates into Continuous Beta Coefficients”, an approach was proposed for assigning a reasonable upper limit to open-ended categories. Moreover, the midpoint of each exposure category was used as the representative exposure value for subsequent steps of the method. Both the specification of the upper limit and the choice of the representative category value involve arbitrary decisions, and alternative approaches have been described in the literature. To evaluate the robustness of the proposed method to these methodological assumptions, it was applied under several different scenarios and the resulting estimates were compared.
Specifically, two methodological aspects were examined:
1. the definition of the upper limit of the highest exposure category;
2. the method used to determine the representative central value of each category.
For the first aspect, the proposed method was applied under three alternative definitions of the upper limit of the open-ended category. First, the upper limit was set to the maximum exposure value reported in the original study. Second, the upper limit was estimated by adding the total width of the previous category to the lower bound of the open-ended category. Third, the upper limit was recalculated following the approach proposed by Doi et al. (2014)6, which assumes that the width of the highest category is a multiple (τ) of the width of the second-highest category. Following Doi and colleagues,6 τ was set to 2.
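The three upper-limit rules can be written side by side; the inputs below are hypothetical, and the Doi rule is implemented as described above (top-category width equal to τ times the previous width):

```python
def upper_limit(last_lower, prev_width, rule, max_observed=None, tau=2):
    """Upper limit of an open-ended top category under three rules:
    'max'   - maximum exposure value reported in the original study;
    'width' - lower bound plus the full width of the previous category;
    'doi'   - lower bound plus tau times the previous width (Doi et al., tau=2)."""
    if rule == "max":
        return max_observed
    if rule == "width":
        return last_lower + prev_width
    if rule == "doi":
        return last_lower + tau * prev_width
    raise ValueError(f"unknown rule: {rule}")

# Hypothetical top category starting at 8, previous category width 4
print(upper_limit(8, 4, "width"))       # → 12
print(upper_limit(8, 4, "doi"))         # → 16
print(upper_limit(8, 4, "max", 20.0))   # → 20.0
```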
For the second aspect, the results obtained by applying the proposed method were compared using two different approaches: 1. using the midpoint of each category as its representative exposure value; 2. using an alternative method proposed by Hartemink et al. (2006)3. In this latter approach, the average exposure within each category is estimated by fitting a statistical distribution (i.e., a Gamma distribution) to the available empirical data, producing a more realistic estimate of the category-specific average, since no uniform distribution of exposure values within each category is assumed, thus overcoming the difficulties associated with open-ended intervals.
The method was applied under these different scenarios, and the results are summarized in Figure 2.
These graphs showed that the estimated standard error of the pooled continuous beta coefficient varied minimally across scenarios and was generally only slightly sensitive to changes in category limits or central values. It was also observed that larger exposure category widths were associated with smaller estimated standard errors, consistent with the variance formula of the proposed method (Equation 3.a), in which the original variance of each categorical effect estimate is weighted by the inverse of the squared exposure contrast of the category, i.e., the difference between the representative exposure value of each category (Ēi) and that of the reference category (Ē1). Given that the square of this difference is in the denominator of the weight, larger exposure categories lead to greater Ēi − Ē1 values, producing a smaller overall variance compared with narrower exposure categories.
Overall, these results suggest that the proposed method is robust to variations in the definition of exposure category limits and in the choice of representative category values, showing consistent results in terms of the estimated pooled effects and standard errors and, consequently, confidence intervals.
Discussion
This paper presents a new method for pooling category-specific estimates into continuous effect measures per unit of exposure, particularly applicable to systematic reviews and meta-analyses in environmental health. Using an empirical example from a study where both categorical and continuous effect estimates were available in the context of waste disposal site-related outcomes (Mataloni et al., 2016)7, the proposed method was shown to provide an unbiased estimate of the ‘true’ continuous effect. The method is simple and easy to apply, requiring only basic empirical data from the original study (i.e., effect estimates and their standard errors for each exposure category, number of cases, and category exposure boundaries), and it is computationally efficient. Its simplicity stems from not needing to model risk variation within exposure categories or the covariance between category-specific effects. Despite its simplicity, the method yields results consistent with more complex approaches, such as modelling linear risk variation across categories (Hartemink method)3 or estimating the variance-covariance matrix from empirical data (Greenland & Longnecker method)5. It was also found to be robust to different choices of category-specific exposure contrasts, particularly when handling open-ended categories. As suggested by Savitz et al.,11 robust evidence in environmental health is best obtained from multiple studies rather than relying on single studies, especially for certain environmental exposures associated with greater uncertainty in potential health effects. This is particularly relevant for exposures related to localized sources such as waste disposal sites or incinerators, where uncertainty arises from low-level pollutant concentrations, the small size of the affected population, lack of control for individual-level confounders (e.g., smoking, BMI, when using health information system data), and poor mechanistic evidence.
In such contexts, where a single study may provide only modest signals of effect, pooling evidence from multiple studies offers significant added value. This approach strengthens the evidence for associations between environmental exposures and health outcomes and reduces ambiguity in evaluating potential causal relationships. Such evidence synthesis is necessary to guide public health actions concerning relevant environmental exposures.
A general methodological suggestion arising from the proposed method is to calculate the Relative Standard Error (RSE) whenever a meta-analysis includes continuous effect estimates derived from categorical data, and to plan a sensitivity analysis when the pooled estimate includes study-specific estimates with high RSEs, such as those illustrated in our applied example. This practical approach could enhance the reliability of pooled effect estimates in environmental health meta-analyses, strengthening the evidence base for public health actions and informing future health impact assessments of intervention scenarios.
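The RSE check described above amounts to expressing each study's standard error as a fraction of its effect estimate and flagging studies above a chosen threshold for a planned sensitivity analysis. A minimal sketch follows; the 30% threshold, study labels, and numerical values are illustrative assumptions, not values from the paper.

```python
def relative_standard_error(estimate, se):
    """Relative standard error: the standard error expressed
    as a fraction of the (absolute) effect estimate."""
    return abs(se / estimate)

# Assumption: an illustrative RSE threshold of 30% for flagging
# imprecise study-specific estimates (threshold choice is context-dependent).
THRESHOLD = 0.30

# Hypothetical study-specific continuous effect estimates (log scale) and SEs
studies = {"A": (0.050, 0.010), "B": (0.020, 0.015), "C": (0.080, 0.012)}

# Studies flagged for exclusion in a planned sensitivity analysis
high_rse = [label for label, (est, se) in studies.items()
            if relative_standard_error(est, se) > THRESHOLD]
```

In this hypothetical data, study "B" (RSE = 75%) would be flagged, and the meta-analysis would be re-run without it to assess the stability of the pooled estimate.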
Conclusions
Synthesizing evidence from multiple studies provides substantial added value, particularly in the field of environmental health, where exposures are often low-level, localized, and subject to residual confounding. High-quality systematic reviews and meta-analyses are essential to more accurately estimate the burden of disease related to environmental exposures.12 The heterogeneity of category-specific estimates in the original studies should not be considered a major limitation, as pooling methods, such as the one proposed here, can account for these differences. By applying such methods, more valid and reliable estimates of the effects of specific environmental exposures can be obtained, reducing the risk of over- or underestimating associated health risks and ultimately contributing to progress in public health decision-making.
Conflicts of interest: none declared.
References
- Brestoff JR, Van den Broeck J. Systematic literature review and meta-analysis. In: Van den Broeck J, Brestoff JR (eds). Epidemiology: Principles and Practical Guidelines. Springer Nature 2013. doi: 10.1007/978-94-007-5989-3_25
- Deener KCK, Sacks JD, Kirrane EF et al. Epidemiology: A foundation of environmental decision making. J Expo Sci Environ Epidemiol 2018;28(6):581-89. doi: 10.1038/s41370-018-0059-4
- Hartemink N, Boshuizen HC, Nagelkerke NJD, Jacobs MAM, van Houwelingen HC. Combining risk estimates from observational studies with different exposure cutpoints: a meta-analysis on body mass index and diabetes type 2. Am J Epidemiol 2006;163(11):1042-52. doi: 10.1093/aje/kwj141
- Lange S, Llamosas-Falcón L, Kim KV et al. A dose-response meta-analysis on the relationship between average amount of alcohol consumed and death by suicide. Drug Alcohol Depend 2024;260:111348. doi: 10.1016/j.drugalcdep.2024.111348
- Greenland S, Longnecker MP. Methods for trend estimation from summarized dose-response data, with applications to meta-analysis. Am J Epidemiol 1992;135(11):1301-9. doi: 10.1093/oxfordjournals.aje.a116237
- Doi K, Mieno MN, Shimada Y, Yonehara H, Yoshinaga S. Methodological extensions of meta-analysis with excess relative risk estimates: application to risk of second malignant neoplasms among childhood cancer survivors treated with radiotherapy. J Radiat Res 2014;55(5):885-901. doi: 10.1093/jrr/rru045
- Mataloni F, Badaloni C, Golini MN et al. Morbidity and mortality of people who live close to municipal waste landfills: a multisite cohort study. Int J Epidemiol 2016;45(3):806-15. doi: 10.1093/ije/dyw052
- Bottini I, Vecchi S, De Sario M et al. Residential exposure to municipal solid waste incinerators and health effects: a systematic review with meta-analysis. BMC Public Health 2025;25(1):1989. doi: 10.1186/s12889-025-23150-z
- National Center for Health Statistics. Relative standard error (RSE). Available from: https://www.cdc.gov/nchs/hus/sources-definitions/rse.htm
- European Union. Method of computing relative standard errors (CV). Available from: https://ec.europa.eu/eurostat/cache/metadata/Annexes/aei_pestuse_esqrspu_pl_an_2.pdf
- Savitz DA, Wellenius GA. Consequential (and inconsequential) environmental epidemiology. Environ Epidemiol 2025;9(6):e433. doi: 10.1097/EE9.0000000000000433
- Sheehan MC, Lam J. Use of Systematic Review and Meta-Analysis in Environmental Health Epidemiology: a Systematic Review and Comparison with Guidelines. Curr Environ Health Rep 2015;2(3):272-83. doi: 10.1007/s40572-015-0062-z