Introduction and Literature Review

Description

Write an Introduction and Literature Review, using two or three of the resources I attached. See the example at the end of the assignment structure document. I am going to build research on this assignment afterward, so this is the base of the research. My topic is global warming.

INTRODUCTION + LIT. REVIEW: 300 POINTS TOTAL
An Introduction is when you provide your readers with the background necessary to see the particular
topic of your research in relation to a general area of study. In the Introduction (Stage I), you will need to
1) begin with accepted statements of fact related to your general area; 2) identify one subarea (includes
your topic) within your general area; 3) write a thesis statement indicating interest/value.
The Literature Review is a critical analysis of sources related to your research topic. Your Literature
Review will consist of the following stages: Stage II (the literature review itself) – more specific statements about
the aspects of the problem already studied by other researchers; and Stage III (the gap statement) – statement(s) that indicate the need for more investigation.
You will also include a Conclusion. This will be information about your present research that will drive
the contribution to the field. You will include:


Stage IV (the purpose statement) – very specific statement(s) giving the purpose/objectives of
your study;
Stage V (the value statement) – statement(s) that give a value or justification for carrying out
the study.
Evaluation:
See the Assignment Guidelines and Rubric Sheet for more details (below).
Example:
See below.
EVALUATION RUBRIC: INTRODUCTION + LITERATURE REVIEW
Stage I is when you provide your readers with the background necessary to see the particular topic of your research in relation to a
general area of study. In Stage I, you will need to 1) begin with accepted statements of fact related to your general area; 2) identify one
subarea (includes your topic) within your general area; and 3) indicate your topic.
Content (50 pts)
Excellent (50 – 45 pts):
Contains clear statement(s) concerning the general area
Contains clear statement(s) focusing on one subarea of the general area of the study
Contains clear sentence(s) indicating the research topic
Average (44 – 34 pts):
Contains statement(s) concerning the general area; however, a bit of work is needed
Contains statement(s) focusing on one subarea of the general area of the study; however, a bit of work is needed
Contains sentence(s) indicating the research topic; however, a bit of work is needed
Poor (33 – 0 pts):
Statement(s) concerning the general area need(s) a lot of work or is/are missing
Statement(s) focusing on one subarea of the general area of the study need(s) a lot of work or is/are missing
Sentence(s) indicating the research topic need(s) a lot of work or is/are missing

Organization (20 pts)
Excellent (20 – 18 pts):
Organized clearly and coherently
Illustrates how ideas are connected and developed
Effective use of transitions
Average (17 – 12 pts):
Needs a bit of work with organization and coherence
Ideas need to be better connected and developed
Needs some work with transitions
Poor (11 – 0 pts):
Is not organized clearly and coherently
Ideas are not well connected and developed
Transitions are not used

APA and documentation (15 pts)
Excellent (15 pts):
No issues with APA formatting, font style and size, and in-text citations
Average (14 – 9 pts):
A few issues with APA formatting, font size and style, and/or in-text citations
Poor (8 – 0 pts):
Many issues with APA formatting, font size and style, and/or in-text citations

Sentences, vocabulary, grammar (15 pts)
Excellent (15 pts):
Effective use of different sentence structures
Appropriate use of tense
Grammatical error avoidance
Appropriate academic word choice
Average (14 – 9 pts):
A few issues with one or two of the following: use of different sentence structures; use of tense; grammatical error avoidance; and appropriate academic word choice
Poor (8 – 0 pts):
Many issues with the following: use of different sentence structures; use of tense; grammatical error avoidance; and appropriate academic word choice
Sub-score = __________________ x 3 = ____________________ (final score)
EXAMPLE INTRODUCTION AND LITERATURE REVIEW *see formatting (pp. 2-3)
*This is an example with a combined Introduction and Literature Review
Local Residents’ Attitudes towards Ecotourism and Conservation in National Parks:
a Case Study of the National Park for Amur Tiger and Amur Leopard in China
XXXX
Oklahoma State University
Local Residents’ Attitudes towards Ecotourism and Conservation in National Parks:
a Case Study of the National Park for Amur Tiger and Amur Leopard in China
Introduction and Literature Review
Protected areas are known as an effective way to achieve long-term conservation of nature.
A national park system is one category of protected area that not only supports nature
conservation but can also contribute to people's livelihoods, especially at the local level. It
takes into account the needs of residents and local communities, who can influence the
management and efficiency of a park. Aiming to protect China's large natural ecosystems, the
central government unveiled its overall plan for establishing a new national
park system in 2017 in order to combine ecological protection with sustainable
development. Therefore, various measures need to be taken at the local level in China, and it is
also important to understand residents in national parks, who could be a great force in establishing
the new system.
Some research has been conducted to study the current situation of protected areas, to
examine the necessity of a well-established national park system in China, or to identify the
environmental attitudes and behaviors of local residents in protected areas in different countries
(Imran, Alam, & Beaumont, 2014; Sirivongs & Tsuchiya, 2012; Xu et al., 2017; Yuan, Dai, &
Wang, 2008). Xu et al. (2017) investigated China's protected areas in terms of biodiversity and ecosystem
services. They put forward four key regulating services of ecosystem: water retention, soil
retention, sandstorm prevention, and carbon sequestration. They found that though China’s
natural reserves cover 15.1% of the nation's land territory, these reserves include only
10.2–12.5% of the source areas for the four key regulating services, which also indicates an
uneven spatial distribution. They believed that a national park system would be a good solution to
these problems. Apart from national park establishment, some studies also indicate that local
support cannot be neglected because residents are a part of the national parks. Yuan, Dai, and
Wang (2008) conducted their study in China, observing that factors such as income from
the collection of forest products and migrant labor led to residents' negative attitudes towards
conservation at reserves, and pointing out that low levels of education and low benefits made it
hard for ecotourism to gain support from residents. In contrast, Sirivongs and Tsuchiya's
(2012) study conducted in Lao People's Democratic Republic found that more than half of
the residents were very positive about a national park, and they perceived some benefits like an
increase in public security and income generation from the park, which significantly influenced
their attitude and participation. Imran, Alam and Beaumont (2014) carried out their study in
Pakistan and investigated the factors that affected residents’ environmental orientations,
reporting that awareness and information, governance structure, and resource usage rights also
impacted people's intention to participate in pro-environmental behavior. Although some
research has been conducted to explore the environmental attitudes and behaviors of residents or
stakeholders in protected areas in many countries, few studies have been done on the
national park system in China.
Hence, the purpose of this report is three-fold: 1) to find out local residents' attitudes
towards ecotourism and conservation in national parks; 2) to investigate the factors that could
influence their attitudes and behaviors and 3) to discuss how to get support from residents to
build sustainable tourism as well as a good relationship with those residents in China’s national
parks. Such information may offer practical suggestions to local governments making
decisions on national park policies and regulations. It may also help travel agencies, related
organizations, and companies gain a better understanding of residents' actual needs as they develop
ecotourism programs and sustainable tourism businesses in national park areas that could bring
mutual benefits.
References
Imran, S., Alam, K., & Beaumont, N. (2014). Environmental orientations and environmental
behavior: Perceptions of protected area tourism stakeholders. Tourism Management, 40,
290-299.
Sirivongs, K., & Tsuchiya, T. (2012). Relationship between local residents’ perceptions, attitudes
and participation towards national protected areas: A case study of Phou Khao Khouay
National Protected Area, central Lao PDR. Forest Policy and Economics, 21, 92–100.
Xu, W., Xiao, Y., Zhang, J., Yang, W., Zhang, L., Hull, V., …Ouyang, Z. (2017). Strengthening
protected areas for biodiversity and ecosystem services in China. Proceedings of the
National Academy of Sciences of the United States of America, 114(7), 1601-1606.
Yuan, J., Dai, L., & Wang, Q. (2008). State-led ecotourism development and nature
conservation: A case study of the Changbai Mountain Biosphere Reserve, China. Ecology
and Society, 13(2), 55.
*not all references shown in example
Environmental Communication
ISSN: 1752-4032 (Print) 1752-4040 (Online) Journal homepage: https://www.tandfonline.com/loi/renc20
Global Warming’s “Six Americas Short Survey”:
Audience Segmentation of Climate Change Views
Using a Four Question Instrument
Breanne Chryst, Jennifer Marlon, Sander van der Linden, Anthony
Leiserowitz, Edward Maibach & Connie Roser-Renouf
To cite this article: Breanne Chryst, Jennifer Marlon, Sander van der Linden, Anthony Leiserowitz,
Edward Maibach & Connie Roser-Renouf (2018) Global Warming’s “Six Americas Short
Survey”: Audience Segmentation of Climate Change Views Using a Four Question Instrument,
Environmental Communication, 12:8, 1109-1122, DOI: 10.1080/17524032.2018.1508047
To link to this article: https://doi.org/10.1080/17524032.2018.1508047
Published online: 23 Aug 2018.
ENVIRONMENTAL COMMUNICATION
2018, VOL. 12, NO. 8, 1109–1122
https://doi.org/10.1080/17524032.2018.1508047
RESEARCH ARTICLE
Global Warming’s “Six Americas Short Survey”: Audience
Segmentation of Climate Change Views Using a Four Question
Instrument
Breanne Chrysta,b, Jennifer Marlonb, Sander van der Lindenc, Anthony Leiserowitzb,
Edward Maibachd and Connie Roser-Renoufd
aDepartment of Statistics, Yale University, New Haven, CT, USA; bYale Program on Climate Change Communication, Yale University, New Haven, CT, USA; cDepartment of Psychology, Downing Site, University of Cambridge, Cambridge, UK; dCenter for Climate Change Communication, George Mason University, Fairfax, VA, USA
ABSTRACT
Audience segmentation has long been used in marketing, public health, and communication, and is now becoming an important tool in the environmental domain as well. Global Warming's Six Americas is a well-established segmentation of Americans based on their climate change beliefs, attitudes, and behaviors. The original Six Americas model requires a 36-question screener, and although there is increasing interest in using these segments to guide education and outreach efforts, the number of survey items required is a deterrent. Using 14 national samples and machine learning algorithms, we identify a subset of four questions from the original 36, the Six Americas Short SurveY (SASSY), that accurately segments survey respondents into the Six Americas categories. The four items cover respondents' global warming risk perceptions, worry, expected harm to future generations, and personal importance of the issue. The true positive accuracy rate for the model ranges between 70% and 87% across the six segments on a 20% hold-out set. Similar results were achieved with four out-of-sample validation data sets. In addition, the screener showed test-retest reliability on an independent, two-wave sample. To facilitate further research and outreach, we provide a web-based application of the new short screener.
ARTICLE HISTORY
Received 13 February 2018; Accepted 10 July 2018
KEYWORDS
Six Americas; global warming; segmentation
1. Introduction
The majority of climate scientists have concluded that human-caused climate change is happening
and poses serious risks to society (Cook et al., 2016; Pachauri et al., 2014). Addressing global climate
change will therefore require urgent and substantial changes in human behavior, decision-making,
and policy-support (van der Linden, Maibach, & Leiserowitz, 2015). In order to enable more effective
mitigation and adaptation, it is important to understand how the public thinks, feels, and acts on the
issue of global warming. The efficacy of public engagement programs typically improves with the
ability to tailor specific messages to a well-defined target audience (Hine et al., 2014). Tailored information is generally perceived as more credible, and it is more likely to be read and recalled. Communication campaigns that employ audience segmentation have proven successful not only in
changing perceptions, but in changing behavior, in domains from politics to public health (Harris,
Lock, Phillips, Reynolds & Reynolds, 2010; Maibach, Weber, Massett, Hancock & Price, 2006; Noar,
Benac, & Harris, 2007). The process of segmentation involves identifying (within a target population) relatively homogeneous subgroups that share similar psychographic profiles. Understanding
the unique beliefs, attitudes, and behaviors of each subgroup then allows for the development of tailored frames and communications. For example, prior research has shown that framing climate
change as a health, national security, or environmental issue can have diverging effects on different
audiences (Myers, Nisbet, Maibach, & Leiserowitz, 2012).
A comprehensive and well-known inventory of American views on global warming was conducted jointly by the Yale Program on Climate Change Communication and the George Mason University Center for Climate Change Communication (Maibach, Leiserowitz, Roser-Renouf, & Mertz,
2011). Since 2008, 14 nationally-representative surveys of American adults that include a large number of identical questions have been carried out (see Leiserowitz, Maibach, Roser-Renouf, Feinberg &
Rosenthal, 2016). An audience segmentation analysis of the first survey (n = 2164) that was applied
to each subsequent survey (n > 18,000) identified six unique “interpretive communities” within
society who each respond to the issue of global warming in their own distinct ways; the “Six Americas” (Leiserowitz, 2005; Maibach et al., 2011; Roser-Renouf, Stenhouse, Rolfe-Redding, Maibach &
Leiserowitz, 2014). The original Six Americas model used 36 variables to classify respondents into
segments based on Latent Class Analysis (LCA) using the LatentGold 4.5 software (Magidson & Vermunt, 2002; Maibach et al., 2011; Vermunt & Magidson, 2002). The survey items cover a wide range
of global warming beliefs, risk perceptions, policy support, and behaviors. The six segments range
from those very concerned about global warming to those who are strongly opposed to taking action
against global warming (Figure 1). The six segments are the Alarmed, the Concerned, the Cautious,
the Disengaged, the Doubtful, and the Dismissive. Each segment differs meaningfully in their beliefs,
attitudes, issue involvement, behaviors, and policy-preferences about climate change (Maibach et al.,
2011; Roser-Renouf et al., 2014). For example, the Alarmed segment includes those individuals who
are most convinced that human-caused climate change is happening, are highly engaged with the
issue, and ready to take action. In contrast, the Dismissive are on the other end of the spectrum, and
strongly believe that global warming is not happening or human-caused and actively oppose any
action on climate change (Maibach et al., 2011). Individuals in the middle four groups vary in
their sense of urgency or certainty about the problem, and tend to display lower personal and political engagement than those in the extreme categories (Leiserowitz, 2005).
Figure 1. Bar charts of the proportions of each segment in the March 2016 survey. The chart on the left contains the proportions for the full 36-question screener survey and the chart on the right represents the 4-item SASSY screener results.
The Six Americas instrument was originally developed to profile the American population, and
has been used for that purpose to support an array of research, education, and communication
efforts by scholars, practitioners, and decision makers (Akerlof, Bruff, & Witte, 2011; Costello,
2014; Fisher, 2014; Leiserowitz et al., 2016; Maibach et al., 2011). The method has also been used
to profile specific sub-populations, such as American zoo, national park, and aquarium visitors
(Kelly et al., 2014; Schweizer, Davis, & Thompson, 2013) and agricultural agents (Bowers, Monroe,
& Adams, 2016); to study specific audience segments, such as the Alarmed (Doherty & Webler, 2016); and to track changes in audience composition as an outcome measure for assessing
the impact of targeted education interventions (Flora et al., 2014).
Recently, a growing interest in the value of tailoring communications about climate change to
specific audiences has led to the development of segmentation studies around the world (Hine
et al., 2014). For example, one study in Australia identified four segments in the Australian population
based on knowledge of and concern about climate change (Ashworth, Jeanneret, Gardner, & Shaw,
2011). A more recent study identified “Six Australia’s”, analogous to the Six Americas (Morrison, Duncan, Sherley, & Parton, 2013). In Germany, five segments – notably missing a dismissive group – were
identified, based on beliefs, attitudes, and media consumption (Metag, Füchslin, & Schäfer, 2015). Six
segments were also identified in India, using clustering analysis, ranging from the Informed to the Disengaged (Leiserowitz, Thaker, Feinberg, & Cooper, 2013). A recent study in Singapore identified three
segments, the concerned, the disengaged, and the passive (Detenber, Rosenthal, Liao, & Ho, 2016) and
the British Broadcasting Company conducted a segmentation analysis with over 33,000 residents from
six countries in Asia (BBC Media Action, 2013) with the explicit aim of developing more effective communication strategies.1 Segmentation analyses have also been applied to various sub-populations, such
as US corn belt farmers’ views on climate change (Arbuckle et al., 2014). In short, regardless of the
population or exact number of segments, scholars around the world have found audience segmentation
to be a valuable tool, whether for assessing current issue understanding, developing communication
strategies, or developing new messages to advance dialogue and action.
The effectiveness of a segmentation tool depends on the development of a measure that is concise,
reliable, and valid in describing individual differences in public opinion, including cognitive and affective issue engagement, and behavior (Slater, 1996). This need, and the growing interest in climate
change audience segmentation in particular, motivates our current work. Specifically, more empirical
research on shorter versions of the original 36-item screener has been called for (Hine et al., 2014),
especially since the length of the full 36-item survey may be prohibitive in many research studies.
Moreover, in light of “Big Data” opportunities, there is an increasing demand for short measures of
psychological constructs and scales that are not cognitively taxing or time-consuming. A shorter Six
Americas survey instrument would make segmentation feasible for diverse researchers, and would
allow them to quickly gauge the range and distribution of climate opinions held by audiences of interest. Previous work by Maibach et al. identified a reduced set of 15 items, the Reduced Discriminant
Model Tool (RDM), for identifying the Six Americas (Maibach et al., 2011). The principal aim of
this research is to identify the smallest subset of the 15-item RDM Tool capable of identifying each
of the six segments with sufficient accuracy, defined as roughly 70% accurately categorized respondents
in each of the six segments on out-of-sample validation data (Fawcett, 2006; Kleinbaum & Klein, 2010).
Accordingly, we advance a new screener tool that achieves this goal with only four survey items.
Table 1. Dataset time and mode, domain, sample sizes, and purpose.

ID  Time (and mode)           Domain    Type   Sample  Purpose
1   October 2008 (online)     US        Full   2139    Test/Train
2   January 2010 (online)     US        Full   1001    Test/Train
3   June 2010 (online)        US        Full   1024    Test/Train
4   May 2011 (online)         US        Full   486     Test/Train
5   November 2011 (online)    US        Full   981     Test/Train
6   April 2012 (online)       US        Full   996     Test/Train
7   September 2012 (online)   US        Full   1058    Test/Train
8   April 2013 (online)       US        Full   1035    Test/Train
9   December 2013 (online)    US        Full   823     Test/Train
10  October 2014 (online)     US        Full   1272    Test/Train
11  March 2015 (online)       US        Full   1263    Test/Train
12  October 2015 (online)     US        RDM    1329    Validation
13  March 2016 (online)       US        Full   1203    Validation
14  November 2016 (online)    US        RDM    1226    Validation
15  June 2013 (phone)         Colorado  Full   780     Validation
16  February 2017 (online)    US        SASSY  241     Test-Retest

Notes: RDM refers to the 15-item Reduced Discriminant Model from Maibach et al. (2011). SASSY refers to the 4-item Six Americas Short Survey described in this work.
2. Methods
2.1. Data and sample
To develop the Six Americas short survey we use data from 14 nationally representative opinion surveys of American adults conducted between 2008 and 2016 (Table 1). Thirteen of the fourteen surveys
were probability-based and conducted online (by GfK Knowledge Networks). One survey was conducted by telephone using the same wording as the online panel surveys (by Abt SRBI). All observations that were missing an assigned Six Americas segmentation were excluded from the analysis
(423 observations across 9 surveys). We focus our analysis on the fifteen questions from the Reduced
Discriminant Model (Maibach et al., 2011). In total, there are 41 item non-responses, roughly 0.3% of
the data. These missing observations were filled using hot deck imputation (Myers, 2011), which fills
the missing responses with those from respondents that are otherwise similar, based on their responses
to the other survey items (Cranmer, Gill, Jackson, Murr & Armstrong, 2016; Cranmer & Gill, 2013).
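The paper performed this imputation step in R; as a rough illustration of the hot-deck idea (filling a missing response from the most similar complete "donor" respondent), a minimal Python sketch might look like the following. The item names and response values are hypothetical, not the survey's actual coding.

```python
# Minimal hot-deck imputation sketch (illustrative only; the paper used R).
# A missing item is filled with the response of the most similar complete donor,
# where similarity = number of matching answers on the other items.

def hot_deck_impute(rows):
    """rows: list of dicts mapping item -> response, with None for item non-response."""
    complete = [r for r in rows if None not in r.values()]
    filled = []
    for r in rows:
        if None not in r.values():
            filled.append(dict(r))
            continue
        observed = {k: v for k, v in r.items() if v is not None}
        # Donor = complete respondent agreeing with the most observed answers.
        donor = max(complete,
                    key=lambda d: sum(d[k] == v for k, v in observed.items()))
        filled.append({k: (v if v is not None else donor[k]) for k, v in r.items()})
    return filled

rows = [
    {"worry": 3, "harm_self": 2, "harm_future": 3, "importance": 3},
    {"worry": 1, "harm_self": 1, "harm_future": 1, "importance": 1},
    {"worry": 3, "harm_self": None, "harm_future": 3, "importance": 3},
]
print(hot_deck_impute(rows)[2]["harm_self"])  # 2 (donor is the first respondent)
```

Real hot-deck procedures vary in how donors are matched and drawn; this nearest-match variant is only one option.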
We evaluate the test-retest reliability of the model using observations collected by the George
Mason University Center for Climate Change Communication in February and March of 2017.
This survey was administered online with a 14–21 day follow-up. We focus our test-retest analysis
on the control group from this experiment (n=241). This research received ethical approval from the
Yale Institutional Review Board.
2.2. Empirical strategy
2.2.1. Variable selection using supervised machine learning
The construction of a classification procedure from a set of data for which the correct classes are known
is often referred to as supervised machine learning (Michie, Spiegelhalter, & Taylor, 1994). We
implement a version of this approach using the existing Six Americas segments as the known classes.
To validate our model we rely on cross-validation, a two-stage process including a training stage and a
testing stage. In the training stage, a classifier algorithm is developed using the responses (data points)
from individual participants and the correct categories associated with them to learn a specific pattern
for how the data points map onto the categories. Once the classifier is trained, it then acts as a function
to take in additional data points and produce the predicted classifications. We use cross-validation to
perform the analysis on one subset of the data (the training set) and subsequently evaluate the performance of the model on the other subset (the testing set). Finally, to avoid biasing our ultimate model
choice, we validated the chosen model on held-out samples. In contrast with standard selection
methods (e.g. correlation, stepwise regression), a (supervised) machine learning approach can be evaluated by its effectiveness in making accurate predictions using new, independent samples.
Our model was trained and tested on the first 11 of the 14 surveys. The October 2015 survey,
which included just the 15 items from the Reduced Discriminant Model, was withheld from the
training of the model and used for validation purposes. The most recent surveys from March and
November of 2016 were also withheld for validation to test the accuracy of the final model. The
remaining 11 surveys were separated into two non-overlapping sets, 80% of the observations in a
training set and 20% in a testing set (Pentreath, 2015), using the R package caret, such that the distribution of the Six Americas segments was approximately constant between the two sets.
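The 80/20 split that keeps the distribution of segments approximately constant was done with the R caret package; the underlying stratified sampling can be sketched in Python as follows (segment labels and sizes here are made up for illustration).

```python
# Stratified 80/20 train/test split sketch (the paper used R's caret package).
# Respondents are grouped by segment so each segment's share is preserved.
import random

def stratified_split(records, key, train_frac=0.8, seed=42):
    rng = random.Random(seed)
    by_segment = {}
    for rec in records:
        by_segment.setdefault(rec[key], []).append(rec)
    train, test = [], []
    for segment, members in by_segment.items():
        members = members[:]          # copy so shuffling is non-destructive
        rng.shuffle(members)
        cut = round(train_frac * len(members))
        train.extend(members[:cut])
        test.extend(members[cut:])
    return train, test

records = [{"id": i, "segment": s} for i, s in
           enumerate(["Alarmed"] * 40 + ["Concerned"] * 40 + ["Dismissive"] * 20)]
train, test = stratified_split(records, key="segment")
print(len(train), len(test))  # 80 20
```

Because each segment is split separately, a 20% minority segment still contributes exactly 20% of both the training and testing sets.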
The analysis was conducted in R and focused on the 15 questions included in the Reduced Discriminant Model. A generalized boosted regression modeling (GBM) algorithm (200 trees, 10-fold cross
validation, multinomial distribution) was employed to identify the key variables for predicting segment membership (Kuhn, 2008). GBM is a broad method that uses classification and regression
trees (boosting refers to an ensemble method where final predictions are the result of aggregated predictions from individual models). For a user-friendly introduction to machine learning, regression
trees, and classification please see Strobl, Malley, and Tutz (2009). Whereas traditional ensemble techniques such as random forests rely on simple averaging of the models, gradient boosted machines consecutively fit new models to provide a more accurate estimate of the response variable (Natekin &
Knoll, 2013). Specifically, we implemented the GBM package in R (Friedman, 2001; Ridgeway, 2007).
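The GBM analysis itself was done with the R gbm package; as a toy illustration of the boosting idea the paragraph describes (each new model fit sequentially to the residuals of the current ensemble, in contrast with averaging independent models), consider the following Python sketch using one-dimensional data and decision stumps. It is illustrative only and not the authors' implementation.

```python
# Toy gradient boosting for regression: stumps fit sequentially to residuals.

def fit_stump(xs, residuals):
    # Best single-split predictor minimizing squared error.
    best = None
    for split in xs:
        left = [r for x, r in zip(xs, residuals) if x <= split]
        right = [r for x, r in zip(xs, residuals) if x > split]
        if not left or not right:
            continue
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - lmean) ** 2 for r in left) + sum((r - rmean) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, split, lmean, rmean)
    _, split, lmean, rmean = best
    return lambda x: lmean if x <= split else rmean

def boost(xs, ys, rounds=50, lr=0.5):
    stumps = []
    preds = [0.0] * len(xs)
    for _ in range(rounds):
        # Each new stump is fit to what the current ensemble still gets wrong.
        residuals = [y - p for y, p in zip(ys, preds)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        preds = [p + lr * stump(x) for p, x in zip(preds, xs)]
    return lambda x: sum(lr * s(x) for s in stumps)

xs = [1, 2, 3, 4, 5, 6]
ys = [1, 1, 1, 5, 5, 5]
model = boost(xs, ys)
print(round(model(2), 2), round(model(5), 2))  # 1.0 5.0
```

A random forest would instead average many stumps fit independently to the raw targets; the sequential residual fitting is what makes this boosting.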
2.2.2. Six Americas categorization using multinomial logistic regression
In the second part of the analysis, multinomial-logistic regression models were iteratively fit to the data
using the Six Americas segmentation as the dependent variable and the top variables from the previous
GBM procedure as independent variables. That is, the first model was fit using just the top variable, the
second with the top two, and so on, adding the next variable on the list until satisfactory accuracy, roughly 70%, was achieved (Bekkar, Djemaa, & Alitouche, 2013; Fawcett, 2006; Kleinbaum & Klein, 2010). Multinomial-logistic regression identifies group membership for multi-class
dependent variables (Venables & Ripley, 2013). The model achieved our pre-specified level of accuracy, for both within- and out-of-sample test data, using the top four variables. In short, we use a multinomial logistic regression to generate coefficients (odds-ratios) for each of the four questions. The (multi-class) dependent variable is the Six Americas segmentation (six categories). Based on
the responses to the 4 items, the odds-ratios predict the likelihood of membership in each of the six
categories (e.g. the odds of being in the Disengaged vs. the Alarmed). The highest predicted odds across
the four questions are used to classify respondents into one of the Six Americas segments.
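The classification step can be sketched as follows. The coefficient values below are made up for illustration (the real log-odds come from the fitted model reported in Table 2); the structure is the standard multinomial-logistic one, with Alarmed as the reference category whose score is zero by construction.

```python
# Sketch of the SASSY classification step: each segment gets a linear score
# (log-odds vs. the Alarmed reference) from the respondent's answers, and the
# respondent is assigned to the highest-probability segment.
import math

SEGMENTS = ["Alarmed", "Concerned", "Cautious", "Disengaged", "Doubtful", "Dismissive"]

# Log-odds contributions per (item, response) -- hypothetical numbers, not Table 2's.
COEFS = {
    ("worry", "Not very"): {"Concerned": 0.5, "Cautious": 1.8, "Disengaged": 1.0,
                            "Doubtful": 2.5, "Dismissive": 3.0},
    ("importance", "Not too imp."): {"Concerned": 0.3, "Cautious": 1.5, "Disengaged": 1.2,
                                     "Doubtful": 2.2, "Dismissive": 2.8},
}

def classify(answers):
    # Alarmed is the reference category: its log-odds score stays 0.
    scores = {seg: 0.0 for seg in SEGMENTS}
    for item_response in answers.items():
        for seg, logodds in COEFS.get(item_response, {}).items():
            scores[seg] += logodds
    total = sum(math.exp(s) for s in scores.values())
    probs = {seg: math.exp(s) / total for seg, s in scores.items()}
    return max(probs, key=probs.get), probs

segment, probs = classify({"worry": "Not very", "importance": "Not too imp."})
print(segment)  # Dismissive
```

With fitted rather than invented coefficients, this is exactly the "highest predicted odds wins" rule the paragraph describes.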
3. Results
Results indicate that four variables are sufficient to identify the Six Americas. The four questions
include respondents’ worry about global warming, risk perceptions of the impact that global warming will have on them personally and on future generations, and personal importance of the issue.
Our model achieves a minimum of 70% true positive rate, i.e. the proportion of accurately labeled
respondents, in each of the Six Americas segments on a test set, with very similar results on four
out-of-sample validation sets. The model coefficients, listed as odds ratios with the Alarmed segment
as the base, are shown in Table 2. The coefficients can be interpreted as follows: If a respondent
answered “Not at all” versus “Don’t know” to the question “How much do you think global warming
will harm you personally?” the odds of being classified in the Dismissive segment (versus the
Alarmed) increase by a factor of 22.56 to 1.
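As a quick numeric illustration of that interpretation, with an assumed, purely hypothetical baseline: if the prior odds of Dismissive versus Alarmed were 1 to 10, this answer alone would raise them to roughly 2.3 to 1.

```python
# Hypothetical baseline odds; the odds-ratio 22.56 is the one quoted in the text.
baseline_odds = 0.10                 # assumed prior odds of Dismissive vs. Alarmed
odds_ratio = 22.56                   # "Not at all" vs. "Don't know", harm-you-personally item
updated_odds = baseline_odds * odds_ratio
print(round(updated_odds, 3))        # 2.256
```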
To evaluate model performance we fit five separate testing/validation sets. The first set is the 20%
holdout from surveys 1–11, called the “test set.” The second, third, and fourth were all nationally
Table 2. Coefficients are listed as odds-ratios.
The multinomial-logistic model predicts membership in the Concerned, Cautious, Disengaged, Doubtful, and Dismissive segments (reference category: Alarmed) from the response levels of the four questions:
"How much do you think global warming will harm future generations of people?" – Not at all / Only a little / A mod. amt. / A great deal (reference: Don't know)
"How important is the issue of global warming to you personally?" – Not too imp. / Somewhat / Very / Extremely (reference: Not at all important)
"How worried are you about global warming?" – Not very / Somewhat / Very (reference: Not at all worried)
"How much do you think global warming will harm you personally?" – Not at all / Only a little / A mod. amt. / A great deal (reference: Don't know)
Akaike Inf. Crit. = 11,869.920
Notes: Conc. is Concerned, Caut. is Cautious, Diseng. is Disengaged, Doubt. is Doubtful, and Dismis. is Dismissive. Reference category: Alarmed. Reference response by question: Don't know; Not at all important; Not at all worried; Don't know. Standard errors are provided in parentheses. *p < 0.05; **p < 0.01; ***p < 0.001.
representative surveys, two that segmented respondents based on the 15 item RDM screener from
October 2015 and November 2016 (Appendix, Tables A1 and A3) and one using the full 36 item
survey from March 2016 (Appendix, Table A2). A fifth set was a representative state-wide survey
of Colorado conducted in 2013 (Leiserowitz, Feinberg, Howe, & Rosenthal, 2013) (Appendix,
Table A4).
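The segment assignment itself comes from a multinomial-logistic model: each non-reference segment receives a linear score (its log-odds relative to the Alarmed reference category, built from an intercept plus the coefficients for a respondent's answers), and a softmax converts the scores into probabilities, with the respondent assigned to the most probable segment. A minimal sketch of that final step, using invented score values rather than the fitted coefficients from the tables:

```python
import math

# Hypothetical log-odds scores for one respondent, one per segment.
# The reference category (Alarmed) is fixed at 0; all other values are
# invented for illustration, not taken from the paper's coefficient tables.
scores = {
    "Alarmed": 0.0,        # reference category
    "Concerned": 1.2,
    "Cautious": 0.4,
    "Disengaged": -1.5,
    "Doubtful": -0.8,
    "Dismissive": -2.0,
}

def softmax(log_odds):
    """Convert per-segment log-odds into probabilities that sum to 1."""
    exps = {seg: math.exp(v) for seg, v in log_odds.items()}
    total = sum(exps.values())
    return {seg: e / total for seg, e in exps.items()}

probs = softmax(scores)
predicted = max(probs, key=probs.get)  # segment with the highest probability
```

With these invented scores the respondent would be classified as Concerned; in the fitted model the scores are instead computed from the four survey responses and the estimated coefficients.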
Several model performance measures were calculated for the test set data, including confusion
matrices, precision, recall, F1-scores, and average accuracy. The average accuracy, or the average
per-category effectiveness of the classifier, for the four-question screener is 0.91 (Sokolova & Lapalme,
2009). The confusion (or “error”) matrix describes the performance of a classification model on a set of
test data by listing the true positive, false positive, true negative, and false negative values of the model
fit. Here the values in the diagonal represent the true positives, the off-diagonal column entries are false
negatives, and the off-diagonal row entries represent false positives (Table 3). The precision metric
measures the class agreement with the labels, while recall measures the effectiveness of the classifier
in identifying true positives, and the F1-score measures the relationship between the classifier and
the data’s labels. The macro-level versions (i.e. measures of the model fit over all six segments) are
each adjusted for multi-class classifiers using the sums of the per-class decisions (Sokolova & Lapalme,
2009). For reference, we list the expected value for each of the macro-level measures if the observations
ENVIRONMENTAL COMMUNICATION
1115
Table 3. Confusion matrix of the 36-item segment fit and the 4-item multinomial-logistic fit for the testing set (20% hold out from
surveys 1–11).
Confusion Matrix Test-set (rows: Model; columns: Observed)
Model        Alarmed  Concerned  Cautious  Disengaged  Doubtful  Dismissive
Alarmed        0.73     0.26       0.01      0.00        0.00      0.00
Concerned      0.09     0.73       0.16      0.03        0.00      0.00
Cautious       0.01     0.16       0.71      0.00        0.11      0.01
Disengaged     0.00     0.05       0.00      0.82        0.13      0.00
Doubtful       0.01     0.01       0.14      0.06        0.70      0.08
Dismissive     0.00     0.00       0.02      0.00        0.11      0.87
Table 4. Macro-performance measures for test set (20% holdout) data for the multinomial logistic regression model.
Macro-Performance Measures
                      Macro-Precision  Macro-Recall  Macro-F1-score
Model                      0.76            0.76           0.76
Expected if Random         0.17            0.17           0.17
Table 5. Performance measures for test set (20% holdout data) for each segment.
Performance Measures by Segment (validation data set)
             Precision  Recall  F1-score
Alarmed        0.73      0.79     0.76
Concerned      0.73      0.72     0.73
Cautious       0.71      0.70     0.71
Disengaged     0.82      0.76     0.79
Doubtful       0.70      0.67     0.68
Dismissive     0.87      0.89     0.88
were classified at random in Table 4 (Rickert, 2016). From Table 5 it is clear that the model performs
best in the Dismissive category, with precision, recall, and F1 -score all near 0.9, and least well in the
Doubtful segment, with all three measures around 0.7.
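The per-class measures reported in Table 5 follow directly from a confusion matrix laid out as described above (diagonal entries are true positives, off-diagonal row entries false positives, off-diagonal column entries false negatives). A small self-contained sketch with an invented 3-class count matrix, not the paper's data:

```python
# Rows = model (predicted) class, columns = observed (true) class.
conf = [
    [50,  5,  2],
    [ 4, 40,  6],
    [ 1,  3, 39],
]

def per_class_metrics(m):
    """Return (precision, recall, F1) for each class of a square count matrix."""
    n = len(m)
    out = []
    for c in range(n):
        tp = m[c][c]
        fp = sum(m[c][j] for j in range(n)) - tp  # off-diagonal row entries
        fn = sum(m[i][c] for i in range(n)) - tp  # off-diagonal column entries
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        f1 = 2 * precision * recall / (precision + recall)
        out.append((precision, recall, f1))
    return out

metrics = per_class_metrics(conf)
# Macro-averages: unweighted means of the per-class values.
macro_precision = sum(p for p, _, _ in metrics) / len(metrics)
macro_recall = sum(r for _, r, _ in metrics) / len(metrics)
```

For a balanced k-class problem, a random classifier's expected macro measures are about 1/k, which is where the 0.17 baseline for six segments in Table 4 comes from.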
The model showed a high level of accuracy2 classifying respondents in (a) the test set (Table 3),
(b) the 15-item screener survey from October 2015 (Appendix, Table A1), and (c) on out-of-sample
surveys from March 2016 (Appendix, Table A2). The model has a true positive rate of at least 69% in
each of the six segments for all three nationally representative out of sample validation data sets. The
state-wide telephone survey from Colorado fared slightly worse, with a 62% true positive rate for the
Disengaged segment but higher (68%–85%) for all other segments. The lower accuracy on this
set may be due to the survey method (phone versus online) or the limited geography of the sample
(Colorado versus national).3
We also evaluated the test-retest reliability of the model with an online study (n = 241). The observations were taken 14–21 days apart, each using our short screener for segment identification. We
evaluated performance using Pearson's product-moment correlation coefficient. The correlation
between the two segment fits is r = 0.66; the full confusion matrix is listed in the appendix.
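The test-retest correlation reported above can be computed in a few lines; a self-contained sketch of Pearson's product-moment correlation, applied here to invented paired segment codes rather than the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy test-retest data: segment codes (1-6) for the same respondents
# measured twice, 14-21 days apart (values invented for illustration).
first = [1, 2, 2, 3, 4, 5, 6, 6]
second = [1, 2, 3, 3, 4, 4, 6, 5]
r = pearson_r(first, second)
```

Note that treating ordered segment codes as numeric is an approximation; a rank-based coefficient such as Spearman's rho is a common alternative for ordinal data.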
4. Discussion
In this study we have developed and validated a new four-item screener to accurately segment the
American public into one of Global Warming’s Six Americas, originally identified in Maibach
et al. (2011) with 36 items. Using these four questions, the screener achieved a true positive rate of around
70 percent across all six segments in the hold-out data, with the highest accuracy (>0.85) found
among the Dismissive. Moreover, although it is possible to increase overall classification accuracy
1116
B. CHRYST ET AL.
by adding more relevant questions to the screener, the performance of the 4-item tool compares
favorably to the much longer 15-item “Reduced Discriminant” screener, which reported an average
classification accuracy of 83.8 percent for the overall sample, ranging from 0.60 to 0.97 across the six
segments (Maibach et al., 2011). Across all samples, the average accuracy of the 4-item screener
ranges from 0.77 (Colorado phone survey) to 0.94 (October 2015 RDM). The lowest true positive
accuracy amongst online surveys occurs for the Doubtful segment, with the lowest hit rate of 0.69
in the March 2016 data (which is consistent with the 15-item tool from Maibach et al., 2011).
The lowest hit rate for the test-retest and telephone surveys occurs in the Disengaged category, having a true positive rate of 0.62 in the Colorado survey and 0.5 in the test-retest survey, though it is
important to consider that the Disengaged segment (as fit by the first survey and 36 question screener
respectively) represented a very small proportion of both samples, 1% and 2% respectively.
The current study also complements other recent research. For example, Swim and Geiger (2017)
highlight positive correlations between the full 36-item screener and self-categorization into the Six
Americas using a single-item (Swim & Geiger, 2017). Although promising, internal reliability cannot
be estimated when researchers use single items. Nonetheless, it is worth noting that although the
Spearman’s correlations between our four-item and the 36-item screener are substantially higher
(0.89–0.92 vs. 0.67–0.82), the test-retest reliability between Swim and Geiger’s (2017) single-item
measure and the current 4-item screener are similar (0.67 vs. 0.66).
The short segmentation tool provides a new, cost-effective survey instrument for understanding
diverse public perceptions about climate change that is consistent with the literature. For example,
risk perception, worry, and personal importance have long been identified as important predictors of
climate change engagement and policy support (Ding, Maibach, Zhao, Roser-Renouf & Leiserowitz,
2011; Malka, Krosnick, & Langer, 2009; Roser-Renouf et al., 2014; Smith & Leiserowitz, 2014; van der
Linden, 2017). While social marketing tools like audience segmentation are sometimes characterized
as forms of “data mining” that lack a theoretical basis, variations in beliefs, attitudes, and behaviors
identified through the Six Americas segmentation have already been shown to reflect meaningful
differences in risk perceptions and decision making across a variety of applications (Myers et al.,
2012; Roser-Renouf et al., 2014). Indeed, two well-established theoretical dimensions underpin
the Six Americas (Roser-Renouf et al., 2014), namely: attitudinal valence (the inclination to accept
or reject climate science) and issue involvement (cognitive and affective issue engagement). The 4-item screener covers both attitudinal valence and issue involvement, which we define by the
extent to which people think about and have firm beliefs (i.e. attitudinal certainty) about the issue
of climate change (Roser-Renouf et al., 2014). For example, the SASSY screener accurately captures
the proportion of Disengaged as identified by the full screener (Figure 1). The Disengaged are particularly characterized by a frequent "don't know" response, which in our view reflects low belief certainty and low issue involvement. It is of course possible to maintain a different definition of issue
engagement. For example, some scholars have argued that many Americans might consider themselves disengaged for alternative reasons, such as prioritizing other issues, not knowing what to do
about climate change, or lacking a general sense of self-efficacy (Swim & Geiger, 2017). In other
words, our definition of the disengaged, in both the short and full screener, may underestimate
the true proportion of disengaged in the population if these broader dimensions are considered,
given that people’s self-categorization into the Six Americas may differ from the screeners’ (Swim
& Geiger, 2017). However, it is worth noting that in the current screener, those individuals who
do have opinions but aren't engaged are more likely to end up in the Cautious category. We therefore encourage further work to assess whether these definitional issues are consequential in accurately representing how people respond to information about climate change. Further research on
the Six Americas is likely to strengthen theoretical development of the literature on risk and science
communication, opinion leadership, attitude change, and social influence and persuasion.
Aside from the conceptual contribution, an interesting methodological question is how the
machine learning method adopted in the current paper compares to more traditional approaches
used in model selection, such as stepwise regression (where predictors are included or deleted one
at a time in successive order). Forward or backwards stepwise regression is often used to select the
“best” model with a set of q predictors. Although there are parallels between tree building and stepwise regression, scholars increasingly warn against the use of stepwise regression methods because
they capitalize on sampling error, which leads to overfitting and poor out-of-sample prediction accuracy (Strobl, Malley, & Tutz, 2009; Thompson, 1995). Unlike stepwise regression, the gradient
boosted machine learning approach employed in the current paper has the advantage of averaging
over a large number of (decision tree) models.4 Furthermore, in comparison, a stepwise regression
approach selected a model that retained nearly all of the 15-item screener variables. This is not
entirely surprising, as the inclusion of more variables leads to a relatively greater amount of
model fit (e.g. as measured by R2). Yet, the goal of the SASSY screener is to accurately classify respondents in new samples with the fewest questions rather than optimizing the amount of variance the variables can explain in current samples. In addition, stepwise regression methods also
suffer from order effects whereas the advantage of ensemble methods (which employ parallel tree
models) is that the order effects counterbalance, so that the overall importance ranking of the variables is much more reliable across samples (Strobl et al., 2009). In short, we note that there are a
number of important advantages to a machine learning approach, including the use of training
and (unseen) holdout datasets to evaluate predictive accuracy, a reduced risk of overfitting, greater
model stability, and the fact that it requires fewer distributional assumptions.
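The overfitting risk contrasted here can be made concrete with a toy example: a model that simply memorizes its training data fits the training sample perfectly, yet predicts unseen holdout data no better than chance when the labels carry no real signal. A minimal, self-contained sketch (synthetic data; not the paper's models):

```python
import random

random.seed(7)

# Synthetic data: binary labels unrelated to the single noisy feature,
# so there is genuinely nothing to learn.
data = [(random.random(), random.randint(0, 1)) for _ in range(200)]
train, holdout = data[:100], data[100:]

# A "memorizing" classifier: stores every training example verbatim,
# so it is perfect on the training data by construction.
memory = {x: y for x, y in train}

def predict(x):
    # Falls back to a constant guess for feature values never seen in training.
    return memory.get(x, 0)

train_acc = sum(predict(x) == y for x, y in train) / len(train)
holdout_acc = sum(predict(x) == y for x, y in holdout) / len(holdout)
# train_acc is 1.0 by construction; holdout_acc typically lands near chance (0.5).
```

Evaluating on the unseen holdout split, as the machine learning workflow above does, is exactly what exposes this kind of illusory in-sample fit.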
To further support the current research, we have developed a web application that gives users the
ability to find their Six Americas segment by taking this short screener. It also allows researchers to
input their own survey data (with proper formatting) and receive a dataset with the respondents’
segments appended as output. (The application can be accessed at http://climatecommunication.yale.edu/visualizations-data/sassy/.) We look forward to future research
and practice exploring the value of Global Warming’s Six Americas in predicting and explaining
how the public responds to and engages with the issue of climate change.
Notes
1. It should be noted that attitudes toward climate change in Western nations are structured differently than in
non-Western nations.
2. We also tested a six-item screener, incorporating introductory items: “Do you believe that global warming is
happening” and “assuming global warming is happening, do you believe it is human-caused?” The accuracy
of this model is similar and the relevant statistics can be found in the Appendix Tables 11 through 13.
3. Phone surveys are less likely to elicit “Don’t know” responses, which is an important identifier for the Disengaged segment.
4. An all-possible-subset regression using Dominance analysis (Budescu, 1993) yielded a similar 4-item ranking,
but this approach suffers from some of the same shortcomings (Matsuki, Kuperman, & Van Dyke, 2016).
Disclosure statement
No potential conflict of interest was reported by the authors.
References
Akerlof, K., Bruff, G., & Witte, J. (2011). Audience segmentation as a tool for communicating climate change. Park Science,
28(1), 56–64.
Arbuckle, J., Hobbs, J., Loy, A., Morton, L. W., Prokopy, L., & Tyndall, J. (2014). Understanding corn belt farmer perspectives on climate change to inform engagement strategies for adaptation and mitigation. Journal of Soil and
Water Conservation, 69(6), 505–516.
Ashworth, P., Jeanneret, T., Gardner, J., & Shaw, H. (2011). Communication and climate change: What the Australian
public thinks (Report No. EP112769). Canberra: CSIRO Publishing.
BBC Media Action (2013). Climate asia methodology series. Regional segment profiles. London: BBC Media Action.
Bekkar, M., Djemaa, H. K., & Alitouche, T. A. (2013). Evaluation measures for models assessment over imbalanced
datasets. Journal of Information Engineering and Applications, 3(10), 27–38.
Bowers, A. W., Monroe, M. C., & Adams, D. C. (2016). Climate change communication insights from cooperative
extension professionals in the US southern states: Finding common ground. Environmental Communication, 10
(5), 656–670.
Budescu, D. V. (1993). Dominance analysis: A new approach to the problem of relative importance of predictors in
multiple regression. Psychological Bulletin, 114(3), 542.
Cook, J., Oreskes, N., Doran, P. T., Anderegg, W. R., Verheggen, B., & Maibach, E. W., et al. (2016). Consensus on
consensus: A synthesis of consensus estimates on human-caused global warming. Environmental Research Letters,
11(4), 048002.
Costello, C. (2014). Why are we still debating climate change? CNN.com. Retrieved from http://www.cnn.com/2014/
02/24/opinion/costello-debate-climate-change/.
Cranmer, S., Gill, J., Jackson, N., Murr, A., & Armstrong, D. (2016). hot.deck: Multiple hot-deck imputation
[Computer software manual]. Retrieved from https://CRAN.R-project.org/package=hot.deck (R package version
1.1).
Cranmer, S. J., & Gill, J. (2013). We have to be discrete about this: A non-parametric imputation technique for missing
categorical data. British Journal of Political Science, 43(02), 425–449.
Detenber, B., Rosenthal, S., Liao, Y., & Ho, S. S. (2016). Climate and sustainability—audience segmentation for campaign design: Addressing climate change in Singapore. International Journal of Communication, 10, 23.
Ding, D., Maibach, E. W., Zhao, X., Roser-Renouf, C., & Leiserowitz, A. (2011). Support for climate policy and societal
action are linked to perceptions about scientific agreement. Nature Climate Change, 1(9), 462–466.
Doherty, K. L., & Webler, T. N. (2016). Social norms and efficacy beliefs drive the Alarmed segment's public-sphere climate actions. Nature Climate Change, 6(9), 879–884.
Fawcett, T. (2006). An introduction to ROC analysis. Pattern Recognition Letters, 27(8), 861–874.
Fisher, D. (2014). Global warming theories + political reality = tesla subsidies. Forbes.com. Retrieved from https://
www.forbes.com/sites/danielfisher/2014/04/05/global-warming-theories-political-reality-tesla-subsidies/
77f365976be3.
Flora, J. A., Saphir, M., Lappé, M., Roser-Renouf, C., Maibach, E. W., & A. A. Leiserowitz (2014). Evaluation of a
national high school entertainment education program: The alliance for climate education. Climatic Change, 127
(3–4), 419–434.
Friedman, J. H. (2001). Greedy function approximation: A gradient boosting machine. Annals of Statistics, 29(5),
1189–1232.
Harris, P., Lock, A., Phillips, J. M., Reynolds, T. J., & Reynolds, K. (2010). Decision-based voter segmentation: An
application for campaign message development. European Journal of Marketing, 44(3), 310–330.
Hine, D. W., Reser, J. P., Morrison, M., Phillips, W. J., Nunn, P., & Cooksey, R. (2014). Audience segmentation and
climate change communication: Conceptual and methodological considerations. Wiley Interdisciplinary Reviews:
Climate Change, 5(4), 441–459.
Kelly, L. A. D., Luebke, J. F., Clayton, S., Saunders, C. D., Matiasek, J., & Grajal, A. (2014). Climate change attitudes of
zoo and aquarium visitors: Implications for climate literacy education. Journal of Geoscience Education, 62(3), 502–
510.
Kleinbaum, D. G., & Klein, M. (2010). Introduction to logistic regression. In Logistic regression (pp. 1–39). Springer.
Kuhn, M. (2008). Caret package. Journal of Statistical Software, 28(5), 1–26.
Leiserowitz, A. A. (2005). American risk perceptions: Is climate change dangerous? Risk Analysis, 25(6), 1433–1442.
Leiserowitz, A. A., Feinberg, G., Howe, P., & Rosenthal, S. (2013). Climate change in the Coloradan mind: July 2013.
Leiserowitz, A. A., Maibach, E., Roser-Renouf, C., Feinberg, G., & Rosenthal, S. (2016). Climate change in the American
mind: March, 2016. Yale University and George Mason University. New Haven, CT: Yale Program on Climate
Change Communication.
Leiserowitz, A. A., Thaker, J., Feinberg, G., & Cooper, D. (2013). Global warming's six Indias. New Haven, CT: Yale
University.
Magidson, J., & Vermunt, J. K. (2002). Latent class modeling as a probabilistic extension of k-means clustering. Quirk’s
Marketing Research Review, 20, 77–80.
Maibach, E. W., Leiserowitz, A., Roser-Renouf, C., & Mertz, C. (2011). Identifying like-minded audiences for global
warming public engagement campaigns: An audience segmentation analysis and tool development. PLoS ONE, 6
(3), e17571.
Maibach, E. W., Weber, D., Massett, H., Hancock, G. R., & Price, S. (2006). Understanding consumers’ health information preferences development and validation of a brief screening instrument. Journal of Health Communication,
11(8), 717–736.
Malka, A., Krosnick, J. A., & Langer, G. (2009). The association of knowledge with concern about global warming:
Trusted information sources shape public thinking. Risk Analysis, 29(5), 633–647.
Matsuki, K., Kuperman, V., & Van Dyke, J. A. (2016). The random forests statistical technique: An examination of its
value for the study of reading. Scientific Studies of Reading, 20(1), 20–33.
Metag, J., Füchslin, T., & Schäfer, M. S. (2015). Global warming's five Germanys: A typology of Germans' views on climate change and patterns of media use and information. Public Understanding of Science, doi:10.1177/0963662515592558.
Michie, D., Spiegelhalter, D. J., & Taylor, C. C. (1994). Machine learning, neural and statistical classification. Upper
Saddle River, NJ: Prentice Hall.
Morrison, M., Duncan, R., Sherley, C., & Parton, K. (2013). A comparison between attitudes to climate change in Australia and the United States. Australasian Journal of Environmental Management, 20(2), 87–100.
Myers, T. A. (2011). Goodbye, listwise deletion: Presenting hot deck imputation as an easy and effective tool for handling missing data. Communication Methods and Measures, 5(4), 297–310.
Myers, T. A., Nisbet, M. C., Maibach, E. W., & Leiserowitz, A. A. (2012). A public health frame arouses hopeful
emotions about climate change. Climatic Change, 113(3–4), 1105–1112.
Natekin, A., & Knoll, A. (2013). Gradient boosting machines, a tutorial. Frontiers in Neurorobotics, 7, 21.
Noar, S. M., Benac, C. N., & Harris, M. S. (2007). Does tailoring matter? meta-analytic review of tailored print health
behavior change interventions. Psychological Bulletin, 133(4), 673.
Pachauri, R. K., Allen, M. R., Barros, V. R., Broome, J., Cramer, W., & Christ, R., et al. (2014). Climate change 2014:
Synthesis report. Contribution of Working Groups I, II and III to the fifth assessment report of the Intergovernmental
Panel on Climate Change. IPCC.
Pentreath, N. (2015). Machine learning with spark. Birmingham: Packt Publishing Ltd.
Rickert, J. (2016). Computing classification evaluation metrics in R (Blog No. March 11). Retrieved from http://blog.
revolutionanalytics.com/2016/03/com_class_eval_metrics_r.html.
Ridgeway, G. (2007). Generalized boosted models: A guide to the gbm package. Update, 1(1), 2007.
Roser-Renouf, C., Stenhouse, N., Rolfe-Redding, J., Maibach, E. W., & Leiserowitz, A. (2014). Engaging diverse audiences with climate change: Message strategies for global warming’s six Americas. In A. Hansen & R. Cox (Eds.), The
Routledge handbook of environment and communication (pp. 368–386). Abingdon: Routledge.
Schweizer, S., Davis, S., & Thompson, J. L. (2013). Changing the conversation about climate change: A theoretical framework for place-based climate change engagement. Environmental Communication: A Journal of Nature and
Culture, 7(1), 42–62.
Slater, M. D. (1996). Theory and method in health audience segmentation. Journal of Health Communication, 1(3),
267–284.
Smith, N., & Leiserowitz, A. (2014). The role of emotion in global warming policy support and opposition. Risk
Analysis, 34(5), 937–948.
Sokolova, M., & Lapalme, G. (2009). A systematic analysis of performance measures for classification tasks.
Information Processing & Management, 45(4), 427–437.
Strobl, C., Malley, J., & Tutz, G. (2009). An introduction to recursive partitioning: Rationale, application, and characteristics of classification and regression trees, bagging, and random forests. Psychological Methods, 14(4), 323.
Swim, J. K., & Geiger, N. (2017). From alarmed to dismissive of climate change: A single item assessment of individual
differences in concern and issue involvement. Environmental Communication, 11(4), 568–586.
Thompson, B. (1995). Stepwise regression and stepwise discriminant analysis need not apply here: A guidelines editorial.
Thousand Oaks, CA: Sage Publications.
van der Linden, S. (2017). Determinants and measurement of climate change risk perception, worry, and concern. In
M. Nisbet (Ed.), Oxford encyclopedia of climate change communication (Vol. 2, pp. 369–401). Oxford: Oxford
University Press.
van der Linden, S., Maibach, E., & Leiserowitz, A. (2015). Improving public engagement with climate change: Five ‘best
practice’ insights from psychological science. Perspectives on Psychological Science, 10(6), 758–763.
Venables, W. N., & Ripley, B. D. (2013). Modern applied statistics with S-PLUS. New York, NY: Springer Science &
Business Media.
Vermunt, J. K., & Magidson, J. (2002). Latent class cluster analysis. Applied Latent Class Analysis, 11, 89–106.
Appendix 1. Tables
Table A1. Confusion matrix of the 15-item Reduced Discriminant Model (RDM) segment fit and the 4-item multinomial-logistic SASSY fit for the October 2015 survey.
Confusion Matrix October 2015 (RDM) (rows: Model; columns: Observed)
Model        Alarmed  Concerned  Cautious  Disengaged  Doubtful  Dismissive
Alarmed        0.81     0.19       0.00      0.00        0.00      0.00
Concerned      0.10     0.76       0.13      0.01        0.00      0.00
Cautious       0.00     0.09       0.82      0.00        0.08      0.00
Disengaged     0.00     0.04       0.02      0.76        0.18      0.00
Doubtful       0.00     0.01       0.14      0.04        0.78      0.04
Dismissive     0.00     0.00       0.00      0.00        0.00      1.00
Table A2. Confusion matrix of the 36-item segment fit and the 4-item SASSY fit for the March 2016 survey.
Confusion Matrix March 2016 (rows: Model; columns: Observed)
Model        Alarmed  Concerned  Cautious  Disengaged  Doubtful  Dismissive
Alarmed        0.70     0.30       0.00      0.00        0.00      0.00
Concerned      0.11     0.76       0.12      0.00        0.00      0.00
Cautious       0.01     0.14       0.74      0.00        0.11      0.00
Disengaged     0.02     0.08       0.02      0.76        0.13      0.00
Doubtful       0.00     0.01       0.20      0.04        0.69      0.07
Dismissive     0.00     0.00       0.01      0.00        0.08      0.91
Table A3. Confusion matrix of the 15-item RDM segment fit and the 4-item SASSY fit for the November 2016 survey.
Confusion Matrix November 2016 (rows: Model; columns: Observed)
Model        Alarmed  Concerned  Cautious  Disengaged  Doubtful  Dismissive
Alarmed        0.80     0.20       0.00      0.00        0.00      0.00
Concerned      0.07     0.78       0.13      0.01        0.00      0.00
Cautious       0.01     0.10       0.78      0.00        0.12      0.00
Disengaged     0.02     0.05       0.02      0.73        0.20      0.00
Doubtful       0.00     0.00       0.08      0.07        0.80      0.05
Dismissive     0.00     0.00       0.01      0.00        0.00      0.99
Appendix 2. Six question model
Table A4. Confusion matrix of the 36-item segment fit and the 4-item SASSY fit for a survey completed in Colorado only.
Confusion Matrix Colorado (rows: Model; columns: Observed)
Model        Alarmed  Concerned  Cautious  Disengaged  Doubtful  Dismissive
Alarmed        0.73     0.26       0.01      0.00        0.00      0.00
Concerned      0.12     0.71       0.14      0.01        0.00      0.01
Cautious       0.00     0.20       0.68      0.01        0.11      0.01
Disengaged     0.00     0.12       0.12      0.62        0.12      0.00
Doubtful       0.00     0.00       0.25      0.03        0.69      0.03
Dismissive     0.00     0.01       0.02      0.00        0.12      0.85
Table A5. Confusion matrix of the first 4-item segment SASSY fit and the second 4-item segment SASSY fit for the test-retest data.
Confusion Matrix Test-Retest (rows: Second Fit; columns: First Fit)
Second Fit   Alarmed  Concerned  Cautious  Disengaged  Doubtful  Dismissive
Alarmed        0.76     0.04       0.19      0.01        0.00      0.00
Concerned      0.06     0.59       0.16      0.04        0.04      0.10
Cautious       0.12     0.15       0.69      0.00        0.02      0.02
Disengaged     0.00     0.00       0.00      0.50        0.00      0.50
Doubtful       0.00     0.25       0.00      0.00        0.67      0.08
Dismissive     0.00     0.25       0.00      0.00        0.30      0.45
Table A6. Confusion matrix of the 36-item segment fit and the 4-item SASSY fit for the 20% hold out data set.
Confusion Matrix Test Set; Six Question Model (rows: Model; columns: Observed)
Model        Alarmed  Concerned  Cautious  Disengaged  Doubtful  Dismissive
Alarmed        0.72     0.27       0.00      0.00        0.00      0.00
Concerned      0.08     0.80       0.11      0.02        0.00      0.00
Cautious       0.00     0.14       0.77      0.01        0.08      0.00
Disengaged     0.01     0.05       0.00      0.83        0.11      0.00
Doubtful       0.00     0.00       0.10      0.05        0.76      0.09
Dismissive     0.00     0.00       0.01      0.00        0.11      0.89
Table A7. Confusion matrix of the 36-item segment fit and the 4-item SASSY fit for the March 2016 survey.
Confusion Matrix March 2016 Survey; Six Question Model (rows: Model; columns: Observed)
Model        Alarmed  Concerned  Cautious  Disengaged  Doubtful  Dismissive
Alarmed        0.69     0.30       0.00      0.00        0.00      0.00
Concerned      0.10     0.81       0.09      0.00        0.00      0.00
Cautious       0.01     0.14       0.78      0.00        0.06      0.00
Disengaged     0.00     0.08       0.02      0.81        0.09      0.00
Doubtful       0.00     0.00       0.14      0.03        0.75      0.08
Dismissive     0.00     0.00       0.01      0.00        0.07      0.92
Table A8. Coefficients are listed as odds-ratios.
[Table of odds-ratio coefficients for the six-question multinomial-logistic model; columns give estimates for the Concerned, Cautious, Disengaged, Doubtful, and Dismissive segments relative to the Alarmed reference category, across the response options for each of the six items: "How much do you think global warming will harm future generations of people?", "How important is the issue of global warming to you personally?", "How worried are you about global warming?", "How much do you think global warming will harm you personally?", "Assuming global warming is happening, do you think it is caused by …", and "Do you think global warming is happening?"]
Notes: Reference Category: Alarmed. Reference response by question: Don't know; Not at all important; Not at all worried; Don't know. * Significant at the 0.05 level. ** Significant at the 0.01 level. *** Significant at the 0.001 level.
UC Berkeley
CUDARE Working Papers
Title: How California Came to Pass AB 32, the Global Warming Solutions Act of 2006
Permalink: https://escholarship.org/uc/item/1vb0j4d6
Author: Hanemann, W. Michael
Publication Date: 2007-03-01
DEPARTMENT OF AGRICULTURAL AND RESOURCE ECONOMICS AND POLICY
DIVISION OF AGRICULTURE AND NATURAL RESOURCES
UNIVERSITY OF CALIFORNIA AT BERKELEY
WORKING PAPER NO. 1040
How California Came to Pass AB 32,
the Global Warming Solutions Act of 2006
by
W. M. Hanemann
California Agricultural Experiment Station
Giannini Foundation of Agricultural Economics
March 2007
How California Came to Pass AB 32, the Global Warming Solutions
Act of 2006
W. Michael Hanemann
Department of Agricultural & Resource Economics, and
California Climate Change Center
Goldman School of Public Policy
University of California, Berkeley
March, 2007
1. Introduction
On August 31, 2006, the last day of the legislative session, the California Legislature passed
AB 32, the California Global Warming Solutions Act of 2006. AB 32 requires that California's
statewide greenhouse gas (GHG) emissions be reduced to the 1990 level by 2020. Based on the
current understanding, this is a reduction of about 25%.1
AB 32 is noteworthy because it legislates a more comprehensive and stringent control on GHG
emissions than exists in any other state. Eleven other states have set GHG emission reduction
targets, nine of them with more stringent targets for 2020 than that set by Governor
Schwarzenegger. But, in the other states, the target is either not legally binding or it has a
narrower focus and is less stringent. The only existing binding cap on GHG emissions outside
California is the Regional Greenhouse Gas Initiative (RGGI), a coalition of seven east coast
states (CT, DE, ME, NH, NJ, NY and VT). But, RGGI focuses solely on emissions from electric
power generation, which account for only about 30% of those states’ total GHG emissions.
Moreover, the RGGI cap is less stringent than AB 32: under RGGI, power plant emissions will be
capped at approximately current (2006) levels in 2009, and will have to be reduced 10% below
the 2006 level by 2019.
The purpose of this paper is to explain how AB 32 came to be enacted, focusing
particular attention on the political and legal background in California.
The passage of AB 32 came about partly through an accidental concatenation of
circumstances and partly through the momentum created by previous policy actions in
California. Chance events explain why the legislation happened to occur at this particular time,
but the context for the legislation and the reason why it was widely supported come from
California’s previous experience of using state legislation to regulate automobile air pollution
and promote energy efficiency. One needs to know this regulatory history in order to understand
how AB 32 came to be passed. This history is summarized in Section 2. Section 3 covers the
period between 1988 and 2002 when California took the first actions to control its GHG
emissions. Section 4 covers the key period, roughly between the summers of 2004 and 2005, when California became committed to reducing its GHG emissions. Section 5 describes the
events between the summers of 2005 and 2006 when this commitment was turned into state law.
Section 6 offers some concluding observations.
2. Californian Exceptionalism in Air Pollution and Energy Efficiency Regulation
California first came to take an interest in air pollution because of smog in Los Angeles
in the early 1940s. The population of Los Angeles had grown from 170,000 in 1900 to 2.8
million in 1940, and over 1.2 million motor vehicles were registered in the county, one vehicle
for every 2.3 people. In 1941, Los Angeles first experienced a heavy, acrid haze.
1. On June 1, 2005, as explained below, Governor Schwarzenegger had signed Executive Order S-3-05 setting statewide GHG emission reduction targets for 2010, 2020, and 2050. AB 32 takes the Governor's 2020 target and makes it legally binding, with a specified procedure and timetable for implementation.
This recurred in 1943, reducing visibility to three city blocks. In 1945, the city passed an ordinance setting limits
on industrial smoke emissions (which were thought to be the cause) and established an air
pollution control unit within the municipal Health Department. In 1947, the state passed a law
authorizing the creation of county-level Air Pollution Control Districts (APCD) and the Los
Angeles County APCD was formed, the first of its kind in the nation.
In 1950, research by Professor A. J. Haagen-Smit at Cal Tech finally identified the problem as
a photochemical reaction converting certain pollutants – primarily from refineries and motor
vehicles – into smog. By 1955, after some controversy fueled by industry opponents, this finding
had been confirmed and motor vehicle emissions were established as the primary factor. That
year, LAAPCD formed a Motor Vehicle Pollution Control Laboratory. 2 In 1959, California
passed a law requiring the State Department of Public Health to establish air quality standards
and controls for motor vehicles. In 1960, the Motor Vehicle Pollution Control Board was
established to test and certify devices for installation on cars sold in California. In 1961 the
Department of Public Health mandated positive crankcase ventilation on new vehicles sold in
California starting in 1963, the first emission controls in the nation. 3 In 1966, the Motor Vehicle
Board adopted tailpipe emission standards for hydrocarbons and carbon monoxide, and the
California Highway Patrol began random inspections of vehicle smog control devices. In 1967, a
unified regulatory agency, the California Air Resources Board, was created combining the Motor
Vehicle Board along with units from the State Department of Health; the Board’s founding chair
was Professor Haagen-Smit.
1967 was also the year when the federal government finally began to deal with emissions from motor
vehicles. In 1965, faced with the prospect of state emissions controls in Pennsylvania and New
York, in addition to California, the automobile industry agreed to support national standards for
automobile emissions. That year, Congress passed the Motor Vehicles Air Pollution Control Act
which called on the Department of Health, Education and Welfare (HEW) to develop emissions
standards for new vehicles, taking into consideration the technological feasibility and economic
cost of compliance. In 1967, HEW responded with a proposal for a Clean Air law covering both
motor vehicles and stationary sources. A major issue was whether California should be allowed to
impose controls more stringent than the national standard, as California’s representatives urged.
After a fierce battle, California got its way: the final legislation granted California alone a special
waiver in deference to its “unique problems and pioneering efforts.” The Administrator overseeing implementation of air pollution standards must waive federal preemption upon application by California, provided that the application is not arbitrary and capricious, the state standard is at least as stringent as the national standard, and the standard is needed to meet California’s “compelling and extraordinary conditions.” The 1977 Amendments to the Clean Air Act
reinforced California’s independence by creating a similar waiver from federal emission
standards for non-road vehicles, and by permitting California to prescribe fuel or fuel additive
requirements without needing EPA approval. The Amendments also established a “piggyback”
provision allowing other states, if they so chose, to adopt the California standards once these
have received the formal EPA approval.
[Footnote 2: A highly informative history of the discovery and measurement of ozone is provided by Farrell (2005).]
[Footnote 3: The automobile industry voluntarily decided to implement the crankcase controls nationally.]
Congress’ willingness to grant California this degree of latitude despite fierce industry
opposition reflects its appreciation of California’s role as “a kind of laboratory for innovation” in
emission control technology and regulation. 4 Since 1967, California has made use of its federal
exemption on at least 14 occasions to pioneer innovations in the regulation of motor vehicle
emissions, including the first introduction of NOx standards for cars and light trucks (1971),
heavy-duty diesel truck standards (1973), two-way catalytic converters (1975), “unleaded”
gasoline (1976), low-emission vehicles (LEV) program (1994 and 1998), zero emission vehicles
(1990) and evaporative emissions standards and test procedures (1999). The LEV program is the
primary California emissions standard adopted by other states. It originated from the California
Clean Air Act of 1988 which instructed CARB to “achieve the maximum degree of emission
reduction possible from vehicular and other mobile sources.” In response, CARB approved an ambitious new program in 1990 that would substantially reduce emissions from light- and
medium-duty vehicles starting in model-year 1994. Rather than requiring every vehicle to meet
the same emission standard, the LEV program introduced a fleet-based approach.
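The fleet-based approach can be illustrated with a toy calculation (the sales figures, emission rates, and target below are hypothetical, not the actual LEV categories or limits): rather than every model meeting one limit, the sales-weighted average emissions of a manufacturer's fleet must come in under a standard.

```python
# Toy illustration of fleet-average compliance (hypothetical numbers,
# not the actual LEV emission categories or standards).

def fleet_average(sales_and_emissions):
    """Sales-weighted average emission rate (g/mi) for a fleet."""
    total_sales = sum(sales for sales, _ in sales_and_emissions)
    weighted = sum(sales * rate for sales, rate in sales_and_emissions)
    return weighted / total_sales

# (units sold, emission rate in g/mi) for three hypothetical models
fleet = [(50_000, 0.25), (30_000, 0.10), (20_000, 0.04)]

avg = fleet_average(fleet)   # sales-weighted mean, about 0.163 g/mi
target = 0.17                # hypothetical fleet-average standard
print(f"fleet average = {avg:.3f} g/mi, compliant = {avg <= target}")
```

A cleaner model that exceeds the target can offset a dirtier one, which is the flexibility the fleet-based design provides.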
The key to California’s independent role in emission regulation is the California Air Resources
Board (CARB). Although the eleven members of the Board serve at the pleasure of the
Governor, the Board has a reputation for political independence, and its staff have a reputation
for scientific and technical competence. CARB sponsors peer-reviewed research and its
regulatory actions are typically preceded by a carefully organized program of scientific and
engineering research as well as a transparent public process. It is widely regarded as a model of
an aggressive, independent, science-based regulatory agency.
California’s involvement in energy efficiency regulation began in the early 1970s. At
that time, the state’s electric utilities were projecting an unending growth in demand; to meet
this, they were planning to construct a large number of new nuclear power plants. This was
opposed by environmentalists who felt that the demand forecasts were overblown, conservation
was deliberately being ignored, and nuclear power was both more expensive and less
environmentally benign than the utilities had represented. They felt, also, that the California
Public Utility Commission (CPUC), which focused narrowly on rate regulation, was not doing
an adequate job of dealing with the larger issues of energy supply and demand in California. To
remedy this, the Democrat-controlled legislature passed a bill in 1973 to create an Energy
Commission that would forecast energy demand, assess efforts to reduce this demand through
conservation and efficiency, and provide a consolidated approval process for the siting of new
power plants. The legislation was vetoed by then Governor Reagan. Within a few months,
however, the OPEC oil embargo occurred, creating an energy shortage and raising energy prices.
At Governor Reagan’s request, a nearly identical bill to the one he had vetoed was passed by the
legislature, the Warren-Alquist Act, and this time he signed it into law in May 1974.
[Footnote 4: The quotation is from the DC Circuit Court’s ruling in Motor and Equipment Manufacturers Association, Inc. v. EPA, 627 F.2d (1979).]
The resulting California Energy Commission (CEC) has four main mandates: 5 (1)
Facility siting and environmental protection: CEC has exclusive power to certify thermal power
plants of 50 MW or larger to meet statewide energy needs; (2) Energy forecasting and planning:
CEC is required to forecast future statewide energy needs, evaluate supply options for meeting
those needs, and more generally develop and implement an energy policy for California; (3)
Energy efficiency and conservation: CEC is empowered to establish building and appliance
efficiency standards, and is required to promote conservation through research and public
education programs and grant and loan programs; and (4) Technology development: CEC funds
research, development and demonstration programs for technologies using renewable,
alternative, and cleaner energy, including transportation fuels. 6
A separate development occurring at that time, stimulated by the same forces that had led to
the formation of the CEC, was the rise of energy efficiency as a subject of academic and
scientific study. In 1971, UC Berkeley had created an interdisciplinary graduate program, the
Energy and Resources Group (ERG). Two years later, the oil embargo stimulated a number of
physicists to start thinking about the physics of energy use and energy efficiency. The American
Physical Society sponsored a study of energy efficiency for the summer of 1974, which led to the
production of a landmark text, Efficient Use of Energy (1975). Several of the authors were at
Berkeley, in the Physics Department, ERG, or the adjacent Lawrence Berkeley National
Laboratory (LBNL); one of the leaders, Art Rosenfeld, was a physics professor at Berkeley working
in the particle physics program at LBNL. He decided to sponsor a summer study in 1975 at the
Berkeley School of Architecture on energy-efficient buildings covering lighting, windows, and
heating, ventilation and air-conditioning equipment. 7 Meanwhile, one of CEC’s first actions in
1975 was to draft building energy-efficiency performance standards; however, the draft
regulations were based on a crude and oversimplified model of heat flow within a building. The
two groups decided to join forces. Rosenfeld and his colleagues used their research to develop an
improved software program which CEC took as the basis for reformulating what became known
as “Title 24” building standards issued in 1977. This established a symbiotic relationship
between CEC and the research community which has flourished for 30 years. CEC funds
research by academic scientists and engineers which establishes a rigorous foundation for energy
efficiency regulations that CEC subsequently promulgates.
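The kind of calculation a building energy model refines can be sketched in miniature (the U-values, areas, and temperature difference below are made-up illustrations; the actual Title 24 software performs far more detailed simulation):

```python
# Toy steady-state envelope heat-loss estimate, the sort of physics a
# building energy model starts from (hypothetical inputs; real Title 24
# modeling uses detailed, e.g. hourly, simulation).

def envelope_heat_loss(components, delta_t):
    """Total conductive loss in watts: sum of U * A over components, times dT."""
    return sum(u * area for u, area in components) * delta_t

# (U-value in W/m^2K, area in m^2) for walls, windows, roof (made-up numbers)
house = [(0.5, 120.0), (2.8, 20.0), (0.3, 90.0)]

loss = envelope_heat_loss(house, delta_t=15.0)  # indoor-outdoor difference, K
print(f"steady-state loss: {loss:.0f} W")
```

The draft CEC regulations were weak precisely because this kind of model was too crude; the Berkeley group's contribution was a more faithful simulation of heat flow.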
In the case of appliances, how CEC came to issue the first energy efficiency standards is
described by Rosenfeld (1999): “In 1976 Governor Jerry Brown was looking for a way to
[Footnote 5: The CEC’s authority covers not just investor-owned but also municipal utilities within California.]
[Footnote 6: A somewhat similar agency was created in New York in 1975: the New York State Energy Research and Development Authority (NYSERDA) has mandates corresponding to the CEC’s mandates (3) and (4), but not to (1) or (2).]
[Footnote 7: This account is based on Rosenfeld (1999). What became the Center for Building Science and the Energy & Environment Division at LBL, and the UC California Institute for Energy Efficiency, are outgrowths of this research effort. Rosenfeld co-founded the California Institute for Energy Efficiency and also the American Council for an Energy Efficient Economy, and he founded and directed the Center for Building Science until 1994. In 2000 he was appointed one of the five California Energy Commissioners by Governor Davis, and his appointment was renewed for another five years by Governor Schwarzenegger in 2005.]
disapprove Sundesert, the only still pending application for a 1-GW nuclear power plant. The
Title 24 standard for buildings was an accepted idea, but somehow standards for appliances
seemed more like a federal responsibility, so appliance standards were still controversial. David
Goldstein and I [Rosenfeld] then discovered that there was absolutely no correlation between
refrigerator retail price and efficiency, although we controlled for every feature we could
imagine. … I pointed out to Governor Brown that California refrigerators were already using the
output of 5 Sundeserts, and that even minimal standards would avoid the need for 1.5 Sundeserts,
at no additional cost. Brown promptly called Energy Commissioner Gene Varanini, who
corroborated our claim. After that, standards for new refrigerators and freezers were developed
quickly and put into effect in 1977.” Over the next seven years, CEC followed up with appliance
efficiency standards for fluorescent lamp ballasts, various air conditioning products, heat pumps,
furnaces, boilers, wall heaters, showerheads, and faucets.
Throughout this period, the federal government was relatively inactive with regard to
appliance standards. 8 The initial federal response to the oil embargo had been to call for
voluntary targets for appliance efficiency. This was soon overtaken by the mandatory appliance
efficiency standards being imposed by California, New York and some other states. The Carter
Administration subsequently proposed mandatory federal standards and Congress ultimately
agreed; the 1978 National Energy Conservation and Policy Act directed the US Department of
Energy (DOE) to formulate mandatory efficiency standards for appliances. However, this was
opposed by the Reagan Administration which instead proposed a “no standard” standard. The
Reagan standard was overturned by the federal courts in 1985. By 1986, six states had adopted
standards on one or more products and appliance manufacturers were coming around to the
notion that a pre-emptive federal standard would better serve their interests than the expanding
patchwork of individual state standards. A compromise was reached, embodied in the 1987
National Appliance Energy Conservation Act (NAECA), whereby Congress would adopt
specific standards on many major appliances, with the provision that these federal standards
would then preempt any state standards. However, the states are left free to adopt efficiency
standards for products not covered by federal standards. Subsequent moves by states to adopt
standards for products not covered by NAECA led to the passage of federal legislation in 1988
establishing efficiency standards for fluorescent lamp ballasts, and in 1992 to standards on a
variety of lamps, electric motors, and commercial heating and cooling products.
The 1988 federal standard for fluorescent lamp ballasts merely replicated the standard that
California had set in 1978, and something similar was true of most of the other federal appliance
efficiency standards. Under NAECA, DOE is required to periodically review and revise its
efficiency standards, but this has generally occurred at a rather sluggish pace. Meanwhile, states
including California have continued to innovate with efficiency standards for products not
subject to DOE standards. In December 2004, for example, CEC set new energy efficiency
standards for 17 different products ranging from light bulbs to swimming pool pumps to small
power supplies for electronics; it is estimated that the new standards will save approximately 100
[Footnote 8: The account that follows is drawn from Nadel (2002).]
MW of new generating capacity in California every year. 9 More generally, the CEC’s activism
in promoting energy conservation has had a significant impact on electricity use in California
over the past three decades. Since 1975, electricity use per capita in California has not increased
at all, whereas it has increased nationally by about fifty percent. This is widely seen as a
consequence of CEC’s effectiveness in regulating energy efficiency.
California’s willingness to act unilaterally in regulating motor vehicle emissions and energy
efficiency, and the contrast between what it sees as the regulatory diligence and effectiveness of
CARB and CEC compared to sluggish national efforts by EPA and DOE, are a crucial backdrop
to the recent legislation on global warming.
3. Climate Change Policy Comes to California
Climate change first surfaced as a policy issue in California in 1988, when the legislature
passed AB 4420, introduced by Assemblyman Byron Sher. This called for the compilation of an
inventory of GHG emissions from all sources in California. It also requested an assessment of
how global warming trends may affect the state’s energy supply and demand, economy,
environment, agriculture and water supplies. In addition to the assessment, it requested
recommendations for policies to avoid, reduce, and address the impacts. The CEC was
designated as the lead state agency for performing these tasks. In response, CEC prepared a
series of reports, including a report on the impacts of global warming on California (CEC, 1989),
1988 Inventory of California Greenhouse Gas Emissions (CEC, 1990) and Global Climate
Change: Potential Impacts and Policy Recommendations (CEC, 1991). Recommendations
included: promoting renewable electricity generation and biomass-based fuels, promoting energy
efficiency, reducing vehicle miles traveled and expanding land use planning to manage
transportation demand, and improving forestry, solid waste and recycling, and livestock
management. In 1997, with funding from US EPA, CEC updated both the emissions inventory
and the report on emission reduction strategies for California. (CEC, 1998).
At this time California was restructuring and partially deregulating its electricity industry.
One of the Legislature’s concerns was that deregulation would undercut the strong energy-efficiency programs that investor-owned utilities had conducted at the behest of the CPUC. To
counteract this, the restructuring law, AB 1890, included a proviso for the assessment of a
“public good” surcharge on electric utility bills in California for the specific purpose of funding
energy efficiency programs (to be administered by CPUC), renewable energy (to be administered
[Footnote 9: CEC press release, 12/15/04. In addition, CEC had issued new building efficiency standards in 2001 and 2003, and other appliance efficiency standards in 2002. The following is an example of how things work. DOE issues energy efficiency standards for consumer products, but not for commercial products, and not water efficiency standards. In 2001, DOE issued new energy efficiency standards for residential clothes washers. In 2002, CEC adopted more stringent energy and water efficiency standards for commercial clothes washers (not federally regulated). Later that year, the legislature passed AB 1561 requiring CEC to establish water efficiency standards for residential clothes washers at least equal to those for commercial clothes washers. CEC adopted the water efficiency standards for residential clothes washers in September 2004.]
by CEC), and “public interest energy research” (PIER) to be administered by a new division
within the CEC. The PIER fund, about $62 million/yr, is spent on six program areas, one of
which — energy-related environmental research — includes climate change. In 1999, CEC’s
PIER program sponsored a workshop on Global Climate Change Science. The following year,
PIER contracted with the Electric Power Research Institute (EPRI) to conduct a coordinated
suite of studies on the potential impacts of climate change on California, including individual
studies of impacts on terrestrial vegetation, ecosystems, water resources, agriculture, energy,
timber, coastal resources and human health (Wilson et al., 2003).
Meanwhile in 1998, then Senator Byron Sher introduced a bill, SB 1941, which required
the CEC to establish an inventory of GHG emissions in California; to provide information to
state, regional and local agencies on cost-effective and technically feasible methods for reducing
those emissions; and to convene an interagency task force to ensure that policies affecting
climate change are coordinated at the state level. The bill was passed by the legislature but was
vetoed by Governor Wilson as unnecessary because the CEC had recently updated the inventory
of GHG emissions in California. In addition, the Governor found: “[T]he bill’s requirement that
the CEC provide information to state, regional, and local agencies on cost-effective and
technologically feasible options to reduce greenhouse gases is infeasible. Because uncertainty
exists about the effects that reducing greenhouse gas emissions in California would have on
global warming trends, there is no way to determine how one particular measure implemented in
California would have a more positive or negative consequence than any other measure.” 10
In 1999, Sher re-introduced a similar bill, SB 1253, but it was not passed. At one of the
hearings, business representatives suggested that he consider non-regulatory methods by which
businesses could be encouraged to reduce GHG emissions voluntarily. Sher incorporated this
recommendation in a subsequent bill, SB 1771, which was passed and signed into law in 2000.
SB 1771 establishes an independent organization, the California Climate Action Registry, as a
public benefit nonprofit corporation to record and register voluntary GHG emission reductions
that have been achieved since 1990. The Climate Registry is required to adopt standards for
verifying emission reductions, adopt a list of approved auditors to verify emission reductions,
establish emission reduction goals, maintain a record of all emission baselines and reductions,
and recognize, publicize and promote entities that participate in the Registry. 11
Besides the Registry, SB 1771 contains the provisions pertaining to the CEC that were in
Sher’s previous unsuccessful bills. The CEC is directed to update, by January 2002, the
inventory of GHG emissions from all sources in California, and to update this every five years
thereafter. It is also directed to acquire and develop information on global climate change and to
provide information to state, regional and local agencies on the costs, technical feasibility and
[Footnote 10: The same argument was commonly made by opponents of AB 32 during the summer of 2006.]
[Footnote 11: In addition to SB 1771, there were two subsequent laws relating to the Climate Registry, both authored by Sher. In October 2001, SB 527 was a cleanup bill clarifying some details of SB 1771. In September 2002, SB 812 instructed the Registry to include forest management practices as a mechanism for emission reduction and directed it to adopt procedures and protocols for the reporting and certification of carbon stocks and carbon emission reductions associated with forest sequestration and reforestation.]
demonstrated effectiveness of methods for reducing GHG emissions from in-state sources.
Finally, it is directed to convene an interagency task force of state agencies to ensure policy
coordination, and to establish a Climate Change Advisory Committee with representatives of
business, agriculture, local government and environmental groups, to be chaired by a CEC
Commissioner.
Another climate-related law authored by Sher, SB 1170, passed in 2001, required CEC,
CARB and the California Department of General Services to develop and adopt fuel-efficiency
specifications governing the state’s purchase of motor vehicles and replacement tires, including
ultra-low and zero emission vehicles, with the goal of reducing the state fleet’s fuel consumption
by 10% by 2005. Also enacted that year was AB 2076; reflecting public concerns about the
volatility of the price of gasoline, supply shortages and the frequency of refinery outages, this
required the CEC and CARB to prepare a report on how California might in future reduce its
petroleum dependence.
By 2002, the situation was as follows. During 2001, the Registry had developed its
reporting and certification protocols, gathered charter members, and built its on-line reporting
software. 12 It opened for business with 23 charter members in October 2002 (California Climate
Registry, 2003). The CEC, as the lead state agency for climate change, had issued an updated
inventory of GHG emissions (CEC 2002), the PIER-EPRI integrated climate impact assessment
study was being wrapped up, and PIER was developing roadmap documents for future follow-up
impact assessment analysis (CEC, 2003). In addition, PIER was actively funding research on
carbon sequestration and other climate change topics. But, while there was substantial activity on
climate change in California, the focus was predominantly on the generation of information,
advice and guidance – not on regulation.
This changed significantly with the enactment of AB 1493, requiring CARB to adopt
regulations to reduce GHG emissions by new motor vehicles sold in California. The bill was
introduced originally as AB 1058 at the beginning of the legislative session in February 2001 by
a freshman legislator, Fran Pavley, from Southern California. The initial text of the bill was
extremely terse: “No later than January 1, 2003, [CARB] shall develop and adopt regulations that
achieve the maximum feasible reduction of carbon dioxide emitted by passenger vehicles and
light-duty trucks in the state.” It immediately drew opposition from General Motors, the
Alliance of Automobile Manufacturers and the California Chamber of Commerce, who argued
that it was equivalent to the regulation of fuel economy, which is preempted by federal law;
that, since CO2 does not create localized pollution problems, it cannot be eligible for state
regulation; and that it would be better to encourage consumers to embrace more fuel-efficient
technologies through incentives rather than through command-and-control (Posner, 2001). As
industry opposition intensified, the bill was modified and fleshed out, and the deadline for
implementation was pushed back. By May 2002, for example, CARB was to issue regulations by
January 2005 that would apply only to model year 2009 or later vehicles; CARB was to consider
technological feasibility and economic impact; the regulations were to provide flexibility in the
[Footnote 12: The Registry’s protocols are widely considered the gold standard by other registries in the US and abroad.]
means by which compliance was to be achieved; and the regulations were prohibited from
including mandatory trip reductions, land use restrictions, or bans on any specific category of
vehicle. The latter is explained by the fact that industry opponents were characterizing this as an
“anti-SUV, anti-minivan” bill. But, in the face of a fierce media blitz by automobile industry
opponents, the bill was stalled. 13
The bill was revived through a parliamentary maneuver on Saturday June 29, when the
Senate was meeting in a special session on the state budget. An existing, unrelated bill, AB 1493,
was gutted by the Democratic leadership and amended to bear the content of AB 1058. This was
passed by the Assembly with a bare majority on a mostly party-line vote. It was passed by the
Senate that same day, and Governor Davis signed it into law a few weeks later. The final text
included language to prevent CARB from adopting regulations that impose additional fees and
taxes on motor vehicles, on fuels, or on vehicle miles traveled; that impose reductions in vehicle
weights or speed limit reductions or limitations; or that impose vehicle-miles-traveled
restrictions or limitations. The specific standards that would achieve “the maximum feasible and
cost-effective reduction” of GHG emissions from motor vehicles, and how much reduction this
was, were to be worked out by CARB subject to the constraints embodied in law. The draft
regulations were to be promulgated in June 2004, and the final regulations were to be adopted by
CARB by the end of 2004. The regulations would not take effect before January 1, 2006, in
order to give the legislature sufficient time to review them and amend them if it so chose.
A few months later, in September 2002, the legislature passed SB 1078, introduced by
Senator Sher, which requires California to generate 20% of its electricity from renewable energy
no later than 2017, the most stringent renewable portfolio standard in the nation. California was
already generating about 10% of its electricity by renewables; the new law requires retail sellers
of electricity to increase their use of renewable energy by 1% per year. It also requires the CPUC
to adopt rules for establishing a process for determining market prices of electricity from
renewable generators, and it requires CEC to certify eligible renewable energy resources.
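The arithmetic of the SB 1078 ramp can be sketched as follows (a minimal illustration only; actual compliance baselines and schedules varied by retail seller):

```python
# Sketch of the SB 1078 ramp: from a share of roughly 10% renewable
# electricity, adding one percentage point per year reaches the 20%
# goal in about ten years, ahead of the 2017 deadline.

start_share = 10.0   # percent, approximate statewide share when the law passed
target = 20.0        # percent, the SB 1078 goal
rate = 1.0           # percentage points of added renewable share per year

years_needed = (target - start_share) / rate
print(f"years to reach target: {years_needed:.0f}")  # prints 10
```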
4. 2004-5: Annus Mirabilis for Climate Change Policy
The immediate precursor of California’s 2006 GHG law is a series of events that were set
in motion in 2003, and that crystallized between June 2004 and May 2005. The single most
important event was the election of Governor Schwarzenegger in a special, recall election against
Governor Davis, held in October 2003, but there were some other influential developments
earlier that year.
In April 2003 the CEC and CPUC jointly issued a report, Energy Action Plan for the
State of California. In the aftermath of the 2000-2001 electricity crisis, the state’s principal
[Footnote 13: Among the opposition’s arguments was: “[C]arbon dioxide is not a pollutant, and California’s contribution to global CO2 is minimal. There is as yet no technology to reduce CO2 emissions. All that can be done is to restrict driving or mandate lighter vehicles.” (Billingsley, 2002)]
energy agencies had come together to prepare a plan with specific goals and actions that would
eliminate future energy outages and excessive price spikes in electricity or natural gas. The plan
identified increased energy efficiency and price-based demand response as the state’s preferred
future energy resource. It called for reduced per capita electricity use both to save energy and to
minimize emission of pollutants including greenhouse gases, and it recommended accelerating
the 20% renewable resource goal from 2017 to 2010. The same theme was echoed in two other
reports that year. In August, CEC and CARB produced a joint report Reducing California’s
Petroleum Dependence, as required by AB 2076, which stressed the need to reduce the growth in
demand for petroleum by raising new vehicle fuel economy standards and, also, increasing the
use of alternative fuels and advanced vehicle technologies. In November, the CEC issued its
2003 Integrated Energy Policy Report which stressed the seriousness of climate change as a risk
to California and made several recommendations for climate policy, including requiring the
reporting of GHG emissions as a condition of state licensing of new electric generating facilities;
accounting for the cost of GHG reductions in utility resource procurement decisions; using
sustainable energy and environmental designs in all state buildings; and requiring all state
agencies to incorporate climate change mitigation and adaptation strategies in planning and
policy documents. 14
These CEC reports had been produced by staff from its Energy, Transportation, and
Policy Divisions. In addition, its PIER Division was moving into high gear on climate change.
Following the publication of the impact assessment study with EPRI (Wilson et al., 2003), and
the issuance of a series of roadmap documents (CEC 2003), PIER created the California Climate
Change Center in June 2003 as a “virtual” institution with sites at both the Scripps Institution of
Oceanography at UC San Diego and at UC Berkeley. The Scripps Center, directed by Dr. Dan
Cayan, a noted oceanographer and also director of Scripps’ Climate Research Division, focuses
on meteorology, physical climate modeling, and climate impacts on streamflow and fire. 15 The
Berkeley Center, directed by Hanemann, focuses on economic and policy analysis, including
climate impacts on the California water system. Working in close collaboration, the two centers
initiated a new five-year comprehensive assessment of climate change impacts on California,
including both physical and economic impacts, with a broad group of research collaborators. 16
During his first term as Governor, from 1999 through 2002, Gray Davis had pursued an
extremely cautious and middle-of-the-road approach to environmental policy, emphasizing the
enforcement of existing programs instead of introducing new programs that would expand the
14
There was also an accompanying report on Climate Change and California. The 2003 Integrated Energy Policy
Report had been mandated by SB 1389, passed in August 2002, which consolidated previous biennial reporting
mandates going back to the 1974 Warren-Alquist Act, including requirements for an overall Report and individual
reports on Electricity, Fuels, Energy Efficiency, and Energy Development. There was now to be a single biennial
Integrated Energy Policy Report; its coverage was expanded to cover municipal as well as investor-owned utilities
in California.
15 In 1999, NOAA had created the California Applications Program (CAP) at Scripps to study climate variability
and climate change impacts on water resources, wildfire and human health, which Dan Cayan has directed. For
further background on the scientific research effort on climate change in California and the western region, see
Franco et al. (forthcoming).
16 The Berkeley core research included the development of a new computable general equilibrium (CGE) model of
the California economy (Roland-Holst, 2004).
scope or stringency of regulation (Lucks, 2001). He had not instigated any major environmental
legislation, but he did sign AB 1493 and SB 1078 into law, which benefited his environmental
image when running for re-election in the fall of 2002. The crucial event of his first term was the
California electricity crisis of 2000-2001, when a combination of inept deregulation and market
manipulation by energy suppliers caused wholesale electricity prices to rise ten-fold and drove the
state's largest electric utility, Pacific Gas & Electric (PG&E), into bankruptcy, since retail prices
were regulated and frozen. For better or worse, Davis’ leadership during the crisis was seen as
weak and relatively ineffectual. Davis won re-election in November 2002 against a lackluster
Republican opponent in an election marked by a record low turnout.
Davis ran into trouble just a month after his re-election. In the aftermath of the dot-com
boom, California’s state budget had been sinking into a deficit. In December 2002, Davis
announced that the deficit might reach $35 billion, almost $14 billion higher than the forecast a
month earlier, larger than all 49 other states’ deficits combined. 17 By February 2003, a petition
drive was being planned to recall him; in July, the petition was certified and the recall election
was scheduled for October 7. 18 Davis lost his fight against recall through a combination of
popular dissatisfaction triggered by the budget crisis and the electricity crisis, his political
isolation, what was seen as a colorless but calculating political style, and the personable manner
and fame of his leading opponent, Arnold Schwarzenegger.
Schwarzenegger had enjoyed a successful career as a bodybuilder and movie star, but had
no previous political experience when he announced his candidacy for governor of California on
the Tonight Show with Jay Leno on August 6, 2003. As a moderate Republican, he would have
had no chance of earning his party’s nomination in a regular gubernatorial election, but this was
something different and he vaulted over his Republican rivals. 19 On the advice of his brother-in-law Robert F. Kennedy Jr., Schwarzenegger had selected as environmental adviser for his
campaign a man of many parts, with great energy and persuasiveness, who was then running
Environment Now, a Santa Monica-based foundation: Terry Tamminen. During the campaign,
Schwarzenegger stressed his commitment to environmental protection and prom…