PROTOCOL FOR LANDSCAPE QUALITY STUDIES

DEPENDENT VARIABLE

COMMON RATING SCALE

Enjoying attractive landscapes involves direct immersion and participation in the landscape. The enjoyment gained is immediate and results from interaction with the landscape. It would be unusual to rate that enjoyment on, say, a scale of 1 to 10 and exclaim, “Wow, that is a 10 landscape!” Rather, the pleasure is expressed through adjectives such as superb, attractive, beautiful, wonderful, stunning and so on. Preferences for landscape scenes are therefore measured using some form of rating scale, which serves as a surrogate for the degree of delight a person might gain from viewing a landscape.

Alternatives to rating scales include Q sort and paired comparisons. The Q sort method involves the participant sorting photographs of the scenes into piles from low to high scenic quality. It is a time-consuming process but enables the participant to shift items back and forth (Cronbach, 1970) and provides a reliable and valid measure (Pitt & Zube, 1979). Its main drawback is that it requires a physical set of photographs and a place where participants can carry out the procedure, which tends to limit the number of participants.

The paired comparison method requires the participant to discriminate between pairs of scenes and is based on the Law of Comparative Judgement (Guilford, 1954). A limitation is that the number of paired comparisons grows rapidly with the number of photographs: with n photographs, n(n-1)/2 pairs are required, so ten photographs require 45 comparisons while 20 require 190 (Figure 1). It is generally impractical to go beyond about 18 photographs, which require 153 comparisons, otherwise fatigue affects the participants (Hull, Buhyoff & Daniel, 1984). The method also shares the drawback of Q sort, requiring a physical set of photographs and a location where participants carry out the comparisons, thereby limiting the number of participants. Williamson and Chalmers (1982) used all three methods – Q sort, paired comparisons and a rating scale – in their survey of north-east Victoria.
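
The growth in the number of comparisons is straightforward to verify; a minimal sketch in Python (the function name is illustrative):

    # Comparisons required under the Law of Comparative Judgement:
    # each photograph is compared with every other, giving n(n-1)/2 pairs.
    def comparisons_required(n: int) -> int:
        return n * (n - 1) // 2

    for n in (10, 18, 20):
        print(n, comparisons_required(n))   # 10 -> 45, 18 -> 153, 20 -> 190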

Dare (1978) devised a more complex method: the impact (0 – 2) of each element in a scene was multiplied by its contribution to scenic quality (-2 to +2) and the scores summed for each landform. This approach is unnecessarily complex and can bewilder participants.
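
As a hypothetical illustration of Dare’s scoring (the elements and values below are invented for the example, not taken from the study):

    # Each element: (impact 0-2, contribution to scenic quality -2..+2).
    # The landform's score is the sum of impact x contribution.
    elements = {
        "watercourse": (2, 2),    # strong impact, positive contribution
        "vegetation":  (1, 1),
        "powerlines":  (2, -2),   # strong impact, negative contribution
    }
    score = sum(impact * contribution for impact, contribution in elements.values())
    print(score)   # (2*2) + (1*1) + (2*-2) = 1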

A rating scale method (e.g. 1 – 10) does not have the disadvantages of the Q sort and paired comparison methods and provides a readily understood, practical and effective measure which can be used quickly by many participants.

Figure 1 Paired comparisons required under the LCJ method

The rating scale provides the basis for an absolute measure of scenic quality and is an interval scale (Stevens, 1946), whereas ranking (e.g. 1, 2, 3... in order of preference) provides only a relative measure on an ordinal scale. Ranking does not enable the transfer and comparison of results from one area to another, so a rating scale should be used in preference to ranking.

On the basis of Kant’s dictum that beauty has no ideal (Lothian, 1999), there should arguably be no upper limit to the scale, as an upper limit suggests beauty is finite. For analytical purposes, however, the scale must be capped at its upper end. A baseline such as 1 is also needed to anchor the scale.

This is preferred over a baseline of zero as it is difficult to conceptualise the appearance of a landscape of zero value, i.e. a complete absence of aesthetic appeal. In an interval scale, a zero may represent the minimum amount of scenic beauty available to observers in the area being evaluated (Hull, 1987); however, it does not possess the quality of an absolute zero. While detractions in a landscape may mar it, can a landscape have a negative value? If so, then a zero quality is logically possible. However, contrary to Fines (1968), who used a scale of 0 – 32 to classify the world’s landscapes, a zero landscape quality is considered intellectually meaningless. Even a flat, featureless landscape, which some might regard as having the prerequisites of a zero score, has its appeal, as papers on the Canadian prairies testify (e.g. Rees, 1977; Evernden, 1983). Even the Hay Plain in western NSW, the archetypal boring Australian landscape, is not totally lacking in appeal.

If one study uses a rating scale of 1 – 5, another 1 – 10, and another 1 – 15, it is difficult to convert the results to a common standard. Further, a scale based on zero (e.g. 0 – 5) is not equivalent to one that commences at 1. The scale should also progress in the same direction, e.g. 1 low – 10 high, not in reverse as in the 1 high – 7 low scale of Williamson and Chalmers (1982) or the 1 like – 5 dislike scale of Williams and Carey (2002). All other scales used in the Australian studies followed the low – high continuum, which has wide acceptance.
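
To see why mixed baselines hinder comparison, consider a simple linear rescaling of a rating from one scale onto another; a sketch (the linear mapping is an assumption for illustration):

    # Map a rating x from scale [old_min, old_max] onto [new_min, new_max].
    def rescale(x, old_min, old_max, new_min=1.0, new_max=10.0):
        return (x - old_min) / (old_max - old_min) * (new_max - new_min) + new_min

    print(rescale(3, 1, 5))   # 5.5: the midpoint of 1-5 maps to the midpoint of 1-10
    print(rescale(3, 0, 5))   # 6.4: the same raw score on a 0-5 scale maps elsewhere

The same raw score therefore lands on different points of the 1 – 10 scale depending on whether the source scale began at 0 or 1, which is why studies using different baselines cannot simply be pooled.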

The 1 – 10 scale proved the most popular in the Australian studies (Table 1) and, importantly, it provides sufficient discrimination of landscape quality; it is therefore proposed as the standard rating scale. The finer discrimination offered by an extensive scale such as 1 – 100 can be daunting for participants and prove unmanageable.

Table 1 Rating scales in Australian preference studies

Rating scale   Frequency
0 – 10             2
1 – 10            10
1 – 5              2
1 – 7              4
1 – 9              1
1 – 15             1
1 – 100            2
1 – 130            1
Total             23

In summary, the following are proposed:

  • A rating scale rather than ranking scale
  • A baseline and an upper limit to the scale; 1 – 10 is preferred
  • The scale to run from low (1) to high (10) rather than in reverse

VIEWING TIMES

Prior to the availability of Internet-based surveys, in which viewers can spend as little or as much time on each scene as they wish, surveys used a range of fixed viewing times, varying from 5 to 15 seconds (Table 2). The most popular was 10 seconds; however, such a long viewing time encourages analysis of the scene and subsequent revision of one's rating, and should be avoided.

Table 2 Viewing times of international studies

Reference                      Number of slides × viewing time (seconds)
Abello & Bernaldez, 1986       60 × 10
Anderson, 1981                 60 × 8, followed by 30 × 5
Anderson & Schroeder, 1983     60 × 8, 80 × 5
Balling & Falk, 1982           20 × 8
Bergen et al., 1995            21 × 10
Cook & Cable, 1995             20 × 8, 40 × 6
Daniel et al., 1973            150 × 5
Daniel et al., 1978            100 × 8
Herzog, 1984                   100 × 15
Kaplan & Herbert, 1987         60 × 10, 10 shown at beginning to provide range
Lamb & Purcell, 1990           71 × 10
Mosley, 1989                   80 × 13, 60 s break between blocks of 20
Purcell & Lamb, 1994           180 × 10
Strumse, 1994a                 60 × 10, 5 filler slides at beginning and end

SURVEY

The survey involves assembling the scenes, including benchmark scenes, determining the number of scenes, defining the sample of participants, providing instructions to participants and implementing the survey.

BENCHMARK PHOTOGRAPHS

I have found that using scenes from elsewhere in South Australia provides a wider frame of reference for the ratings and helps ensure that the ratings of a particular region reflect a State-wide perspective. The benchmark scenes should cover a wider range of ratings than is likely from within the study area. The ratings of scenes covering South Australia ranged from around 3 to 8, a range of 5 units, whereas the range within a particular region was generally far more restricted, say two or three units. In the Barossa survey (Lothian, 2005a), the ratings ranged from 5 to 6.5, a span of only 1.5 units. Without benchmark scenes, participants tended to spread their ratings over the full 1 – 10 scale, rating relative to the scenic quality present within the region, which made it impossible to compare one region with another. The benchmark scenes helped ensure that the ratings reflected the full State-wide range and enabled comparison of results across regions. Ideally, national benchmark scenes should be used to provide a much broader national basis for ratings.
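
One way to check that two regional surveys sit on a common footing is to compare their mean ratings of the shared benchmark scenes; if the benchmark means diverge, the regional ratings are not directly comparable. A minimal sketch (the scene identifiers and values are hypothetical, and this check is an illustration rather than a procedure prescribed in the studies):

    # Mean ratings of the same benchmark scenes in two regional surveys (hypothetical values).
    benchmarks_region_a = {"bench01": 7.2, "bench02": 4.1, "bench03": 5.6}
    benchmarks_region_b = {"bench01": 7.0, "bench02": 4.3, "bench03": 5.5}

    diffs = [benchmarks_region_a[k] - benchmarks_region_b[k] for k in benchmarks_region_a]
    offset = sum(diffs) / len(diffs)
    print(f"mean benchmark offset: {offset:+.2f}")   # near zero suggests comparable ratings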

Apart from my studies (Lothian, 2004, 2005a and b, 2007), that of Prineas and Allen (1992) was the only other Australian study to include photographs from outside the study region (including from elsewhere in the world) – 10 of its 90 photographs (11%). A higher ratio is considered desirable to ensure that the ratings of the study region reflect the wider range of ratings provided by the benchmark scenes. It is therefore suggested that 20% of photographs be from outside the study area and that they be distributed throughout the survey, thereby benchmarking the ratings.

NUMBER OF PHOTOGRAPHS IN SURVEY

Rating a seemingly interminable number of scenes can cause fatigue, which affects the ratings. The mean number of photographs in the Australian studies was 77, although many of the later surveys tended to have more. The means by which they are viewed is also a factor, as the Q sort and paired comparison methods involve much more effort than a rating scale. For rating purposes, a maximum of 150 scenes, including 30 benchmark scenes from outside the study region, is suggested. Provided the survey is Internet-based and self-administered, this number does not pose the same risk of fatigue as physically administered surveys. The study region should be represented by fewer scenes if this does not compromise validity. If more scenes are necessary, several surveys may be required; these should share some common scenes, and the benchmark scenes should remain the same to stabilize the ratings.
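
Where more scenes are needed than one survey can carry, the split might be organised as below; a sketch under stated assumptions (the function, the 150-scene ceiling and the number of common scenes are illustrative, with the benchmark scenes and a handful of common study scenes repeated in every survey):

    import random

    def split_surveys(study_scenes, benchmark_scenes, per_survey=150, n_common=10):
        """Split a large pool of study scenes into surveys sharing benchmark and common scenes."""
        pool = list(study_scenes)
        random.shuffle(pool)
        common = pool[:n_common]                  # study scenes repeated in every survey
        rest = pool[n_common:]
        chunk = per_survey - n_common - len(benchmark_scenes)
        return [benchmark_scenes + common + rest[i:i + chunk]
                for i in range(0, len(rest), chunk)]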

If an Internet-based survey is used, the scenes can be viewed at the participant’s own pace, fast or slow, assuming adequate connection speed. Even a survey of 150 scenes can be processed by some participants very quickly, say 10 - 15 minutes. The brain is able to rapidly discriminate the appropriate rating for a scene (Herzog, 1984, 1985) and rapid evaluation minimizes the likelihood of analysis and revision.

SAMPLE

The size of the sample should be sufficient to reduce the sampling error to ≤ 5% (i.e. 0.05), which requires a minimum of about 380 participants. The confidence interval falls quickly towards 5% as the sample size increases towards 400 (Figure 2); increasing the sample size beyond this reduces the confidence interval further, but at an ever slower rate.

Ideally the survey should aim for a minimum sample of 400 participants. This provides a confidence interval of ±4.9%, just below the 5% (0.05) threshold (a sample of 380 yields a confidence interval of ±5%). At the 95% confidence level this means one can be 95% certain that the result lies within ±4.9% of the reported value; a result of, say, 67% with a confidence interval of 4.9 means the true figure for the entire population lies between 62.1% and 71.9%. Social research generally uses the 95% confidence level and the 0.05 confidence interval.
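
These figures follow from the standard margin-of-error formula for a proportion at the 95% confidence level, using the worst case p = 0.5; a quick check in Python:

    import math

    # 95% margin of error (in percentage points) for a sample of n,
    # assuming the worst-case proportion p = 0.5.
    def margin_of_error(n, p=0.5, z=1.96):
        return z * math.sqrt(p * (1 - p) / n) * 100

    print(round(margin_of_error(380), 1))   # 5.0
    print(round(margin_of_error(400), 1))   # 4.9

The same formula reproduces the confidence intervals reported for the author’s surveys in Table 5 below (e.g. 2,422 useable responses give 1.99).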

With use of the Internet this may be readily achieved. Participants should be 18 years or over in age as the aesthetic preferences of children can differ from adults (Zube et al, 1983).

Figure 2 Confidence interval and sample size (www.surveysystem.com)

Invitations to participate in the survey may be forwarded by email to individuals, community groups, government agencies and businesses in the region. The invitation should include a link to the website hosting the survey. The invitation may ask the participant to forward it on to others who may be interested in participating; however, if tight control over participation is sought, this request may be omitted (Wherrett, 1999).

Consideration should be given to whether participants should be drawn solely from the study region, from a wider area (e.g. the State), from elsewhere in the nation (e.g. Australia) or from other nations as well. The instructions regarding distribution should be clear on this matter.

SURVEY INSTRUCTIONS

Commence the survey with instructions to the participants. Matters to be covered include:

  • State the purpose of the survey;
  • Provide rating instructions;
  • Indicate the number of scenes and the approximate time to complete;
  • Emphasise rating of all scenes, i.e. completing the survey;
  • Explain that the survey includes scenes from elsewhere for comparison purposes to benchmark the ratings;
  • Emphasise the anonymity of participants – the personal data requested does not identify participants;
  • Urge participants to use the entire rating scale and to judge each scene on its merits. They should trust their initial instinct and not analyse their response;
  • Suggest that they avoid distractions; the survey can be left for up to 30 minutes;
  • Provide a contact point for questions – normally the client agency;
  • Ask if they want to be informed of the results – if so, ask them to leave a contact email address.

The survey may include questions to gauge the participant’s familiarity with the region, as familiarity generally enhances ratings (Lothian 2005a; Lyons, 1983; Nieman, 1980). A simple three-grade scale may be used (not familiar, familiar, very familiar) or a five-grade Likert scale from not familiar through to very familiar.

To enable a comparison of the sample with the wider community, demographic information may be sought covering age, gender, birthplace and education as these are readily comparable with ABS Census data. The same categories for each should be employed as used by the Australian Bureau of Statistics:

  • Gender
  • Age: 18 – 24, 25 – 44, 45 – 64, 65 and over;
  • Education: no qualification, diploma or certificate, degree, higher degree;
  • Birthplace: born in Australia, not born in Australia.

INTERNET SURVEY

Several researchers have examined the efficacy of the Internet for landscape preference surveys (Bishop, 1997; Wherrett, 1999, 2000). The advantages of Internet surveys over traditional questionnaires include being paperless, dispensing with postal surveys and interviews, and automating the capture of responses (Wherrett, 2000), to which I add:

  • Potentially enormous sample size; caution is needed however as the sample can potentially include the entire world
  • Rapid response and greater user control over the speed of the survey
  • Improved randomisation of scenes (see below)
  • Improved accuracy of response (as the results do not have to be transferred by hand from sheets as in paper-based surveys).

In Australia, household access to the Internet has risen rapidly, from 16% in 1998 to 83% in 2012/13, and broadband has risen particularly steeply, now accounting for 77% of Internet access (Figure 3).

Adding access via workplaces, schools and libraries, overall access is much higher, possibly over 90%.

Although a minority (mainly the very young and older people) may lack access, the near-universal access among adults means the survey coverage should not be unduly restricted.

The Australian experience is mirrored in other Western countries.

Figure 3 Household Internet Access, Australia (% access)

The efficacy of computer monitors for viewing scenes has been evaluated (Daniel, 1997; Wherrett, 1999, 2000). Bishop (1997; see also Shang & Bishop, 2000) tested the fidelity of the colours displayed, and high correlations were obtained when the screens were compared with slides. The development of computer technology and the Internet over the past decade has further improved the depiction of scenes.

Surveying via the Internet in early 1997 with pre-Pentium computers, Wherrett (1999, 2000) found the average time taken to complete a survey of 20 scenes was 18 minutes, with the majority taking between 6 and 25 minutes. In three studies I found similar average times, with little difference between dial-up and broadband connections (Table 3). However, these surveys contained around 150 scenes compared with Wherrett's 20, indicating the considerable improvement in computer speed and Internet capacity over the intervening decade.

Table 3 Average time taken to complete the surveys (minutes)

Study                            Nos. of scenes   Dial-up   Broadband
Coast (Lothian 2005a)            166              19.7      19.8
Barossa (Lothian 2005b)          150              20.6      18.2
Flinders Ranges (Lothian 2009)   147              13.5      12.2

The difference between dial-up and broadband completion times is only slight (Table 3); indeed, in the first study broadband averaged longer than dial-up. Over the five years of the studies there has been a perceptible increase in Internet speeds.

Table 4 Time taken per scene (seconds)

Study                            Dial-up   Broadband
Coast (Lothian 2005a)            7.1       7.2
Barossa (Lothian 2005b)          8.2       7.2
Flinders Ranges (Lothian 2009)   5.5       5.0

Figure 4 illustrates the time taken per scene using dial-up and broadband connections for the Flinders Ranges study. While the majority of participants viewed each scene for around five seconds, a few took as long as 30 seconds per scene. Long times were more prevalent among dial-up participants, suggesting slow download speeds as the cause.

Figure 4 Histograms of time taken per scene, dial-up and broadband (Flinders Ranges Project, 2009)

At the commencement of a survey, several example scenes should be shown together with the rating scale to familiarise participants with the nature of the scenes; these also unconsciously cue the participant to the relative rating of each scene. The survey proper then commences. Prineas and Allen (1992) used two introductory scenes, but up to six are suggested to adequately sample the range of scenes.

Figures 5 to 9 illustrate a survey, commencing with the introductory page, the demographics, example scenes and the survey proper, ending with a thank you page.

Figure 5 Introductory page of Flinders Ranges landscape survey

Figure 6 Demographics page of Flinders Ranges landscape survey

Figure 7 Example page of Flinders Ranges landscape survey

Figure 8 Rating page of Flinders Ranges landscape survey

Figure 9 Closing page of Flinders Ranges landscape survey

Use of the Internet to host the survey provides a number of advantages over alternative means of engaging participants. The following protocols should be used.

  • Rather than displaying scenes for a set time (e.g. 7 seconds), scenes should be changed by the participant, allowing them to proceed at their own pace. This prevents frustration if the survey seems slow, or anxiety that a scene will change before they complete their rating. Tally the ratings automatically in a database as they are made.
  • Rather than presenting all the scenes in a single fixed random order, randomise the remaining scenes afresh after each rating (see the sketch after this list). Randomising the scenes prevents their order from affecting the results – whether through attention waning towards the end, or the rating of a scene being coloured by the previous scene.
  • Enable participants to leave the survey and return to it (within 30 minutes) to accommodate interruptions, phone calls and the like. They should also be able to exit the survey at any point and provide comments without completing it.
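
A minimal sketch of the re-randomisation and immediate-tally protocol described above (show_and_rate and save_rating stand in for the survey’s user interface and database, and are hypothetical):

    import random

    def run_survey(scene_ids, show_and_rate, save_rating):
        """Present scenes one at a time, re-shuffling the remainder after each rating."""
        ratings = {}
        remaining = list(scene_ids)
        while remaining:
            random.shuffle(remaining)       # randomise the remaining scenes afresh
            scene = remaining.pop()
            rating = show_and_rate(scene)   # participant rates the scene (1-10)
            save_rating(scene, rating)      # tally immediately to the database
            ratings[scene] = rating
        return ratings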

On completion of the survey, the participant is thanked. An opportunity can also be provided for participants to provide comments on the survey and to request feedback on the results of the survey.

Table 5 summarises the eleven surveys carried out by the author, all but the first conducted via the Internet. They indicate the large samples possible through use of the Internet. Useable responses generally means completed surveys. The confidence intervals of most of these surveys are well below 5%, which is an excellent result.

Table 5 Summary of survey responses

Survey                                     Participants   Useable responses   % useable   Confidence interval
South Australian Landscapes 2000           319            319                 100%        5.49
Wind farms 2003                            454            311                 68%         5.56
Tree amenity 2004                          619            440                 71%         3.94
Coastal viewscapes 2005                    3324           2200                66%         2.09
Coastal development 2005                   2398           1659                69%         2.41
Barossa landscapes 2005                    2260           1210                53%         2.82
River Murray landscapes 2007               2138           1673                78%         2.40
River Murray development 2007              1427           1259                88%         2.76
Flinders Ranges 2009                       3549           2422                68%         1.99
Generic Landscapes Project 2012            204            204                 100%        6.86
Lake District National Park Project 2013   540            502                 93%         4.37
