Urban planning community | #theplannerlife


Thread: Image preference survey: how many images is ideal?

  1. #1
    Cyburbia Administrator Dan
    Registered
    Mar 1996
    Location
    Upstate New York
    Posts
    14,737
    Blog entries
    3

    Image preference survey: how many images is ideal?

    I'm going to be conducting an image preference survey as part of the planning process for a corridor plan we're creating. I have a couple questions about the process. How many images are usually presented? How long are the images displayed?
    Growth for growth's sake is the ideology of the cancer cell. -- Edward Abbey

  2. #2
    Cyburbian boilerplater
    Registered
    Dec 2003
    Location
    Heaven or Las Vegas
    Posts
    916
    My brief experience with these surveys was with people who had worked for the guy who claims to have invented visual preference surveys, Tony Nelessen. They would show four comparable images at once for about 30-40 seconds, or longer if anyone requested. There may be a survey on Nelessen's website.
    Adrift in a sea of beige

  3. #3

    Registered
    May 1997
    Location
    Williston, VT
    Posts
    1,371
    My approach, and I was doing it for years before I ever heard of Nelessen, is one image at a time, using a well-designed rating sheet. You can go three images a minute at most, and you have to let people slow you down if need be. Because you are rating images individually rather than comparing them, you can use quite a few images with this approach. One of the most successful applications I did was with a group holding a series of meetings. We began each session with a VPS of one part of the community, and by the conclusion of the meetings we had rated all of the significant views in a large rural area.
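
    If you want a quick check on how long a session will run at that pace, here is a rough sketch in Python. The three-images-a-minute ceiling comes from my experience above; the 25% slow-down buffer is just an assumption for illustration.

    Code:

        # Rough pacing check for a one-image-at-a-time VPS session.
        # Ceiling: three images a minute, i.e. 20 seconds per image.
        # The 25% buffer for letting people slow you down is an assumption.
        def session_minutes(n_images, seconds_per_image=20, buffer_factor=1.25):
            return n_images * seconds_per_image * buffer_factor / 60

        print(session_minutes(60))  # 60 images -> 25.0 minutes with slack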

  4. #4
    Member
    Registered
    Aug 2004
    Location
    small town, MI
    Posts
    5
    Quote Originally posted by Dan
    I'm going to be conducting an image preference survey as part of the planning process for a corridor plan we're creating. I have a couple questions about the process.
    Then you shouldn't be doing one. They are, in a word, bunk. The reason has to do with poor sampling, poor method, and poor analysis.

    To get an idea of what a proper method looks like, here is a snip of "VISUAL QUALITY SCIENCE: PREDICTIVE EQUATIONS FOR MICHIGAN'S LANDSCAPES" by Jon Burley at Michigan State Univ.:
    In addition, I employed an environmental quality index similar to the index presented by Smyser (1982). Burley (1997) presents the list of independent variables selected for the study. I followed Shafer's general methodology to record information from the photographs by dividing the image into a 6.35 mm by 6.35 mm grid composed of 30 rows and 38 columns. Each variable was then measured and recorded.

    To generate a dependent variable, 50 images were randomly selected and presented to a respondent in sets of 10 images. For each set the respondent ranked the image for scenic beauty relative to the other nine images. No image could receive the same score. A 10 represented poor visual/environmental quality and a 1 represented better scenic/environmental quality. Once a respondent had completed a set of 50 slides, another 50 slides were selected without replacement and presented to a new respondent. Once the complete set of 250 slides had been assessed, the 250 slides were combined to randomly select another 50 slides. This process was completed twelve times.
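
    To make the mechanics of that protocol concrete, here is a minimal simulation sketch in Python. The random rankings stand in for real respondent judgments, and the batching details (five respondents per round, sets of ten) are inferred from the excerpt; treat it as an illustration, not Burley's actual procedure code.

    Code:

        import random

        # Sketch of the Burley-style ranking protocol quoted above.
        # Each round: 250 slides are dealt out in batches of 50 without
        # replacement; each respondent ranks their slides in sets of 10
        # (1 = better scenic quality, 10 = poor, no ties). Twelve rounds.
        def simulate_ranking_rounds(n_slides=250, set_size=10,
                                    per_respondent=50, n_rounds=12):
            scores = {slide: [] for slide in range(n_slides)}
            for _ in range(n_rounds):
                pool = list(range(n_slides))
                random.shuffle(pool)
                for i in range(0, n_slides, per_respondent):
                    batch = pool[i:i + per_respondent]  # one respondent's slides
                    for j in range(0, len(batch), set_size):
                        subset = batch[j:j + set_size]
                        ranks = list(range(1, len(subset) + 1))
                        random.shuffle(ranks)  # stand-in for a real judgment
                        for slide, rank in zip(subset, ranks):
                            scores[slide].append(rank)
            return scores

        scores = simulate_ranking_rounds()
        mean_rank = {s: sum(r) / len(r) for s, r in scores.items()}

    Each slide ends up with twelve independent rankings, which is what gives a dependent variable stable enough to regress against the measured photo variables.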
    FTR, I have done quite a bit of searching for methodological research into the validity of visual surveys ("Visual Preference Survey" and "VPS" are trademarked) and have found zip. I have emailed several professionals in the public opinion research field, and they all reported that they had never heard of such a method. I asked a past VP of the American Association for Public Opinion Research, and she had never heard of it. J. Campoli, co-author of "Visualizing Density," a paper published by the Lincoln Institute, informed me that her slides are meant to create discourse, not to survey. One of the lead researchers on the What Michigan Wants survey informed me that he dropped his efforts to create statistically valid results when he realized that there were too many variables to estimate errors.

    I strongly suggest you contact one of the organizations that is there to help:

    http://www.srl.uic.edu/lansro.htm

    Doing poor survey research costs money and does no one any good (except special interests who tailor such surveys to suit their desires).

  5. #5

    Registered
    May 1997
    Location
    Williston, VT
    Posts
    1,371
    It is true that poor survey research is useless. In fact, the book Karen and I wrote advises against using either informal or formal survey research in planning in most instances. Our reasoning is more about planning as an interactive process and the weaknesses of even the best survey research in addressing complex questions, but the difficulty of conducting valid research also figured in. But who ever said, or even implied, that a VPS is survey research in the conventional academic sense? I don't see that in Dan's query, which certainly did not merit such a high-handed reply.

    I did in fact pre-test and do some simple statistics in the case I described, where the survey was stretched over several meetings, but that was just so we could talk about consistency of the results and provide a quantitative summary. Even IF I had dotted all of the "i's" and crossed all of the "t's" and prepared it for peer review (which I could have done), its purpose would still have been to stimulate discussion within the community and, should that lead to action, to show a judge that we had a reasonable basis for our actions.

  6. #6
    Member
    Registered
    Aug 2004
    Location
    small town, MI
    Posts
    5
    Quote Originally posted by Lee Nellis
    But who ever said, or even implied, that a VPS is survey research in the conventional academic sense? I don't see that in Dan's query, which certainly did not merit such a high-handed reply.
    Since I was frank and honest, I'll assume you are replying to another poster.

    At least two things are important to understand. The first is that there is no separation between doing a survey for the purpose of peer-reviewed, scientific research and doing a survey to plumb the depths of public opinion in a community.

    A survey is generally conducted by sampling the population because measuring the population is not feasible. To make inferences from the sample to the population, the researcher must meet certain standards that satisfy the mathematical assumptions behind the sampling, whether he knows the math or not. Furthermore, myriad non-sampling biases can sneak into the process if a survey is not conducted and written properly. These are facts of life.

    If one does not use methods that allow errors to be estimated, then one cannot use the sample results to draw inferences about the population. In 1954 Cochran, Mosteller, & Tukey remarked that the need to conduct proper sampling had been learned by years "of bitter experience."
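
    To be concrete about what "errors can be estimated" means: with a genuine simple random sample, the margin of error on a proportion is a one-line formula. Here is a minimal sketch (Python, hypothetical numbers); with a self-selected meeting audience this number is meaningless, which is exactly the point.

    Code:

        import math

        # 95% margin of error for a proportion from a simple random sample.
        # Only valid when respondents were drawn randomly from the population.
        def margin_of_error(p_hat, n, z=1.96):
            return z * math.sqrt(p_hat * (1 - p_hat) / n)

        # e.g. 60% favorable among 400 randomly sampled residents:
        print(round(margin_of_error(0.60, 400), 3))  # 0.048, i.e. +/- 4.8 points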

    Respectfully, if one is not conducting survey research properly, then one should not be conducting it at all. The most important reason for this is noteworthy thing number two:

    The average person is not well enough versed in survey methodology to tell the chaff from the wheat. I have had too much personal experience with this. Where I currently work, the community was thrown into apoplexy partly because of the results presented in two improperly conducted surveys. A city where I previously worked is systematically destroying the ability of vehicles to travel safely and efficiently, and much of the motivating force comes from average folk who are unable to understand that the so-called research they rely on is not valid.

    There is nothing high-handed about this: this is about protecting people from decisions being made on invalid evidence. If a survey is not rigorously performed, then it cannot be considered to give an accurate picture of the community it was intended to study.

    This is a fact that those in the planning community need to have drilled into their understanding of the world. With all due respect, I would suggest that you should meditate on this as well. Specifically,
    ...the survey was stretched over several meetings.... Even IF I had dotted all of the "i's" and crossed all of the "t's," and prepared it for peer review (which I could have done)....
    I would seriously question the caliber of any publication that would publish the results of a survey drawn from non-random samples.

    I apologize for being egregiously blunt; however, communities all over are being sold a bill of goods by well-meaning people who don't fully appreciate the dangers of what they are doing, and I find it hard to imagine that the importance of this can be overstated.

  7. #7
    Cyburbian
    Registered
    Feb 2002
    Location
    Townville
    Posts
    1,047
    js

    I disagree a little with you on this. I think Lee's point is that for planning purposes, a VPS, if done with nuance and without a particular outcome in mind, can help generate excellent dialogue in a community. It can help gauge real design and development preferences on which subsequent discussions can be based. And that should be its purpose.

    I do not think the purpose is to produce statistically significant results.

    More on the topic of Dan's question: I have never facilitated one myself, but I have participated in them. I am not really a big fan, because my experience is that planners present them with clear biases, such as cul-de-sacs are bad (new cookie-cutter pictures with no street trees, because they have not grown in yet) while grid streets are good (older, established neighborhoods).

    The VPS methods I have seen are like showing a picture of, say, a 2005 BMW 540 next to a 1979 AMC Pacer, without the nuance necessary to elicit real preferences.

    Good Luck Dan.

  8. #8

    Registered
    May 1997
    Location
    Williston, VT
    Posts
    1,371
    You know which poster I was responding to, and your style is clearly one you enjoy - but setting that aside.

    You and I agree that there is a lot of junk survey research done as part of planning processes, and you are correct in saying that one of the reasons why is poor sampling methods. It is equally true that one can spend thousands of dollars to retain well-respected professionals to do it right - at least as far as the sampling methodology - and still find the results to be useless, because a) the questions were wrong, no matter how hard you worked on them or pre-tested them, b) there was a change in the community or situation that rendered peoples' responses irrelevant, or c) the relevant decision-makers aren't impressed by statistical exercises.

    But you are either making a fallacious underlying assumption or limiting the use of the word "survey" in your own way, or both. If this is all about semantics, it isn't worth much discussion. Practicing planners use words like "survey" so they can communicate with the public, and if you want to be picky about that, be my guest. I will be pleased to agree with you that the results of any way of communicating with folks should never be misrepresented, and that if I say or imply that the results of a VPS represent the entire community, then I have either done a heck of a big project or I am misrepresenting something.

    But if you are assuming that the goal of a VPS (or other similar techniques) is to obtain a representative sample of the entire community, then there is a problem. What if it isn't? What if, after all of your bluntness, you don't actually understand the purpose of these techniques?

  9. #9
    Cyburbian
    Registered
    Sep 2004
    Location
    WA
    Posts
    112
    I always found environmental perception and environmental behavior design very interesting fields; these are rooted in environmental psychology, landscape architecture, and somewhat in geography.

    The big names you should read if you are interested in this field, or if you want to support your work with existing literature, are Zube, Kaplan & Kaplan, etc.

    However, this may be a good start:

    Kent, R.L., (1993). Determining scenic quality along highways: a cognitive approach. Landscape and Urban Planning, 27: 29-45.

    Abstract
    This paper describes a method for assessing scenic quality along existing roadsides in Connecticut (USA) based on a cognitive approach reflecting the complexity of human/landscape interaction. Landscape types (land use and land cover) which could be used in making spatially related land use and planning decisions were selected and photographed. A sample of representative scenes was rated for seven psychological predictors of theoretical interest. Thirty-six photographs representing nine landscape categories were shown to 249 people from three sample groups (highway residents, transportation planners, and the general public). Scenes were rated for preference on a five point scale, and mean preference ratings were calculated for individual images and landscape categories. The underlying patterns in preference ratings were examined by factor analysis which indicated four major dimensions of preference. The relationships among the landscape categories, the psychological predictors, and the dimensions derived by factor analysis are discussed. Results of this investigation indicate that preferences for roadside landscapes are determined both by pattern of land cover and land use and by psychological information. Differences in responses based on sample group or respondents' background are not discussed.
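
    If you want to see roughly what the analysis in a study like Kent's involves, here is a minimal sketch in Python with random stand-in data. Kent used factor analysis proper; this approximates it with a principal-components decomposition of the correlation matrix, so take it as an illustration of the mechanics only.

    Code:

        import numpy as np

        # Hypothetical ratings: 249 respondents x 36 photographs, each cell
        # a preference rating on a 1-5 scale (random stand-in data).
        rng = np.random.default_rng(0)
        ratings = rng.integers(1, 6, size=(249, 36)).astype(float)

        # Mean preference rating per image, as in the abstract above.
        mean_pref = ratings.mean(axis=0)

        # PCA on the 36 x 36 correlation matrix as a stand-in for factor
        # analysis; keep the four largest components, echoing Kent's four
        # dimensions of preference.
        corr = np.corrcoef(ratings, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(corr)
        order = np.argsort(eigvals)[::-1][:4]
        loadings = eigvecs[:, order] * np.sqrt(eigvals[order])
        print(mean_pref.round(2))
        print(loadings.shape)  # (36, 4): each image's loading on each dimension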

  10. #10
    Member
    Registered
    Aug 2004
    Location
    small town, MI
    Posts
    5
    Quote Originally posted by gkmo62u
    js

    I disagree a little with you on this. ...VPS, if done with nuance and without a particular outcome in mind, can help generate excellent dialogue in a community.
    I agree wholeheartedly that valuable discussion can arise from the use of slides, just as was described to me and as I mentioned above.

    What you may not understand is that when Joe Planning Commissioner or Jane Board Member or Pat the Community Activist hears or sees the word "survey," they think they are seeing something from which they can gauge public opinion in their communities. I know this because I have seen it many times. The fact that you or I may know the difference between surveys from which errors can or cannot be estimated, and what that implies, does not mean that such knowledge is common.

    One does not need to traffic in equivocation to make the point; the opposite is in fact the case: if the average citizen doesn't have the tools to properly understand survey research, then he will treat all surveys as being equivalent. One would indeed expect a person to assume that a "survey" returns meaningful results, since a look in the dictionary gives definitions like "carefully scrutinize," "examine comprehensively," and "conduct a statistical survey on," all of which imply that the average person viewing survey results should actually expect the results to be meaningful. When I am home tonight I can check my OED if that would help.

    If the general public does not understand that planners, for whatever reason, have made "survey" a technical term defined along the lines of being a conversational tool and not something that studies public opinion, then the onus rests with planners to make that known. Around here that sure as hell ain't known. The researcher who I mentioned above was led to believe that his visual survey would lead to statistically valid results, and it wasn't until he was in the thick of it that he came to realize otherwise. The survey research professionals with whom I have communicated are universally unaware that "survey" has been so redefined by the planning community. And, as I mentioned, this fact appears to be completely missing at the grass-roots level.

    Lee seems to be resting his views on the idea that since a properly done survey could give bad results, it is just as good to do an improper survey. Respectfully, that line of argumentation does not follow. Antibiotics may not help my strep throat, but that's no reason to start going after my bodily humors.

    He also feels that I am playing semantic games; however, my visit to dictionary.com suggests that we should expect any randomly chosen person to consider something called a "survey" to be well done. Parenthetically, it is ironic that he accuses me of sophistry when he is relying on shifting definitions to defend his view.

    Campoli and her partner have put together a large collection of slides to use in generating discussion & ideas. Googling for her name and the title (given above) should return the two (IIRC) papers in question.

    But the fact remains that if one goes out and announces that she is a professional in planning and she is going to be conducting a survey, then no matter how much she wishes otherwise, she is giving the imprimatur of validity that will be very difficult to shake from the public's mind. For example, one of our former planning commissioners was a mayor of a city of 30,000 and he had absolutely no idea that the surveys done here were not ones from which valid inferences could be made.

    This is a serious problem. If one performs a survey that doesn't meet the best practices of the AAPOR, then one is behaving in an unethical manner at best, to the detriment of the community.

    What I find baffling, Lee, is that if you know that non-rigorous surveys cannot reliably give you valid results, then why would you want them for any purpose? How can you fully understand survey methodology and still conduct "surveys" which are not valid and use them as though they were meaningful? It's like a medical researcher finding that a drug has no real effect, but taking it anyway because pure chance gave it a result that appears slightly better. I cannot grasp the reasoning behind that.

  11. #11

    Registered
    May 1997
    Location
    Williston, VT
    Posts
    1,371
    You really are missing the point/s.

    But first, your patronizing attitude toward the "average Joe" is not consistent with my experience working in communities throughout the U.S. for the past 30 years, and it is pretty much a self-sealing attitude: if you're sure they won't understand, it is pretty certain they won't. I have taken citizen groups through a lot of complex information - groundwater modeling, for example - and while there are exceptions, peoples' ability to "get it" when it is explained in a way that is tailored to the audience is pretty amazing. A good planner is a good educator.

    I also do not think the general public is either so stupid or so naive as to assume that any survey is well done. My experience is, in fact, just the opposite. If you show up in most of the places I have worked with survey results of any type, you are going to encounter seriously probing questions, and if your results are counter to peoples' intuitive understanding of what's going on, you are going to have to make your case very well indeed.

    As for the rest of the arguments, it all boils down to the purpose of engaging in an activity that is designed to gather information with folks, whatever you call it. Perhaps that purpose is not to present a profile of the community's opinions that can be statistically validated. And if that is so, random sampling is not necessary, is it? Instead, we are talking about whatever rigor is appropriate to the purpose.

    The purpose of most VPS activities is not - indeed, due to the methodological limitations, could not be - to present an accurate picture of an entire population's visual preferences (although my reading of the Kaplans' work and other parts of the literature, some of which has already been cited, is that visual preferences are quite consistent).

    Let's talk about the context of a VPS. Say that the local decision makers, based on what they hear from constituents, think that it might be appropriate to protect certain view corridors. There are several approaches to the problem. One is for the decision makers to get out markers and a map and draw out the corridors with not only no public input, but also no systematic way of either identifying or evaluating the corridors. The next thing people might do is to have a public meeting and do the same thing. Better from a participation point-of-view, but still not a method you really want to explain to a judge. So what do you do to change this from nothing more than a guess to something you would want to explain to a judge?

    You do a VPS. You go out on the road and systematically take photos of the local landscape, all with the same lens settings, from the same place on the road, on the same day. You then pre-test this with some local folks to see if everyone agrees that the presentation is as uniform as possible. It takes a lot of film! You then set up a simple rating form, with the typical axis running from "this view is an important part of our local heritage" to "this view does not contribute," and you have everyone in the community who cares enough to be there rate the images.

    You do some statistics to see which results are most robust and which are shakiest. (In a VPS, I have found that some views will elicit almost complete consensus, while the average on others reflects not a neat bell curve but the balancing of extremes - statistics help you see that and explain it.) You present the results to the folks who participated for validation (or not), and all of a sudden you have taken an incoherent, disorganized body of public preferences and turned it into something you can talk about in a systematic, even quantitative way - something you can explain to a judge. You NEVER suggest or imply that the preferences are universal or representative of folks who didn't participate. But you are no longer in the realm of "just an opinion" or "just a preference." You are in the realm of a serious examination of the issue, using a systematic approach and meeting the test the courts apply (I find them a lot more stimulating than peer review), which is whether local decision makers had a damned good reason for acting as they did.

    If you don't want to call that a survey, I don't care. I'd be just as happy with VPA (A for activity) myself, but we planners have pretty much defined the term survey in that way, for this particular purpose.
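
    The statistics step is simple enough to sketch. Something like this (Python; the 1-5 scale and the thresholds are illustrative choices, not from any particular project) separates the consensus views from the ones where the average hides balancing extremes:

    Code:

        import statistics

        # Summarize one view's ratings from the rating form described above
        # (say 1 = "does not contribute" to 5 = "important to our heritage").
        # Low spread = near-consensus; a middling mean with high spread =
        # balanced extremes, not genuine moderate opinion.
        def summarize_view(ratings):
            mean = statistics.mean(ratings)
            sd = statistics.pstdev(ratings)
            if sd < 0.8:
                verdict = "consensus"
            elif 2.0 <= mean <= 4.0 and sd > 1.5:
                verdict = "polarized: the average hides balancing extremes"
            else:
                verdict = "mixed"
            return round(mean, 2), round(sd, 2), verdict

        print(summarize_view([5, 5, 4, 5, 5, 4]))        # near-consensus view
        print(summarize_view([1, 5, 1, 5, 2, 5, 1, 4]))  # polarized view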

    The point here is that there is more than one rigorous, systematic way of collecting useful information in the community. I have abandoned random sampling survey research as being passive in a world where citizens need to be proactive. It is also expensive to do it right, and consistent with my experience, citizens are more likely to support the results of a process they could participate in than those of a passive sampling process. I think they mostly DO know (partly based on bad experience with junk polls) that a poll with simple yes/no answers might be pretty accurate, but that surveys are not a reliable way to find out how people feel about complex questions.

    That's enough for now. We do not disagree about the importance of rigor. But you need to widen your view of how it can be accomplished in the context of the planning process.
