By comparing line-of-sight, DEM-based viewsheds with a manual viewshed, the amount of error associated with each type of DEM becomes evident. I compared a manual viewshed against a 30 meter DEM from the Shuttle Radar Topography Mission (SRTM), a 30 meter DEM from the National Elevation Dataset (NED), a 10 meter DEM from the NED, and a 3 meter DSM created from a 0.7 meter lidar point cloud. The same location and elevation were used in all of the viewsheds; the manual viewshed was created using Google Street View and verified by physically going to the location.
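A DEM-based viewshed is built from repeated line-of-sight tests: a cell is visible only if no terrain between it and the observer rises above the sight line. A minimal sketch of that test over a one-dimensional elevation profile, assuming elevations sampled at each DEM cell along the ray (the function and parameter names are my own illustration, not how Military Analyst implements it):

```python
def line_of_sight(profile, observer_height=1.7):
    """Given elevations along a ray from the observer (profile[0] is the
    observer's ground cell), return a boolean per cell marking visibility.
    A cell is visible if the slope from the observer's eye to that cell
    is at least as steep as the steepest slope seen so far."""
    eye = profile[0] + observer_height          # observer eye elevation
    visible = [True]                            # the observer's own cell
    max_slope = float("-inf")
    for dist, elev in enumerate(profile[1:], start=1):
        slope = (elev - eye) / dist             # rise over run, in cell units
        visible.append(slope >= max_slope)      # hidden if nearer terrain is steeper
        max_slope = max(max_slope, slope)
    return visible

# A ridge at distance 2 hides the lower ground behind it:
print(line_of_sight([100, 100, 120, 100, 100]))
# → [True, True, True, False, False]
```

A full viewshed repeats this test along rays to every cell within the analysis range, which is why the quality of the elevation surface (DTM vs. DSM, 30 m vs. 3 m) dominates the result.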
To create the viewsheds I used Military Analyst for ArcGIS, which allowed the resulting viewshed to be limited by a range and produced output in vector format instead of raster format. To calculate differences between the viewsheds, the vector data was converted to raster format using the same cell size and clipped to the same area to ensure each viewshed had the same number of pixels.
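The key requirement of this conversion is that every viewshed polygon gets burned onto an identical grid, so the rasters line up pixel for pixel. A minimal sketch of that idea, assuming a simple polygon and an even-odd point-in-polygon test at each cell center (the names are hypothetical; real workflows would use an ArcGIS or GDAL conversion tool):

```python
def rasterize(polygon, xmin, ymin, cell, ncols, nrows):
    """Burn a polygon (list of (x, y) vertices) onto a grid, returning a
    row-major list of 0/1 values. All viewsheds rasterized with the same
    origin, cell size, and shape will line up pixel for pixel."""
    def inside(px, py):
        # even-odd ray casting: count edge crossings to the right of the point
        hit = False
        n = len(polygon)
        for i in range(n):
            x1, y1 = polygon[i]
            x2, y2 = polygon[(i + 1) % n]
            if (y1 > py) != (y2 > py):
                xcross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
                if px < xcross:
                    hit = not hit
        return hit

    grid = []
    for r in range(nrows):
        for c in range(ncols):
            cx = xmin + (c + 0.5) * cell   # test at cell centers
            cy = ymin + (r + 0.5) * cell
            grid.append(1 if inside(cx, cy) else 0)
    return grid

# 4x4 grid of unit cells; a 2x2 square covers four cell centers:
square = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(sum(rasterize(square, 0, 0, 1.0, 4, 4)))  # → 4
```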
To quantify these findings I compared the pixels of the manual viewshed against those of each DEM viewshed. I counted the pixels where the manual viewshed was visible and the DEM viewshed was non-visible (V/NV), and repeated this for visible/visible (V/V), non-visible/non-visible (NV/NV), and non-visible/visible (NV/V). This method gave both the total number of pixels in each category and the overall percentage of matching pixels between viewsheds.
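The four-way count above is essentially a confusion matrix over the two rasters. It can be sketched as follows, assuming each viewshed is flattened to an equal-length sequence of 1 (visible) and 0 (non-visible) pixels; the function name and sample data are illustrative only:

```python
def compare_viewsheds(manual, dem):
    """Count agreement between two equal-length 0/1 rasters (manual
    viewshed first). Returns the four category counts and the
    percentage of matching pixels."""
    assert len(manual) == len(dem), "rasters must cover the same pixels"
    counts = {"V/V": 0, "V/NV": 0, "NV/V": 0, "NV/NV": 0}
    for m, d in zip(manual, dem):
        key = ("V" if m else "NV") + "/" + ("V" if d else "NV")
        counts[key] += 1
    # agreement = pixels where both rasters say the same thing
    agreement = 100.0 * (counts["V/V"] + counts["NV/NV"]) / len(manual)
    return counts, agreement

manual = [1, 1, 0, 0, 1, 0, 1, 0]   # toy data, not real results
dem    = [1, 0, 0, 1, 1, 0, 1, 1]
counts, pct = compare_viewsheds(manual, dem)
print(counts, pct)
```

The V/NV count is the most telling for this study: it is where the bare-earth model claims visibility that the field check disproves.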
These findings are far from complete, but they are a step towards understanding the strengths and weaknesses of viewshed creation. Further work would be to accurately account for the manual viewshed not taking the tops of trees and buildings into consideration, and to clean up the lidar DSM by removing non-view-obstructing artifacts such as power lines, which show up as fence-like objects.
I validated the Google Street View based viewshed and found the viewsheds were correct to about 350-425 meters. Beyond that point the images in Google Street View became too pixelated to reliably determine where the view terminated. In the left image, the red area highlights what I could not see in the Google imagery but could see in person.
In the example to the right I could see to about 425 meters to the north and south; beyond that point the images became too blurry. When I went to the location, the terrain was very flat and I was able to see approximately 2,000 meters to the south and make out the highway underpass and overpass with the aid of ten-power binoculars. Looking to the north, the view probably extended at least 2,000 meters but became too hazy to tell where it terminated.
It should be noted that the distance limitation attributed to the Google Street View images may be too conservative, but it was the furthest I felt comfortable extending the viewshed given the pixelation and discernible features in the imagery. Also, for the purpose of these viewsheds I am most concerned with being able to discern person- or vehicle-sized objects on the ground, so I ignored whether I could see the tops of trees or buildings.
Most viewsheds utilize digital terrain models (DTMs) as the basis for what can and cannot be seen from any one location. The term DTM is usually synonymous with digital elevation model (DEM) and describes the bare earth of an area. I have heard these viewsheds described as “showing the best possible circumstances.” However, because the DTM does not take vegetation or man-made structures into account, the results can lead viewers to believe more is visible than actually is.
If possible, a digital surface model (DSM) should be used, which shows the elevations of the top surfaces of buildings, vegetation, man-made structures, and any other objects elevated above the bare earth. However, DSMs are usually collected with specialized equipment such as light detection and ranging (lidar) systems or interferometric synthetic aperture radar (IFSAR) systems.
If these options are not available to a user, manual viewsheds can be created either by going out to the field or by examining panoramic photos to see what is visible. This information is then saved to a GIS for later use. The purpose of the manual viewshed is to find the best locations to place potential sensors, collect the capabilities of existing sensors, verify the quality of viewsheds created by other means, and/or validate conceptual models.
By using Google Street View I was able to create a manual viewshed. While viewing the panoramic imagery I switched back and forth between Google Earth and ArcGIS, which used the 2008 NAIP imagery as a base layer, to assist in digitizing the viewshed. Because Google Earth does not give the dates of its imagery, it is beneficial to view the location with multiple sources of imagery, such as local.live.com's bird's eye view, in case the panoramic imagery and the GIS base layers have significant differences.
Google Earth has a layer that consists of YouTube videos. I realize this is old news, but I have not had an opportunity to play with this layer until recently. I was really excited when I first started using it, as it has the potential to let somebody get a feel for an area. Services such as Flickr and Panoramio already provide this sense of immersion, but video takes it one step further. Not only could someone see a particular hill, they could watch how fast someone could traverse the terrain. Not only could someone see what people look like in their local environment, they could see how they act, their manner of speech, their accent, lingo particular to the area, and other local characteristics. What a good example of human geography.
I tested a few geographic areas that I am somewhat familiar with, and unfortunately most of the YouTube videos are incorrectly located. The following picture depicts a raceway, Six Flags, some concerts, people interacting in different settings, and a few others. However, the geography of the place is ranchland with an arroyo (ravine) running nearby. There are no structures and there are no people.
I looked at the YouTube site for information on how they geotag their videos, and the only thing I could find was that users supply the coordinates. This seems rather odd for places that have multiple geotagged videos that are obviously in the wrong location. Perhaps they use a combination of user-geotagged videos and some automated geotagging algorithm for videos that are left blank; that is pure conjecture based on the results I saw in a few select areas.
I looked at other areas, and these seem to have videos consistent with what I would expect for a given area, along with some videos inconsistent with the type of environment. My limited experience with incorrectly geotagged videos gives me some reservations about accepting all results at face value.