Reconstructing Animal Viewsheds in Habitat Using LiDAR

Many animals rely on visual information to interpret their surroundings, including detecting predators, locating prey, and identifying conspecifics. How far sightlines extend depends on habitat structure, including vegetation density, tree distribution, and topographic variation. Visibility can be defined as the spatial extent of all unobstructed sightlines originating from a given vantage point. This extent determines both the amount and direction of visual information available to an individual. Dense understory vegetation in forests can block low-height sightlines, while tall canopy layers can restrict upward visibility. Obstructions distributed across different distances and angles create heterogeneous patterns of occlusion. As a consequence, animals positioned at different heights and locations experience markedly different visual environments.


Traditional approaches to measuring visibility commonly use a fixed-size object known as a profile board, placed at a predetermined distance from the observer. Researchers photograph the board from the observation point and estimate the proportion of the board that remains unobstructed by vegetation. Measurements are typically conducted in a limited number of directions, such as the four cardinal directions. The resulting values are averaged to produce a single visibility index. This design captures visibility at only one distance and does not account for variation across other distances. The restriction to a few directions excludes oblique and lateral sightlines. In structurally heterogeneous environments, a single obstructed direction can disproportionately influence the overall estimate.
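
As a concrete illustration, the minimal sketch below computes a profile board visibility index from grid counts. The 10 × 10 grid and the four cardinal directions follow the design described later in this article, while the counts themselves are invented for the example.

```python
import numpy as np

def profile_board_index(visible_counts, grid_cells=100):
    """Mean proportion of unobstructed grid intersections across
    the photographed directions (counts here are hypothetical)."""
    proportions = np.asarray(visible_counts) / grid_cells
    return float(proportions.mean())

# Hypothetical counts of unobstructed intersections on a 10 x 10 grid,
# one photograph per cardinal direction (N, E, S, W).
counts = [82, 47, 63, 90]
print(profile_board_index(counts))  # single visibility index: 0.705
```

Note how a single heavily obstructed direction (here, 47 of 100 cells) pulls down the whole index, which is exactly the sensitivity to individual directions described above.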


To characterize the full visual environment, the concept of a viewshed has been introduced, incorporating all possible sightlines into the analysis. A viewshed consists of sightlines radiating from a vantage point across all directions and distances. Each sightline extends outward until it intersects an obstructing structure, forming a three-dimensional representation. This framework simultaneously captures horizontal and vertical components of visibility. It also represents how occlusion varies continuously with distance rather than reducing it to a single value. Accurate reconstruction of a viewshed requires detailed three-dimensional information about the surrounding environment. Traditional profile board methods cannot readily provide such comprehensive data.
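
Conceptually, building a viewshed starts with a set of sightline directions covering the full sphere around the vantage point. A minimal sketch, assuming a simple random sampling scheme rather than a systematic angular grid:

```python
import numpy as np

def sightline_directions(n=10_000, seed=0):
    """Approximately uniform unit vectors over the full sphere, one per
    simulated sightline (the sampling scheme here is illustrative)."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=(n, 3))  # isotropic Gaussian samples...
    return v / np.linalg.norm(v, axis=1, keepdims=True)  # ...normalized

dirs = sightline_directions()
print(dirs.shape)  # (10000, 3): rays radiating in every direction
```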


LiDAR (Light Detection and Ranging) technology addresses this limitation by emitting laser pulses and measuring their return time to generate three-dimensional point clouds. Each point corresponds to a location where the laser intersects a surface, providing spatial coordinates. These data enable reconstruction of fine-scale vegetation and terrain structure. Sightlines can be simulated by projecting rays from a vantage point through the point cloud. When a ray intersects a point, it is considered obstructed. This approach allows visibility to be quantified across all directions and distances.
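
A minimal ray-casting sketch of this idea follows. It treats a ray as obstructed when it passes within a small tolerance of any point in the cloud; the tolerance value and the brute-force search are illustrative assumptions, since practical tools typically use voxelized or tree-based data structures instead.

```python
import numpy as np

def first_obstruction(origin, direction, points, tol=0.05, max_dist=30.0):
    """Distance along one ray (unit `direction`) to the nearest cloud
    point lying within `tol` metres of the ray, or `max_dist` if the
    ray stays clear. Brute force for clarity only."""
    rel = points - origin            # vectors from the eye to each point
    t = rel @ direction              # distance of each point along the ray
    perp = np.linalg.norm(rel - np.outer(t, direction), axis=1)
    hits = t[(t > 0) & (t <= max_dist) & (perp < tol)]
    return float(hits.min()) if hits.size else max_dist

# Example: one ray pointing along +x through a toy two-point cloud.
cloud = np.array([[5.0, 0.01, 0.0], [12.0, 3.0, 1.0]])
print(first_obstruction(np.zeros(3), np.array([1.0, 0.0, 0.0]), cloud))  # 5.0
```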


The study applied a single-scan terrestrial laser scanning (ssTLS) approach, in which the scanner is positioned at a height corresponding to an animal's eye level. A single scan captures the three-dimensional structure surrounding that location. The resulting point cloud can be directly used to estimate viewsheds without requiring multiple scans or data merging. Using the viewshed3d package, sightlines are projected outward from the scanner position until they intersect an object or reach a defined distance. The output is a viewshed profile that describes the proportion of unobstructed sightlines as a function of distance.
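
The sketch below turns per-ray obstruction distances (as computed in the ray-casting sketch above) into such a profile and reduces it to one summary number. Taking the area under the visibility-distance curve as the viewshed coefficient is an assumption made for illustration; the precise definition used by viewshed3d and by Stein et al. may differ.

```python
import numpy as np

def viewshed_profile(hit_dists, max_dist=30.0, step=0.5):
    """Proportion of sightlines still unobstructed at each distance,
    given per-ray first-obstruction distances."""
    d = np.arange(step, max_dist + step, step)
    visible = (np.asarray(hit_dists)[None, :] >= d[:, None]).mean(axis=1)
    return d, visible

# Synthetic obstruction distances standing in for ray-casting output.
rng = np.random.default_rng(1)
hits = rng.exponential(scale=8.0, size=10_000).clip(max=30.0)

d, vis = viewshed_profile(hits)
# Assumed scalar summary: area under the visibility-distance curve
# (Riemann sum with 0.5 m bins); the paper's viewshed coefficient
# may be defined differently.
vc = float((vis * 0.5).sum())
print(round(vc, 1))
```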


Workflow from a single LiDAR scan to viewshed estimation. (A) Sparse understory vegetation with fewer obstructions to sightlines; (B) dense understory vegetation with frequent obstruction of sightlines. In the central panels, blue points represent LiDAR point cloud data (trees, vegetation, etc.), and white points indicate the endpoints of individual sightlines. In the right panels, the horizontal axis represents distance and the vertical axis represents the proportion of unobstructed sightlines. The curve shows that the probability of encountering an obstruction increases with distance. The viewshed coefficient (VC) represents the overall extent of visible space, with higher values indicating greater visibility. (Image source: Stein RM et al. 2026, CC BY 4.0)

Field measurements were conducted across 25 forest plots representing a gradient from dense understory vegetation to relatively open stands. Each plot had a radius of 4 meters to match the scale of traditional measurements. Observations were made at three heights—25 cm, 75 cm, and 155 cm—corresponding to different animal perspectives. LiDAR scans and profile board measurements were collected at the same locations. Profile boards were placed 4 meters from the plot center in the four cardinal directions. Photographs were overlaid with a 10 × 10 grid to calculate the proportion of unobstructed intersections.


(A) Traditional profile board method; (B) LiDAR-based measurement (full viewshed); (C) LiDAR-based measurement restricted to a subset of the viewshed corresponding to the same conditions as the profile board method. (Image source: Stein RM et al. 2026, CC BY 4.0)

Relationship between visibility and eye height estimated using the traditional profile board method (PB) and LiDAR-based measurements (ssTLS), based on data from 25 forest plots. (Image source: Stein RM et al. 2026, CC BY 4.0)

LiDAR-derived viewshed coefficients ranged from approximately 171 to 345 across plots, reflecting variation in vegetation structure. The viewshed coefficient increased consistently with observation height, indicating that higher vantage points provide greater visual access. These patterns were observed across all sampled plots. When LiDAR-derived visibility was restricted to the same spatial region as the profile board measurements, the two methods produced comparable results.


LiDAR-derived viewsheds can be segmented based on angular criteria to isolate specific components of the visual field. Sightlines can be divided into aerial and terrestrial components based on their angle relative to the vertical axis. In this study, sightlines with angles less than 45 degrees from the zenith were classified as aerial, while those greater than 45 degrees were classified as terrestrial. This separation allows analysis of visibility relevant to predators approaching from above or along the ground. Viewsheds can also be segmented azimuthally, such as restricting analysis to a forward-facing field of view of 120 degrees. Wider angular ranges, such as 300 degrees, can approximate the visual field of species with laterally positioned eyes.
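
These segmentations amount to filtering sightlines by their zenith and azimuth angles. A minimal sketch, using the study's 45-degree aerial/terrestrial threshold and an assumed heading for the forward-facing field of view:

```python
import numpy as np

def segment_sightlines(dirs, heading_deg=0.0, fov_deg=120.0):
    """Split unit ray directions (x, y, z) into the angular classes
    described above; heading and field of view are assumed values."""
    zenith = np.degrees(np.arccos(np.clip(dirs[:, 2], -1.0, 1.0)))
    aerial = zenith < 45.0                  # within 45 deg of the zenith
    terrestrial = ~aerial                   # everything below that cone
    azimuth = np.degrees(np.arctan2(dirs[:, 1], dirs[:, 0]))
    rel = (azimuth - heading_deg + 180.0) % 360.0 - 180.0
    forward = np.abs(rel) <= fov_deg / 2.0  # e.g. a 120-deg forward view
    return aerial, terrestrial, forward

# A wider lateral field, e.g. for laterally positioned eyes, is the
# same computation with fov_deg=300.
```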


A single LiDAR-derived viewshed can be partitioned into different functional components: (A) full viewshed; (B) aerial viewshed; (C) terrestrial viewshed; (D) forward-facing field of view; (E) lateral field of view. (Image source: Stein RM et al. 2026, CC BY 4.0)

This three-dimensional viewshed approach quantifies how visibility varies across directions and distances within a habitat. The resulting viewshed represents the spatial extent of potentially visible areas rather than actual perception. Visual acuity and sensory processing differ among species and are not incorporated into the LiDAR-derived estimates. The method captures how environmental structure constrains sightlines, allowing comparisons of visibility across habitats under consistent spatial definitions.


Author: Shui-Ye You


Reference:

Stein RM et al. (2026). Beyond discrete visibility estimates: single-scan LiDAR provides an efficient method for 3D viewshed estimation. Wildlife Society Bulletin.



