30 July 2023

M4 Lab: Coastal Flooding

In this lab, we worked with data that had already been prepared for us. Working with maps and data can feel more personal when the results hit close to home. Living along the Gulf Coast during hurricane season makes a weather forecast much more important. Having been through a few hurricanes, there is usually an anxiousness that comes with the build-up over which location will be hit hardest, along with the stubbornness of riding it out no matter the storm's strength. This was a very tough lab: there were many steps to figure out on our own, and more than one way to achieve the results. To produce these maps we used several geoprocessing tools, including Raster to Polygon, LAS Dataset to TIN, TIN to Raster, Raster Calculator, Reclassify, and Project Raster.
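The surge map itself boils down to a short chain of those tools. The snippet below is a rough ArcPy sketch of how that chain could look; the file names, workspace path, and cell size are placeholders I made up, not the lab's actual data.

```python
import arcpy
from arcpy.sa import Raster, Con

arcpy.CheckOutExtension("3D")
arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\GIS\coastal_flooding.gdb"  # assumed workspace

# Build a terrain surface from the LiDAR (assumes the .lasd is filtered to
# ground returns), then rasterize it
arcpy.ddd.LasDatasetToTin("lidar.lasd", "ground_tin", "WINDOW_SIZE", "MIN", 5)
arcpy.ddd.TinRaster("ground_tin", "dem_m", "FLOAT", "LINEAR", "CELLSIZE", 1, 5)

# Raster Calculator logic: flag cells at or below the 1 m surge level
flooded = Con(Raster("dem_m") <= 1.0, 1)
flooded.save("flood_1m")

# Convert the flood zone to polygons for mapping
arcpy.conversion.RasterToPolygon("flood_1m", "flood_1m_poly", "SIMPLIFY", "Value")
```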

The storm surge in this exercise was set at one meter, which is a realistic assumption. We used meters for the analysis, although in the US the information would typically be reported in feet; a surge of 3.28 feet is significant and might be reserved for a more intense storm. Because high tide, waves, wind, and any previous rainfall accumulation can all affect inland flooding, it is better to model hypothetical surges of varying heights when predicting the potential of a severe storm surge. Some coastal areas might be affected more than others simply because they are so heavily populated. Storm surge is only one of many factors that need to be considered during a storm, but being able to predict it through GIS analysis benefits communities. A more realistic approach would include surge levels of lesser and greater heights, giving a range of mapped predictions for dangerous surge levels (see the short loop below). For emergency services in city, county, and state management, it can be imperative to know the potential of a disastrous storm surge.
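Building on the sketch above, the same Con logic could simply be looped over several hypothetical surge heights to produce that range of predictions; the heights and output names here are assumptions, not values from the lab.

```python
import arcpy
from arcpy.sa import Raster, Con

arcpy.CheckOutExtension("Spatial")

# Hypothetical surge levels in meters
for surge_m in (0.5, 1.0, 1.5, 2.0):
    flooded = Con(Raster("dem_m") <= surge_m, 1)
    flooded.save(f"flood_{str(surge_m).replace('.', '_')}m")
```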








22 July 2023

M3 Lab Visibility Analysis

This week’s lab offered an opportunity to work with Esri modules. Esri offers many options to learn more about GIS and enhance one’s GIS skillset. We were tasked with completing four Esri exercises using ArcGIS Pro: Introduction to 3D Visualization, Performing Line of Sight Analysis, Performing Viewshed Analysis in ArcGIS Pro, and Sharing 3D Content Using Scene Layer Packages. Going through these exercises was not unlike a weekly UWF GIS module: each one walked us through the steps individually, showing examples of the results often enough to ensure we were on the right path in ArcGIS Pro.

Introduction to 3D Visualization

Before beginning, the 3D Analyst and Spatial Analyst licenses need to be enabled to complete these exercises, or any GIS work involving 3D and spatial analysis (a quick license check is sketched below). Working with 3D data opens up many opportunities to transform points, lines, polygons, point clouds, rasters, and other data types into 3D visualizations, offering different perspectives of our environment. Conducting 3D analysis offers real-world insight into features and a better understanding of them by taking the data beyond the 2D map.
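A minimal way to confirm and check out both extensions in ArcPy, assuming they are part of the license, is something like this:

```python
import arcpy

# Check out the 3D Analyst and Spatial Analyst extensions if they are licensed
for ext in ("3D", "Spatial"):
    if arcpy.CheckExtension(ext) == "Available":
        arcpy.CheckOutExtension(ext)
    else:
        raise RuntimeError(f"{ext} Analyst extension is not available")
```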

Performing Line of Sight Analysis

Line of sight calculates the intervisibility between an observer (the first vertex) and a target (the last vertex) along a straight line. It determines the visibility along each sight line over obstructions formed by a surface and an optional multipatch dataset. The Construct Sight Lines (3D Analyst) geoprocessing tool was used to build the lines of sight from two observer points. In the output, each sight line is displayed in green until it hits an obstruction, and in red from that point onward because the view is obstructed.
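The two steps could look roughly like this in ArcPy; the layer names are placeholders, and whether the exercise used these exact parameters is an assumption on my part.

```python
import arcpy

arcpy.CheckOutExtension("3D")

# Build straight lines from each observer point to each target point
arcpy.ddd.ConstructSightLines("observers", "targets", "sight_lines")

# Test each line against the surface; the output splits every line into
# visible and obstructed segments (VisCode attribute) and can also write
# the first obstruction point along each line
arcpy.ddd.LineOfSight("elevation_surface", "sight_lines", "los_results",
                      "obstruction_points")
```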

Performing Viewshed Analysis in ArcGIS Pro

The Viewshed Analysis tool determines the surface locations that are visible to a set of observer features. After all of the potentially visible areas have been determined, the results are displayed in a 3D scene where they can be evaluated.
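A bare-bones version of that calculation with the Spatial Analyst Viewshed function might look like the following; the raster and observer layer names, z-factor, and curvature option are assumptions.

```python
import arcpy
from arcpy.sa import Viewshed

arcpy.CheckOutExtension("Spatial")

# Each output cell stores how many observer points can see that location
visible = Viewshed("elevation_surface", "observer_points", 1, "CURVED_EARTH")
visible.save("viewshed_result")
```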

Sharing 3D Content Using Scene Layer Package

ArcGIS Pro uses the ArcGIS sharing model, in which content is shared by a member of an organization, in this case a student at UWF. For the member (student) to share content, they must have the following: ArcGIS Pro software, an organizational role with sharing privileges, and permission to share with the users who will access the content. The content can be shared with your organization, with the public (everyone), with a group, or kept private.

In the 3D content sharing workflow, the first decision is how to present the data. The data is loaded, then one chooses either a local scene or a global scene, defines an area of interest, adds surface information, and converts the data from 2D to 3D. The final tool is Create Scene Layer Package, which produces an .slpk file containing everything needed for the shared map display.


16 July 2023

M2 Forestry and LiDAR

This week’s lab involved working with a raster dataset of Shenandoah National Park, Virginia. The data files are quite large, so there were issues loading the maps with all of the finished data. We used several geoprocessing tools to produce the final map results: canopy density, a digital elevation model (DEM), a digital surface model (DSM), and the LiDAR file. LiDAR (Light Detection and Ranging) allows more precise information about the Earth’s surface to be recorded.

In this lab, we calculated the forest height and the biomass density of the vegetation compared to the ground. Below are the tools we used in this lab.
Tools used for forest height (a sketch of this step follows the list):
*Point File Information = summarizes the contents of the LAS files, returning the minimum bounding rectangle, number of points, average point spacing, and min/max z-values.
*LAS Dataset to Raster = creates a DEM or DSM from ground or non-ground points.
*Minus = subtracts the DEM from the DSM to produce a tree height estimate.
Tools used for calculating biomass density (sketched after the density discussion below):
*LAS to Multipoint = creates multipoint features from the LiDAR files; the results are vegetation and ground points. (Each tool beyond this is run twice, once for ground and once for vegetation.)
*Point to Raster = converts point features to a raster dataset; with the COUNT cell assignment it records the number of returns in each cell.
*Is Null = creates a binary raster where 1 is assigned to NoData cells and 0 to cells that hold a value.
*Con = performs a conditional if/else evaluation on each input cell of a raster.
*Plus = adds two rasters cell by cell; here it sums the vegetation and ground return counts.
*Float = converts each cell value of a raster to floating point.
*Divide = divides the values of two rasters on a cell-by-cell basis.
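Here is a rough ArcPy sketch of the forest height step; the LAS dataset name, cell size, class codes, and binning choices are my own assumptions, not the lab's exact settings.

```python
import arcpy
from arcpy.sa import Minus

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\GIS\forestry.gdb"  # assumed workspace

# Separate ground returns (class 2) from the full point cloud
arcpy.management.MakeLasDatasetLayer("shenandoah.lasd", "ground_lyr", class_code=[2])

# Interpolate a bare-earth DEM from the ground layer and a DSM from all points
arcpy.conversion.LasDatasetToRaster("ground_lyr", "dem", "ELEVATION",
                                    "BINNING AVERAGE LINEAR", "FLOAT", "CELLSIZE", 2, 1)
arcpy.conversion.LasDatasetToRaster("shenandoah.lasd", "dsm", "ELEVATION",
                                    "BINNING MAXIMUM LINEAR", "FLOAT", "CELLSIZE", 2, 1)

# Tree (canopy) height estimate = surface height minus ground height
height = Minus("dsm", "dem")
height.save("forest_height")
```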

The initial LiDAR file had to be run through each of the aforementioned tools to produce the canopy density. The final canopy density map shows values ranging from 0 to 1, where 0 represents the lowest density and 1 the highest density of vegetation.
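The ratio behind those 0-to-1 values could be computed along these lines; again, the multipoint names, cell size, and field choices are assumptions rather than the lab's exact inputs.

```python
import arcpy
from arcpy.sa import IsNull, Con, Plus, Float, Divide

arcpy.CheckOutExtension("Spatial")

# Count LiDAR returns per cell for the vegetation and ground multipoints
arcpy.conversion.PointToRaster("veg_multipoint", "OBJECTID", "veg_count", "COUNT", "NONE", 20)
arcpy.conversion.PointToRaster("ground_multipoint", "OBJECTID", "ground_count", "COUNT", "NONE", 20)

# Replace NoData cells (no returns) with 0 so the math works everywhere
veg = Con(IsNull("veg_count"), 0, "veg_count")
ground = Con(IsNull("ground_count"), 0, "ground_count")

# Canopy density = vegetation returns / all returns, as a 0-1 ratio
total = Plus(veg, ground)
density = Con(total > 0, Divide(Float(veg), Float(total)), 0)
density.save("canopy_density")
```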

The information in the density map portrays the difference between the vegetation and ground points. This can be helpful to foresters for conservation planning in tree harvesting, terrain assessment for erosion control based on ground cover, identification and care of trees, and protecting forests as a whole. The density map helps foresters maintain forests more efficiently. Remote sensing with LiDAR offers great detail, providing foresters with far more detailed maps than the usual topographic map.

Final Density map



Shenandoah National Park


09 July 2023

M1 Crime Analysis

For this week’s lab assignment, we performed crime analysis for the Washington, D.C., and Chicago areas. In the first part of the lab, we created choropleth maps using data already provided to us; performing spatial joins between feature classes gave us the areas with high burglary rates in D.C. The latter part of the lab consisted of three parts, each using a different method of identifying and visualizing hotspots in the city of Chicago. Here we compared three techniques for analyzing crime data: Grid-Based overlay, Kernel Density, and Local Moran’s I. To reach the final output for each, we had to perform spatial joins, select by attributes, create layers from selections, and carry out other geoprocessing steps.

After setting up the environment for Chicago, I performed several geoprocessing operations to get the number of homicides occurring in specific areas. The first analysis involved adding the clipped grid cells and the 2017 homicide layer and then spatially joining them. The grid-based analysis is simple: the results rest on a grid overlay of 0.5-mile cells clipped to the Chicago boundary. To find the areas with the highest homicide counts, I used Select By Attributes to pull cells with counts greater than zero and manually flagged the top 20 percent, a top quintile of 62 cells. I added a field for these, exported the layer, and used the Dissolve tool to merge them into a single smooth area.
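In ArcPy terms, the sequence would look something like this; the layer names and the selection field are assumptions, and the top-quintile flagging was done manually in the lab rather than scripted.

```python
import arcpy

# Count homicides per half-mile grid cell with a spatial join
arcpy.analysis.SpatialJoin("grid_cells", "homicides_2017", "grid_homicides",
                           "JOIN_ONE_TO_ONE", "KEEP_ALL", match_option="INTERSECT")

# Keep only cells with at least one homicide
arcpy.management.MakeFeatureLayer("grid_homicides", "grid_lyr")
arcpy.management.SelectLayerByAttribute("grid_lyr", "NEW_SELECTION", "Join_Count > 0")

# After manually flagging and exporting the top 20% (62 cells),
# merge them into one hotspot footprint
arcpy.management.Dissolve("top_quintile_cells", "grid_hotspot")
```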

The Kernel Density analysis spreads each point’s influence outward over a search radius, with the value highest at the point and decreasing with distance. This analysis covers a larger area and accounts for outliers. The Kernel Density toolset was used for this part, with the same data as the grid-based analysis. The tool parameters included the input features and the search distance, and, because this analysis works with rasters, a cell size and an output raster also had to be designated. After reclassifying the data into two classes using a break value calculated as three times the mean, and converting the raster to a polygon, I selected the features with a value of two to get the final output. The output numbers were different from the grid-based results.
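A condensed version of those steps could look like this; the cell size, search radius, and layer names are placeholders, while the three-times-the-mean break comes from the lab write-up.

```python
import arcpy
from arcpy.sa import KernelDensity, Reclassify, RemapRange

arcpy.CheckOutExtension("Spatial")

# Density surface from the homicide points (units depend on the projection)
dens = KernelDensity("homicides_2017", "NONE", 100, 2640)
threshold = 3 * dens.mean

# Two classes: below three times the mean (1) and at or above it (2)
reclass = Reclassify(dens, "VALUE", RemapRange([[0, threshold, 1],
                                                [threshold, dens.maximum, 2]]))
arcpy.conversion.RasterToPolygon(reclass, "kd_zones", "SIMPLIFY", "Value")

# Keep only the high-density polygons (value 2) as the final hotspot layer
arcpy.management.MakeFeatureLayer("kd_zones", "kd_lyr", "gridcode = 2")
```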

Local Moran’s I identifies statistically significant clusters and outliers in the data. Using the same data for this analysis, the output numbers were different from the previous two. Local Moran’s I uses weighted features to get its results; the input field here was the number of homicides per 1,000 housing units for the Chicago area. The geoprocessing tool is Cluster and Outlier Analysis (Anselin Local Moran’s I), which calculates the spatial statistics for the homicide counts. The output of the analysis gave four classifications for homicides per housing unit, and only the high-high cluster result was used as the input for the Dissolve tool.
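That step might be scripted roughly as follows; the feature class, rate field name, and neighborhood settings are assumptions, while the high-high (HH) selection and Dissolve mirror the lab description.

```python
import arcpy

# Anselin Local Moran's I on the homicide rate per 1,000 housing units
arcpy.stats.ClustersOutliers("chicago_tracts", "Hom_per_1000_HU",
                             "morans_output", "INVERSE_DISTANCE",
                             "EUCLIDEAN_DISTANCE", "ROW")

# Keep only the statistically significant High-High clusters, then dissolve
arcpy.management.MakeFeatureLayer("morans_output", "hh_lyr", "COType = 'HH'")
arcpy.management.Dissolve("hh_lyr", "morans_hotspot")
```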

The differences between these methods are noted in the table below with the numbers each one calculated. Which is better? I think Kernel Density, because it searches outward from the point with the highest value. The calculated result is a magnitude per unit area for each cell within the study area, so measuring per cell from a specified point of high value should give more accurate results.

Local Moran's I map

Kernel Density map

Grid-Based Analysis map

