20 November 2023

Module 5: Unsupervised & Supervised Classification

In the first exercise, we worked with imagery of the UWF campus: a high-resolution aerial photograph delivered as a MrSID (Multiresolution Seamless Image Database) file. In ERDAS, I completed an unsupervised classification of the UWF image. After loading the image with the correct parameters, the setup details under the Raster tab mattered: name the input and output rasters, leave Output Signature unchecked, and set the Number of Classes to 50. I also changed the band combination to R3, G2, B1. Setting the skip factors to 2 sped up processing because only every other pixel is analyzed. Once the analysis finished, I reclassified the 50 classes in the attribute table, changing the colors and establishing the categories each color represents. After the reclass, I recoded the Class Names, which is, in my words, an extension of the reclass process; it establishes the connection between the classes and the final product. In Recode, the values entered combine the reclass values into a single value, e.g., classes 1-4 for Grass all become 1. Once the classes were merged, analysis was easier, and I calculated the percentage difference between permeable and impermeable surfaces.
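As a rough illustration of that last calculation, here is a minimal Python sketch of the permeable versus impermeable percentage step; the class names and pixel counts are hypothetical placeholders, not the values from my UWF classification.

# Hypothetical recoded class counts (placeholders, not the lab's actual values).
pixel_counts = {
    "Grass": 120000,         # permeable
    "Trees": 90000,          # permeable
    "Buildings": 60000,      # impermeable
    "Roads/Parking": 80000,  # impermeable
}
permeable = pixel_counts["Grass"] + pixel_counts["Trees"]
impermeable = pixel_counts["Buildings"] + pixel_counts["Roads/Parking"]
total = permeable + impermeable
print(f"Permeable:   {100 * permeable / total:.1f}%")
print(f"Impermeable: {100 * impermeable / total:.1f}%")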

In the second exercise, we used imagery of Grays Harbor, Washington, to collect signatures for supervised classification. With the image loaded and the Signature Editor open, one approach is to use the Drawing tool to draw polygons around areas at given coordinates; this is one way of gathering data for an Area of Interest (AOI). The other approach is to create signatures with the AOI Seed tool by growing a region around an area where the land cover is known. Two parameters used with this tool are Spectral Euclidean Distance and Neighborhood. The process is similar to the previous method, but here I placed the seed from the Inquire (Legacy) box, setting it At Inquire and entering a distance value of 11 for the Spectral Euclidean Distance (the value can range anywhere from 0 to 255 for pixel values). I captured the areas of interest as polygons and saved them, so they are ready for analysis.

Final map


14 November 2023

Module 4 Lab: Spatial Enhancement, Multispectral Data, and Band Indices

 

Image with darker pixel values

Image with higher pixel values

Image with different levels of reflectivity

In this week's lab, we were asked to perform exercises and tasks to increase our understanding of spatial enhancement, multispectral data, and band indices, working in both ERDAS Imagine and ArcGIS Pro. In the first part of the lab, we analyzed images using different filters: high pass, low pass, and sharpen. High pass filters are useful for finding edges; using the Range statistic produces an edge-detect effect. The low pass filter generalizes the image, and running it on an already filtered image smooths it further. The sharpen filter is similar to the high pass filter but only slightly sharpens details. In the last part of the lab, we examined histograms to locate three areas in an image based on pixel values; the grouping of pixel values toward one end of the histogram or the other was how we determined these locations. One area had low pixel values, showing darker colors; another showed lighter colors; and the final one showed different levels of reflectivity of a single color.
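For anyone curious what those filters do numerically, here is a small NumPy/SciPy sketch of a 3x3 low pass (mean) and high pass kernel applied to a single image band; the kernels are standard textbook examples, not necessarily the exact ones ERDAS uses.

import numpy as np
from scipy import ndimage

band = np.random.rand(200, 200)  # placeholder for a real image band

low_pass = np.full((3, 3), 1.0 / 9.0)        # mean filter: smooths/generalizes
high_pass = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]], float)  # emphasizes edges

smoothed = ndimage.convolve(band, low_pass, mode="nearest")
edges = ndimage.convolve(band, high_pass, mode="nearest")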



07 November 2023

Module 3a and 3b Lab: Intro to ERDAS Imagine

This week's lab introduced us to ERDAS Imagine, raster-based software that provides tools for extracting information from images and lets you change the band combination of an image to study it in more depth. It was a fun lab; the biggest challenges were going "by the numbers" through the exercise steps and working with the formulas for the frequency, wavelength, and energy of photons. We added an image provided to us and manipulated it to produce an output to bring into ArcGIS. Once we added a random selection from the image to ArcGIS, we calculated the hectares of an area and created a map layout.
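As a quick check of those photon formulas (frequency = c / wavelength, energy = h x frequency), here is a short Python sketch using standard physical constants; the 550 nm wavelength is just an example value.

c = 2.998e8    # speed of light, m/s
h = 6.626e-34  # Planck's constant, J*s

wavelength = 550e-9          # example: 550 nm (green light)
frequency = c / wavelength   # Hz
energy = h * frequency       # joules
print(f"{frequency:.3e} Hz, {energy:.3e} J")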

Image from ERDAS loaded into ArcGIS

The intention of the lab was to familiarize ourselves with ERDAS; after all, this is Remote Sensing. The software is somewhat similar to ArcGIS in that there are tools, content panes, ways of adding data, and ways of creating output data to export. One difference is how ERDAS handles raster images, allowing different band combinations to be changed in the display. ERDAS is easy to work with as long as you are patient and have a fast computer when working on the server.


31 October 2023

Module 2a Lab: Land Use / Land Cover Classification and Module 2b Lab: Ground Truthing and Accuracy Assessment

 Module 2a Lab: Land Use / Land Cover Classification 


For this week's lab, we were introduced to Land Use / Land Cover Classification and Ground Truthing and Accuracy Assessment. The task was to create a map classifying land in Pascagoula, MS, and afterward to ground truth the points. The area has an assortment of land types and makes for good classification practice; it is not hard, but it is detail-oriented and a lot of fun. In the first part of the lab, I created and added a new polygon feature class called LULC. In the new feature class, I created polygons for each of the classifications in Pascagoula. After creating the polygons, labels were added and the symbology was changed to unique values. Overall, I created eight level-one codes with level-two classifications; see the table below.




Module 2b Lab: Ground Truthing and Accuracy Assessment


The second part of the lab involved creating 30 points and then ground truthing those locations in Google Earth (the lab suggested Google Maps, but I used Google Earth). After creating a new feature class for ground truthing, I used the Create Random Points tool to generate 30 points, then used the Coordinate Conversion tool to export the random point coordinates as a KMZ file. Once the file was exported, I loaded it into Google Earth and searched each coordinate to "truth" the points against the random points in ArcGIS. Of the 30 points, 6 were classified incorrectly when checked in Google Earth, giving an accuracy assessment of 80%: 24 points with a "YES" correlation and 6 with a "NO" correlation.
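The accuracy figure is just the share of agreeing points; a one-liner in Python reproduces it from the counts above.

correct, total = 24, 30
print(f"Overall accuracy: {100 * correct / total:.0f}%")  # 80%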


The map has an underlying image I used to create polygons for each of the classifications. The codes are based on USGS Level II. More could have been classified in the image, but due to time constraints, these were the classes that summed up the area of Pascagoula quite well.



18 October 2023

Module 1 Lab: Visual Interpretation

The objective of exercise 1 was to identify Tone and Texture in an image. I created polygons that identified areas of tone ranging from very light to very dark. For texture, I created polygons for areas that ranged from very fine to very coarse. The objective was accomplished. Even though the image is older there are still many identifiable features representing both of these remote sensing characteristics.


The objective of exercise 2 was to identify features in the image using shape/size, shadow, pattern, and association. The features were created as points within each of the feature classes added to the image. I was able to accomplish the objective. This task was interesting in that it made me take a deeper look into the image. It was not easy to find something relevant to each of the criteria, but I succeeded.




14 October 2023

Lab 6 Topic 3 Scale Effect and Spatial Data Aggregation

 Part 1b Scale Effects on Vector and Raster Data

This week's lab looked at the effect of scale and resolution on vector and raster data. Another part of the lab analyzed boundaries with the Modifiable Areal Unit Problem (MAUP), which involved looking at gerrymandering in U.S. Congressional Districts.

For the vector data, the scales were 1:1200, 1:24,000, and 1:100,000. Because maps come at different scales, extra emphasis should be placed on maintaining spatial accuracy as much as possible. Understanding the effect of scale on vector data differs from observing the effect of resolution on raster data.

In the first part of the lab, we used the Clip tool on the hydrography datasets with the county as the clip feature. After clipping all the data to the county, we added fields and calculated geometry to get length, area, and total count.
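For reference, a hedged arcpy sketch of that clip-and-measure step might look like the following; the workspace, dataset, and field names are placeholders rather than the actual lab files.

import arcpy

arcpy.env.workspace = r"C:\GIS\Lab6\lab6.gdb"  # assumed workspace

# Clip the hydrography to the county, then add and populate a length field.
arcpy.analysis.Clip("hydro_24k", "county_boundary", "hydro_24k_clip")
arcpy.management.AddField("hydro_24k_clip", "Length_km", "DOUBLE")
arcpy.management.CalculateGeometryAttributes(
    "hydro_24k_clip", [["Length_km", "LENGTH"]], length_unit="KILOMETERS")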

As resolution decreases, accuracy and detail diminish. Scale expresses the amount of detail for vector data; the hydrographic features here are polylines, which are vector data. The large-scale map shows more detail and the small-scale map shows less, which illustrates how scale affects these hydrography data.

Map Scale 1:1500 - scale and resolution effects



Map Scale 1:20,000

Part 2b Gerrymandering

The Merriam-Webster Dictionary defines gerrymandering as "dividing or arranging a territorial unit into election districts in a way that gives one political party an unfair advantage in elections." The practice dates back to the early 1800s, when it got its name, though it was known before then. The Modifiable Areal Unit Problem (MAUP) is an issue of boundaries and scale in spatial analysis; it highlights how the way areas are delineated can create bias within voting areas, i.e., congressional districts. In this final part of the lab, the feature class covered the continental U.S. I used the Dissolve tool to merge the districts, which let me find the number of polygons each Congressional District (CD) consisted of. The picture below shows CD 01, whose compactness score from the Polsby-Popper test was the lowest of all the districts we looked at in this lab; it is the "worst offender" for bizarrely shaped legislative districts.
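The Polsby-Popper score itself is simple: 4 x pi x area / perimeter squared, where 1 is a perfect circle and values near 0 are highly irregular shapes. A hedged arcpy sketch of computing it per district is below; the feature class path and field name are assumptions.

import math
import arcpy

fc = r"C:\GIS\Lab6\lab6.gdb\CongressionalDistricts"  # assumed path
with arcpy.da.SearchCursor(fc, ["DISTRICT", "SHAPE@AREA", "SHAPE@LENGTH"]) as cursor:
    for district, area, perimeter in cursor:
        pp = 4 * math.pi * area / perimeter ** 2  # Polsby-Popper compactness
        print(f"CD {district}: {pp:.3f}")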

Congressional District 01



04 October 2023

Lab 5 M2.2 Surface Interpolation

This week's lab focused on water quality in Tampa Bay through surface interpolation. It is always interesting to learn the different ways of studying data and interpreting results. We worked with several interpolation methods, specifically Thiessen, Inverse Distance Weighted (IDW), and Spline (Regularized and Tension). The data (BOD_MGL) measured BOD (Biochemical Oxygen Demand) in mg/L (milligrams per liter) at sample points across Tampa Bay. We needed to determine areas of low and high water quality based on the results of the different interpolation techniques.

The techniques we used to interpolate gave somewhat similar results. The Thiessen offered the same results as the non-spatial information. The IDW was very similar to Thiessen, only offering a difference in standard deviation. Spline was the interpolation technique that offered the greatest variation from the others. Interpolation offers a way to study the spatial distribution of phenomena across a wide range of points. These are a few of those options.


Thiessen - This technique assigns every location to the polygon of its single nearest sample point; any location within an output polygon is closer to that polygon's point than to any other point, so it defines an area around each point. It divides the study area into proximal zones or polygons. Thiessen polygons are also called Voronoi polygons or Voronoi diagrams.


Inverse Distance Weighted (IDW) - As the name suggests, it weights sample points by the inverse of their distance, with the greatest emphasis on the nearest ones. The mapped variable's influence decreases as distance from the sampled location increases (see the sketch after these descriptions).


Spline - This technique has two types: Regularized and Tension. Regularized produces a smoothly changing surface whose values may fall outside the range of the sample data. Tension produces a less smooth surface whose values adhere more closely to the sample data range. For both, the number of points and the weight can be adjusted when running the tool.
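As promised above, here is a minimal NumPy sketch of the IDW idea: a predicted value is a weighted average of the sample values, with weights of 1 / distance^power so that nearby samples dominate. The sample coordinates and BOD_MGL readings are made up for illustration.

import numpy as np

def idw(xy_samples, values, xy_target, power=2):
    d = np.linalg.norm(xy_samples - xy_target, axis=1)
    if np.any(d == 0):                 # target coincides with a sample point
        return values[np.argmin(d)]
    w = 1.0 / d ** power               # inverse-distance weights
    return np.sum(w * values) / np.sum(w)

samples = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
bod_mgl = np.array([2.1, 3.4, 1.8, 4.0])   # hypothetical BOD_MGL readings
print(idw(samples, bod_mgl, np.array([0.25, 0.25])))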

Thiessen Polygons



Spline Regularized


Spline Tension


Inverse Distance Weighted

Comparing these interpolation techniques, they are similar but offer different levels of insight into this study area.

20 September 2023

Lab 4 M2.1 Surfaces - TINs and DEMs

This week's lab delved into TINs (Triangular Irregular Networks) and DEMs (Digital Elevation Models); we created, edited, and analyzed each of these. Esri's help, which is very informative, distinguishes between the two: a TIN is vector-based and preserves the precision of the input data, while a DEM is a raster-based representation of a continuous surface. Thus, the TIN is more accurate than a DEM. This is visible in the results, where the triangles are built from the points that represent the terrain surface, preserving the input data. Dr. Zandbergen's lecture is also clear on the distinction between the two: "[For a DEM] there is nothing special about its data structure or values. So you have to know that the cell values represent elevation to be sure that it is indeed a DEM. This is different from a TIN, where the data model itself defines a 3D surface."

In ArcGIS, we worked in a Local Scene to better represent the data in 3D. Once again, there were several tools and sequential steps to follow to get the desired output for the deliverables. For both the TIN and the DEM, the vertical exaggeration was set to 2.00 and the contour interval to 100 m. What makes the TIN more accurate is that its measured elevation points can be randomly spaced. One of the biggest differences is that the DEM's contour lines appear more widely spaced in some areas and closer together where the terrain is very steep. Even though the DEM closely resembles the TIN, its accuracy is not as complete, even with the interval set at 100 m for each. Still, with the TIN's better accuracy, the DEM remains a close representation of elevation.
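For the contour step, a hedged arcpy sketch with the Spatial Analyst Contour tool is below; the DEM path and output location are placeholders.

import arcpy
from arcpy.sa import Contour

arcpy.CheckOutExtension("Spatial")
# Derive 100 m contour lines from the DEM raster.
Contour(r"C:\GIS\Lab4\dem.tif",
        r"C:\GIS\Lab4\lab4.gdb\dem_contours_100m",
        contour_interval=100)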

Comparable views of nearly the same location
I have not worked with TINs very much, so it is interesting to see the differences between a TIN and a DEM. They are close, and both serve a purpose. Both represent terrain effectively, but for better accuracy, go with the TIN. The image below shows a TIN where slope, aspect, and edge renderers were added. Once these renderers are set, you can click on any triangle to get its values.
I like this view of a TIN.




13 September 2023

Lab 3 M1.3 Data Quality - Assessment

 

Initial map showing visual difference

This week's lab assignment was about data quality assessment, specifically road network quality and completeness. The data consist of TIGER 2000 and Street Centerline data for roads in Jackson County, OR. The task was to determine the length of roads in each dataset in kilometers and which dataset is more complete.


When comparing datasets, the first step is to make sure both are in the same coordinate system to ensure consistency. For each feature class, the attribute table showed which field should be measured to determine the total length of roads, so summarizing that length field was next. I converted the results from international feet into kilometers. This was interesting because international feet are less familiar to me than US survey feet, which appear frequently in our lab datasets. The two units are very similar (an international foot is exactly 0.3048 m, while a US survey foot is very slightly longer), but the difference shows up several decimal places in and can matter for precision.

After getting these results, the next step was to measure the total length of both datasets within each grid cell. I used the Summarize Within geoprocessing tool for this: the grid was the input polygons and each road feature class was the input summary features. For the tool's summary fields, I used the specific length fields to total the length in the output. After spatially joining the two road outputs, I added a field to calculate the percentage difference of roads within each grid cell. The TIGER Road data are the more complete dataset compared with the Street Centerlines.

Total grids in which TIGER data are more complete: 165

Total grids in which Centerlines are more complete: 132

Road Length total of TIGER Roads: 11,253.5 KM

Road Length total of Centerlines Roads: 10,671.2 KM

The formula for calculating the results is:

% difference = (total length of centerlines − total length of TIGER roads) / (total length of centerlines) × 100%
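Plugging in the totals reported above gives the percentage difference directly; the sketch below also includes the international-foot conversion (exactly 0.3048 m) used to get from feet to kilometers.

def feet_to_km(feet):
    return feet * 0.3048 / 1000.0  # international feet -> kilometers

tiger_km = 11253.5        # total TIGER road length
centerlines_km = 10671.2  # total Street Centerlines length

pct_difference = (centerlines_km - tiger_km) / centerlines_km * 100
print(f"% difference: {pct_difference:.1f}%")  # negative: TIGER total is longer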

The Street Centerline data “is intended for use as a reference map layer for GIS maps and applications and represents all known roads and trails within Jackson County, Oregon. The accuracy of this data is +/- 10 feet for state highways and in the urbanized portions of the county and +/- 30 feet in the remote rural areas.” (Street Centerlines Metadata) 

The TIGER data used are from 2000, with a horizontal positional accuracy of 5.975-6.531 meters. Seo reminds us that "TIGER/Line data are produced by the US Census Bureau for 1:100,000 scale maps and contain various topographic features and administrative boundaries and their nominal accuracy is about 50 m (US Census Bureau 2000)." The dataset used in this lab was re-mapped with a survey-grade GPS unit, which is why its accuracy is much better than the nominal figure.

Seo, Suyoung, and Charles G. O'Hara (2009). "Quality assessment of linear data." International Journal of Geographical Information Science 23(12): 1503-1525. First published 22 September 2008 (iFirst).

Final map showing percentage difference between roads




06 September 2023

Lab 2 M1.2 Data Quality - Standards

Street Map USA points

Albuquerque street points

Reference points

This week's lab is about data quality standards for road network quality and horizontal positional accuracy. To determine road network quality, the National Standard for Spatial Data Accuracy (NSSDA) procedures were used. The data were streets of Albuquerque: we compared Street Map USA data from ESRI's TeleAtlas and street data from the City of Albuquerque against orthophotos of a study area. The orthophotos serve as the reference for the two test street datasets. Street intersections are the most easily identified locations and the easiest to establish as comparison points.

We were to choose at least 20 points, so I chose 30 locations, digitizing 30 points in each of the two test datasets and 30 reference points. I created these points by first creating a point feature class in the geodatabase, then, in Edit mode, creating points at intersections common to all three datasets. Once these were completed, I exported the x and y coordinates into MS Excel for clean formatting and access to formulas. The formulas are simple: subtract, square, average, and compute the root mean square error (RMSE). These results enabled me to compute the NSSDA accuracy value.

My numbers were rather large, perhaps due to choosing 30 points instead of only 20. I maintained the same map scale and zoom level when creating the points. The RMSE is used to calculate the NSSDA result: multiply the RMSE by 1.7308. According to the Federal Geographic Data Committee in Geospatial Positioning Accuracy Standards Part 3: National Standard for Spatial Data Accuracy, "A minimum of 20 check points shall be tested, distributed to reflect the geographic area of interest and the distribution of error in the dataset. When 20 points are tested, the 95% confidence level allows one point to fail the threshold given in product specifications." Based on this minimum of 20 points, and Dr. Morgan's lab exercise, I ended up with 30 points. Using 30 test points does not necessarily provide greater accuracy or more error in the results, and the standard does not discuss going beyond 20 points. A journal article by Zandbergen et al. (2011), required reading for this lab, used 100 sample points for a study of census road data to determine accuracy. I mention this to compare the two studies; obviously, the census data comparison was more comprehensive and covered a larger area.

(Zandbergen, Paul, Drew Ignizio, and Kathryn Lenzer (2011). "Positional accuracy of TIGER 2000 and 2009 road networks." Transactions in GIS 15: 495-519. doi:10.1111/j.1467-9671.2011.01277.x)
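A hedged NumPy sketch of the NSSDA arithmetic is below: RMSE of the test points against the reference points, then NSSDA horizontal accuracy at the 95% confidence level as RMSE x 1.7308. The coordinates shown are placeholders, not my digitized intersections.

import numpy as np

test = np.array([[1500012.3, 480021.7], [1500150.9, 480310.2]])  # test points
ref  = np.array([[1500010.0, 480020.0], [1500149.5, 480308.8]])  # reference points

sq_err = np.sum((test - ref) ** 2, axis=1)  # dx^2 + dy^2 per point
rmse = np.sqrt(np.mean(sq_err))
nssda = 1.7308 * rmse
print(f"RMSE = {rmse:.2f}  NSSDA (95%) = {nssda:.2f}")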

The data tested against the NSSDA standard gave 4716.28 feet horizontal accuracy for the Albuquerque street data and 4481.29 feet for the StreetMap USA data. These numbers seem far too large to be considered accurate, and I am not sure where I went wrong in gathering the data and running the calculations.

For the ABQ street data set: Tested 4716.28 feet horizontal accuracy at 95% confidence level. 

For the Street Map data set: Tested 4481.29 feet horizontal accuracy at 95% confidence level.



30 August 2023

Lab 1 M1.1 Fundamentals

This is the first week of Special Topics in GIS, and it looks to be a class of new challenges with a different approach to data. After all, though, it is still data analysis and maps, which is what makes GIS fun and challenging to me. The title of this lab is Calculating Metrics for Spatial Data Quality. We learned the importance of accuracy versus precision and how one does not guarantee the other. Today's cell phones and GPS units are quite accurate for the most part. They can be used to map data, but they should not be used without determining their horizontal and vertical positional accuracy and precision.



For this lab, we added shapefile data from the UWF repository that included waypoints and a reference point shapefile. We determined the precision and accuracy of 50 waypoints mapped with a handheld Garmin unit, which is accurate to within 15 meters according to the owner's manual. The points were scattered, and the actual location was unclear. Once the data were added to ArcGIS, we created buffers of 1, 2, and 5 meters around the average waypoint location to make the spread easier to see. We then created three buffers representing the 50th, 68th, and 95th percentiles of the data; the 68th percentile is the most commonly reported measure. After that, we added the reference point shapefile, which was mapped using a Trimble Pathfinder GPS unit accurate to less than one meter when set up correctly, according to the manufacturer's specifications.

Great! Now what? We have a map, and we need to make sense of the data. For this, we used the root mean square error (RMSE) and a cumulative distribution function (CDF) to describe the accuracy and precision of the results. We used MS Excel for this part, which was a bit of a challenge when figuring out the formulas for the CDF. It was a large dataset of x and y points, 200 values to be exact, so it helps to have some knowledge of Excel. We calculated the RMSE from the x and y errors derived from the coordinates. Once the results were determined, we graphed them as a CDF, a scatterplot showing the relationship between cumulative percentage and the xy errors. The CDF helps with visualizing the results.
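The CDF step can also be done outside Excel; here is a minimal NumPy/Matplotlib sketch that sorts the horizontal errors and plots cumulative percentage against error. The error values are hypothetical.

import numpy as np
import matplotlib.pyplot as plt

errors = np.array([0.5, 0.8, 1.2, 1.9, 2.1, 2.8, 3.5, 4.7])  # meters, placeholders
errors_sorted = np.sort(errors)
cum_pct = np.arange(1, errors.size + 1) / errors.size * 100

plt.scatter(errors_sorted, cum_pct)
plt.xlabel("Horizontal error (m)")
plt.ylabel("Cumulative percentage")
plt.show()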

XY Coordinate errors and percentages

Error metrics for the GPS positions

The main point of this week's lab is determining accuracy and precision. The two can be confusing and may seem interchangeable, but they are not. According to Bolstad in GIS Fundamentals, "accuracy measures how close an object is to the true value," while precision is "how dispersed a set of repeat measurements are from the average measurement." From these definitions you can see how related they are, yet still different. Bias is not something to overlook in data; it will certainly affect accuracy and precision, and it can come from measuring in the wrong coordinate system or from faulty equipment.


11 August 2023

M6 Lab(II) Suitability Analysis / Least Cost Path Analysis

This week's lab was a GIS challenge of challenges. Tough! It was the sheer length of the lab coupled with being unfamiliar with some of the geoprocessing tools while making sure the correct parameters and inputs were set. Data went missing after saving a completed map, even though the words echo when working in ArcGIS: "save and save often." Yes, I saved and saved often. Several times ArcGIS crashed when running analysis tools or loading data. I have had issues working in ArcGIS in the past, but this lab was above the rest for problems. Frustrating, but it is what it is, and there was much to be done.

Once again the data were provided for us, which is always helpful and appreciated. Through this analysis, I created map results for suitability and least cost path analysis. There were several tools at our disposal; as usual with ArcGIS, there is more than one way to get the desired results. There were four scenarios, each with goals to achieve through different analyses.

As we progressed through the lab, it was common to refer back to previous steps for reminders of how to prepare and analyze the data. The first scenario served as preparation for the rest of the lab as we worked through four scenarios in total. The first step in a study is to state the problem; doing so lets you break the problem down and establish what data will be needed to complete the analysis. I used ModelBuilder in ArcGIS to build a flowchart for the Boolean suitability analysis. ModelBuilder presents a visual of the whole workflow in one place in a clean, neat format. The primary data in each scenario were elevation, land cover, roads, and rivers. In each scenario, the Reclassify tool was used to set values for the output; in reclassifying the data, Boolean suitability was applied: either a cell fits the criteria or it does not. In the end, the lab produced 30 deliverables, and multiple geoprocessing tools were used along the way to create them.
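As a hedged illustration of the Boolean reclassification step, the arcpy sketch below turns slopes of 20 degrees or less into 1 (suitable) and everything steeper into 0; the threshold and paths are assumptions for illustration, not the lab's actual criteria.

import arcpy
from arcpy.sa import Slope, Reclassify, RemapRange

arcpy.CheckOutExtension("Spatial")
slope = Slope(r"C:\GIS\M6\elevation.tif", "DEGREE")
suitable_slope = Reclassify(slope, "VALUE",
                            RemapRange([[0, 20, 1], [20, 90, 0]]))  # 1 = suitable
suitable_slope.save(r"C:\GIS\M6\m6.gdb\slope_bool")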


ArcGIS ModelBuilder Boolean Suitability

Scenario 1 - Identify cougar habitat and create a suitability map of cougar behavior

Data used: Slope (elevation data), land cover, roads, and rivers


Scenario 2 - A property developer wants an estimate of how much land would actually be suitable to build on. Identify suitable locations for the developer to build on.

Data used: Land cover, soils, slopes, roads, and rivers


Scenario 3 - An oil company wants to install a pipeline and needs analysis on an area accounting for slope, river crossing, and river proximity.

Data used: Elevation–a DEM of the study area, Rivers–a vector version of rivers in the study area, Source–the origin of the pipeline, Destination–the destination of the pipeline, and Study area–polygon of the study area


Scenario 4 - Create a model of the potential movement of black bears between two protected areas, creating a corridor.

Data used: Coronado1 and Coronado2 – two existing areas of the Coronado National Forest; Elevation – elevation model for the study area; Landcover – land cover raster for the study area (the categories are described in the landcover_codes.dbf file); and Roads – shapefile of roads in the study area


I created a corridor on the map, but it is not visible due to technical difficulties; I reran the tools and it only seemed to get worse. The corridor connects the two areas of Coronado National Forest, allowing black bears to travel safely between them.



These scenarios offered a progression in necessary skills that involved a lot of reading in ArcGIS help for details on how best to use a tool to achieve the best results. That can be a tedious and time-consuming process when you work full-time and attend online school at the graduate level, but it is also part of the learning process and aids in thinking creatively about the best way to analyze the data. The ModelBuilder Boolean suitability model set up at the beginning of the lab did help me visualize the analysis and the expected outcome. I need more practice with it, but I can see how it can be a timesaver and deliver results more efficiently; I did not use ModelBuilder for the later scenarios. Reclassifying the data was involved in every scenario, which lends itself to a Boolean setup: yes or no, true or false.

The tools used throughout the lab were Reclassify, Euclidean Distance, Slope, Buffer, Cost Distance, Cost Path, and Weighted Overlay. At times it was necessary to get results through an assortment of other tools, because with GIS there are always options. When I cannot figure out a way, I consult the help documentation, which often mentions other tools that achieve the same results.




M6 Lab(I): Suitability Analysis / Least Cost Path Analysis

Weighted Overlay tool - task: find suitable areas for a property developer to build on.


The Weighted Overlay tool is a commonly used approach for solving multicriteria suitability problems. In Module 6 I worked through a scenario that involved identifying suitable locations for a property developer to build on; they needed an estimate of how much land would actually be suitable. The data used for this task were land cover, soils, slopes, roads, and rivers. The Weighted Overlay tool is ideal here because rasters are both the input and the output. To prepare the data, I ran Euclidean Distance on the roads and rivers, since those inputs required a suitability rating based on distance and the Euclidean Distance tool returns its results as rasters. Rasters for each input were what I needed to run the Weighted Overlay. Once the roads and rivers were converted to rasters, each of the criteria was reclassified into ranges and categorized: soils were grouped into classes and land cover was sorted by value. I could then run the Weighted Overlay with the rasters as inputs, essentially adding them all together to get one result. The results are in the map comparison below.


For an even weighting, the raster inputs must total 100%, so each of the 5 layers was given a 20% weight. An unequal weighted overlay simply shifts the percentages given to each input; in this case, slope was given 40%, and to account for the offset, roads and rivers were each decreased to 10%. I was unable to attain the value of 4 specified in the lab assignment; as you can see, there were only three values in my output. I ran this over and over and still came up with the same results. At some point the map simply stopped working and forced me back to the beginning of this segment to start over, at least twice.
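A hedged map-algebra sketch of that unequal weighting (slope 40%, roads 10%, rivers 10%, land cover 20%, soils 20%) is below. It uses a plain weighted sum of already reclassified suitability rasters rather than the Weighted Overlay tool itself, and the raster names are placeholders.

import arcpy
from arcpy.sa import Raster

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\GIS\M6\m6.gdb"  # assumed workspace

suitability = (0.4 * Raster("slope_reclass") +
               0.1 * Raster("roads_reclass") +
               0.1 * Raster("rivers_reclass") +
               0.2 * Raster("landcover_reclass") +
               0.2 * Raster("soils_reclass"))
suitability.save("suitability_weighted")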

05 August 2023

M5 Damage Assessment

This week was about assessing storm damage. We were provided data covering a small area of the Ocean County, New Jersey coastline, similar to last week's New Jersey coastal area. This was a fun lab to work through. Working with the data and creating maps from aerial imagery of an area hit by a hurricane reemphasizes how destructive these storms can be. Although they do not strike every year, hurricanes are common enough that you learn to appreciate the damage one can do. Going through one is never ideal and is an experience you will never forget.

The initial data were provided but still required processing. Several geoprocessing tools were used to create and analyze the data, including creating mosaic datasets and adding rasters to them, and creating and applying a domain within a feature dataset. After the layers were created, there was editing to be done: points were digitized for the buildings, and a polyline was digitized for the coastline. These data are similar to what FEMA uses in post-disaster recovery efforts. Before-and-after imagery was provided, added to the map, and used to assess damage to the buildings. The aerial imagery was eye-opening; you can see inundated areas where the storm surge pulled sand from the beaches onto the streets, overwhelming the buildings nearest the coastline.
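A hedged arcpy sketch of the mosaic dataset step is below; the geodatabase path, names, imagery folder, and spatial reference are placeholders for illustration.

import arcpy

gdb = r"C:\GIS\M5\damage.gdb"
sr = arcpy.SpatialReference(26918)  # assumed: NAD83 / UTM zone 18N for coastal NJ

arcpy.management.CreateMosaicDataset(gdb, "post_storm", sr)
arcpy.management.AddRastersToMosaicDataset(
    gdb + r"\post_storm", "Raster Dataset", r"C:\GIS\M5\imagery\post")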



Analysis results with attribute table


30 July 2023

M4 Lab: Coastal Flooding

In this lab, we worked with data already prepared for us. Sometimes working with maps and data hits closer to home: living along the Gulf Coast during hurricane season makes a weather forecast that much more important. Having been through a few hurricanes, there is usually an anxiousness that comes with the build-up over which location will be hit hardest, and the stubbornness of riding it out no matter the storm's strength. This was a very tough lab; there were many steps to figure out on our own and more than one way to achieve the results. To produce these maps we needed several geoprocessing tools, including Raster to Polygon, LAS Dataset to TIN, TIN to Raster, Raster Calculator, Reclassify, and Project Raster.

The storm surge in this exercise was one meter, which is a realistic assumption. We used meters for the analysis, although in the US the information would typically be given in feet, and a surge of 3.28 feet is a lot and may be reserved for a more intense storm. However, high tide, waves, wind, and any previous rainfall accumulation can all add to inland flooding, so it is better to model hypothetical surges of varying heights when predicting the potential of a severe storm. Some coastal areas will be affected more than others simply because they are heavily populated. Storm surge is only one of many factors to consider during a storm, but being able to predict it with GIS analysis will benefit communities. A more realistic approach would include surge levels of lesser and greater heights, giving a range of mapped predictions for dangerous surge levels. For emergency services and city, county, and state management, knowing the potential of a disastrous storm surge can be imperative.
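A hedged map-algebra sketch of the 1 m surge mask is below: cells at or below the surge elevation become 1 (flooded) and everything else 0. The DEM path is a placeholder, and the real lab also included the LAS Dataset To TIN, TIN To Raster, and reprojection steps not shown here.

import arcpy
from arcpy.sa import Raster, Con

arcpy.CheckOutExtension("Spatial")
dem = Raster(r"C:\GIS\M4\coastal_dem.tif")  # elevations in meters
flooded = Con(dem <= 1.0, 1, 0)             # 1 = at or below the 1 m surge
flooded.save(r"C:\GIS\M4\m4.gdb\surge_1m")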








22 July 2023

M3 Lab Visibility Analysis

This week's lab offered an opportunity to work with Esri training modules. Esri offers many options for learning more about GIS and enhancing one's GIS skill set. We were tasked with completing four Esri exercises using ArcGIS Pro: Introduction to 3D Visualization, Performing Line of Sight Analysis, Performing Viewshed Analysis in ArcGIS Pro, and Sharing 3D Content Using Scene Layer Packages. Going through these exercises was not unlike a weekly UWF GIS module: each walked us through the steps with examples of the results often enough to confirm we were on the right path in ArcGIS Pro.

Introduction to 3D Visualization

Before beginning, the 3D Analyst and Spatial Analyst licenses need to be turned on to complete these exercises, or any GIS work involving 3D and spatial analysis. Working with 3D data opens up many opportunities to transform points, lines, polygons, point clouds, rasters, and other data types into 3D visualizations, offering different perspectives on our environment. Conducting 3D analysis offers real-world insight into features and better understanding by taking the data beyond the 2D map.

Performing Line of Sight Analysis

Line of sight calculates the intervisibility between an observer (the first vertex) and a target (the last vertex) along a straight line. It determines the visibility of sight lines over obstructions, using a surface and an optional multipatch dataset. The Construct Sight Lines (3D Analyst) geoprocessing tool was used to build the sight lines from two observer points. The output displays each sight line in green until it hits an obstacle, and in red beyond that point, where the view is obstructed.
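A hedged arcpy sketch of that workflow is below: build sight lines from observer points to targets, then test them against a surface with Line Of Sight. The feature class and surface names are assumptions, not the Esri exercise data.

import arcpy

arcpy.CheckOutExtension("3D")
arcpy.ddd.ConstructSightLines("observers", "targets", "sight_lines")
arcpy.ddd.LineOfSight("elevation_surface", "sight_lines", "los_results")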

Performing Viewshed Analysis in ArcGIS Pro

The Viewshed Analysis tool determines surface locations that are visible to a set of observer features. After the determination of all the possible visible areas, the results are displayed in a 3D scene which can then be evaluated. 

Sharing 3D Content Using Scene Layer Package

ArcGIS Pro uses the ArcGIS sharing model, in which content is shared by a member of an organization, in this case a student at UWF. To share content, the member (student) must have ArcGIS Pro software, an organizational role with sharing privileges, and permission to share with the users who will access the content. Content can be shared with your organization, with the public (everyone), with a group, or kept private.

In the 3D content sharing workflow, you first decide how to present the data. The data are loaded, a local or global scene is chosen, an area of interest is defined, surface information is added, and the data are converted from 2D to 3D. The final tool is Create Scene Layer Package, which produces an .slpk file containing everything needed for the shared map display.


16 July 2023

M2 Forestry and LiDAR

This week's lab involved working with a raster dataset of Shenandoah National Park, Virginia. The data files are quite large, so there were issues loading the maps with all the finished data. We used several geoprocessing tools to produce the final map results: canopy density, a digital elevation model, a digital surface model, and the LiDAR file itself. LiDAR (Light Detection and Ranging) allows very precise measurements of the Earth's surface to be recorded.

In this lab, we calculated the forest height and biomass density of the vegetation compared to the ground. Below are the tools we used in this lab. 
Tools used for forest height.
*Point File Location = summarizes the contents of the LAS files, returning the minimum bounding rectangle, number of points, average point spacing, and min/max z-values.
*LAS Dataset to Raster = creates a DEM and DSM based on ground or non-ground points
*Minus = produces a tree estimation
Tools for calculating biomass density. 
*LAS to MultiPoint = creates multi-point features using LiDAR files. The results are vegetation and ground. (Each tool beyond is duplicated for ground and vegetation.) 
*Point to Raster = Converts point features to a raster dataset (here, a count of points per cell).
*Is Null = Creates a binary file where 1 is assigned to all values that are not null.
*Con = Performs a conditional if/else evaluation on each of the input cells of an input raster.
*Plus = Adds the values of two rasters on a cell-by-cell basis (here, the vegetation and ground counts).
*Float = Converts each cell value of a raster into a floating point.
*Divide = Divides the values of two rasters on a cell-by-cell basis.

The initial LiDAR file required each of the aforementioned tools to produce the canopy density. The final canopy density map shows values from 0 to 1, where 0 represents the lowest density of vegetation and 1 the highest.
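A hedged arcpy sketch of the canopy-density ratio built from the tools listed above: NoData cells in the point-count rasters are replaced with 0, vegetation and ground counts are added, and vegetation is divided by the total to give values from 0 to 1. The raster names and workspace are placeholders.

import arcpy
from arcpy.sa import Raster, IsNull, Con, Plus, Float, Divide

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\GIS\M2\forestry.gdb"  # assumed workspace

veg = Con(IsNull(Raster("veg_count")), 0, Raster("veg_count"))
gnd = Con(IsNull(Raster("ground_count")), 0, Raster("ground_count"))
total = Plus(Float(veg), Float(gnd))
canopy_density = Divide(Float(veg), total)   # 0 = no vegetation, 1 = all vegetation
canopy_density.save("canopy_density")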

The density map portrays the difference between the vegetation and ground points. This can be helpful to foresters for conservation planning in tree harvesting, terrain assessment for erosion control based on ground cover, identification and care of trees, and protecting forests as a whole. The density map helps foresters maintain forests more efficiently. Remote sensing through LiDAR provides foresters with far more detailed maps than the usual topographic map.

Final Density map



Shenandoah National Park


09 July 2023

M1 Crime Analysis

For this week's lab assignment, we performed crime analysis for the Washington, D.C. and Chicago areas. In the first part of the lab, we created choropleth maps using data provided to us; performing spatial joins between feature classes gave us results for high burglary rates in the D.C. area. The latter part of the lab consisted of three parts using three different methods of identifying and visualizing hotspots in the city of Chicago. Here we compared three techniques for analyzing crime data: Grid-Based, Kernel Density, and Local Moran's I. For each, we had to perform a spatial join, select by attributes, create layers from selections, and other geoprocessing steps to get the final output.

After setting up the environment for Chicago, I performed different geoprocessing operations to get the number of homicides occurring in specific areas. The first analysis involved adding the clipped grid cells and the 2017 homicide layer and spatially joining them. The Grid-Based analysis is simple: the results are based on a grid overlay of 0.5-mile cells clipped to the Chicago boundary. To find the areas with the highest homicide rates, I used Select By Attributes to get rates greater than zero and took the top 20% (the top quintile), equal to 62 cells. I added a field for these, exported the layer, and used the Dissolve tool to produce a smooth service area.

The Kernel Density analysis spreads a density surface outward from each point within a search radius, so it covers a larger area and accounts for outliers. The Kernel Density tool was used for this part with the same data as the Grid-Based analysis. The tool's parameters needed to be set up with the input features and search distance, and, because the output is a raster, a cell size and output raster had to be designated. After reclassifying the data using a break value calculated as 3 times the mean, and converting the raster to a polygon, I selected the features with a value of two, giving me the final output. The output numbers were different from the Grid-Based analysis.
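A hedged arcpy sketch of the Kernel Density step is below: a density surface from the 2017 homicide points, then a hotspot mask using a break of three times the mean, as described above. The layer name, cell size, and search radius are placeholders.

import arcpy
from arcpy.sa import KernelDensity, Con

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\GIS\M1\crime.gdb"  # assumed workspace

density = KernelDensity("homicides_2017", "NONE",
                        cell_size=100, search_radius=805)  # ~0.5 mile radius
hotspots = Con(density >= 3 * density.mean, 1)             # keep only hotspot cells
hotspots.save("kd_hotspots")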

Local Moran's I identifies significant clusters and outliers in data. Using the same data for this analysis, the output numbers were different from the previous two. Local Moran's I uses a weighted field to get results; the parameter here was the number of homicides per 1,000 housing units for the Chicago area. The geoprocessing tool is Cluster and Outlier Analysis (Anselin Local Moran's I), which calculates spatial statistics for the number of homicides. The analysis produced four classifications, and the high-high cluster result is the only one used as input for the Dissolve tool.

The differences between the three methods are noted in the table below. Which is better? I think Kernel Density, because it searches outward from the points of highest value: the calculated result is a magnitude per unit area (per cell) across the study area, so measuring per cell from specified points of high value gives a more precise picture of the hotspots.

Local Moran's I map

Kernel Density map

Grid-Based Analysis map


30 June 2023

About me

Hello again. It is amazing how fast time goes. Life can give a person whiplash sometimes, especially with a quick turn on the class schedule. It is always good to see familiar names. My first name is Jerome but I go by Chris. I have just finished GIS Programming in the first half of the summer term. That was one of the toughest classes I have taken since Statistics. I am enrolled in the Master's GIS Admin program which is very exciting but a bit overwhelming at the same time. I have enjoyed my GIS adventure so far. There is always so much to learn. I had courses in GIS many years ago but that was such a long time ago now. 

I live in Pensacola and work from home for the UWF Bookstore full-time. Upon completion of the program, or before, I hope to move into a GIS job and perform GIS analysis for as long as I am able. Meanwhile, I look forward to helping in this class when I can. I wish you all the best of luck.

https://storymaps.arcgis.com/stories 

27 June 2023

M6 Working with Geometries

Script results
Text file
ArcGIS output

Pseudocode

Start 

Set up

Prepare to write text file

Create(write) text file

Create variables

Field 1 = Feature/row OID 

Field 2 = Vertex ID 

Field 3 = X coordinate  

Field 4 = Y coordinate 

Field 5 = Name of the river feature 

Create for loop

Close text file 

Delete row and cursor variables outside of all loops

Stop


Flow chart

This week's assignment was the culmination of the last six weeks of working with scripts. We worked with geometries and covered nested loops, search cursors, for loops, and writing to a TXT file. It was a challenge to put together, but using information and script templates from previous assignments helped me compile a script that delivers the results. It is one lengthy script made up of several smaller pieces. I struggled with indentation and syntax, specifically when writing the data to the text file, which required a series of for loops. In the files above you can see how the same results look different depending on where they are viewed.
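For anyone following along, a hedged sketch of the script outlined in the pseudocode above might look like this: a search cursor over a rivers feature class, a loop over each feature's vertices, and a line written to a text file for every vertex. The feature class, name field, and output path are placeholders, not the actual assignment data.

import arcpy

fc = r"C:\GIS\M6\m6.gdb\rivers"  # assumed feature class
with open(r"C:\GIS\M6\rivers_vertices.txt", "w") as f:
    with arcpy.da.SearchCursor(fc, ["OID@", "SHAPE@", "NAME"]) as cursor:
        for oid, shape, name in cursor:
            vertex_id = 0
            for part in shape:             # loop over geometry parts
                for point in part:         # loop over vertices in the part
                    if point is not None:  # skip interior-ring separators
                        vertex_id += 1
                        f.write(f"{oid} {vertex_id} {point.X} {point.Y} {name}\n")
print("Done writing vertices.")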


As I said, it helps me to keep a notepad of the scripts we must use to complete the exercises. This week was consistent with previous weeks in reminding me to read through each script to ensure better understanding; this lets me make mental notes of proper syntax. I created a notepad of scripts from the completed exercises before beginning the assignment, which is very helpful in narrowing my focus and understanding the script better. For this assignment, I went back to the Exercise 5 scripts I had saved as notes to get going. Courses such as this have a way of "forcing" me to be better organized and to find the greater detail in the details.



