30 August 2023

Lab 1 M1.1 Fundamentals

This is the first week of Special Topics in GIS, and it looks to be a class of new challenges with a different approach to data. At its core, though, it is still data analysis and maps, which is what makes GIS fun and challenging for me. The title of this lab is Calculating Metrics for Spatial Data Quality. We learned the importance of accuracy versus precision and how one does not guarantee the other. Today's cell phones and GPS units are, for the most part, quite accurate. They can be used to map data, but not before their horizontal and vertical positional accuracy and precision have been determined.



For this lab, we added shapefile data from the UWF repository that included waypoints and a reference point shapefile. We determined the precision and accuracy of 50 waypoints mapped with a handheld Garmin unit rated at an accuracy of less than 15 meters, according to the unit owner's manual. The points were scattered at best, with the actual location unclear. Once the data were added to ArcGIS, we created buffers of 1, 2, and 5 meters around an average waypoint location to make the points simpler to map. After that was established, we created three more buffers representing the 50th, 68th, and 95th percentiles of the data; the 68th percentile is the most commonly used measure of precision. Once this was completed, we added the reference point shapefile, which was mapped using a Trimble Pathfinder GPS unit rated at an accuracy of less than one meter when set up correctly, according to the manufacturer's specifications.
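To make the percentile buffers concrete, here is a minimal sketch of the underlying calculation in Python with numpy, done outside ArcGIS; the file name and column layout are assumptions for illustration only.

```python
import numpy as np

# Hypothetical export of the 50 Garmin waypoint coordinates (x, y) in meters,
# e.g. from the shapefile's attribute table.
waypoints = np.loadtxt("garmin_waypoints.csv", delimiter=",", skiprows=1)

# Average waypoint location used as the buffer center.
center = waypoints.mean(axis=0)

# Horizontal distance of each waypoint from the average location.
distances = np.hypot(waypoints[:, 0] - center[0], waypoints[:, 1] - center[1])

# Buffer radii that capture 50%, 68%, and 95% of the points (a precision estimate).
for pct in (50, 68, 95):
    print(f"{pct}th percentile distance: {np.percentile(distances, pct):.2f} m")
```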

Great! Now what? We have a map created, and now we need to make sense of the data. For this, we used the root-mean-square error (RMSE) and a cumulative distribution function (CDF) to determine the accuracy and precision of the results. We did this part in MS Excel, which was a bit of a challenge when trying to figure out the formulas needed for the final CDF results. The dataset was large, 200 x and y values in all, so it helps to have some knowledge of Excel. We calculated the RMSE from the x and y errors, which we derived from the coordinates. Once the results were determined, we graphed them in a CDF, a scatterplot showing the relationship between cumulative percentage and the error in x and y. The CDF helps with visualizing the results.

XY Coordinate errors and percentages

Error metrics for the GPS positions
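Outside of Excel, the same RMSE and CDF calculations can be sketched in Python; the error values below are random stand-ins, since the real errors came from comparing the waypoints to the reference point.

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-in x and y errors (waypoint minus reference point), in meters.
dx = np.random.normal(0, 3, 50)
dy = np.random.normal(0, 3, 50)

# Horizontal error for each waypoint and the overall RMSE.
horiz_error = np.sqrt(dx**2 + dy**2)
rmse = np.sqrt(np.mean(dx**2 + dy**2))
print(f"Horizontal RMSE: {rmse:.2f} m")

# CDF: sort the errors and plot the cumulative percentage of points
# at or below each error value.
errors_sorted = np.sort(horiz_error)
cum_pct = np.arange(1, len(errors_sorted) + 1) / len(errors_sorted) * 100

plt.scatter(errors_sorted, cum_pct)
plt.xlabel("Horizontal error (m)")
plt.ylabel("Cumulative percentage (%)")
plt.title("CDF of GPS horizontal error")
plt.show()
```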

The main point of our lab this week is determining accuracy and precision. The two can be confusing and may seem interchangeable, but they are not. According to Bolstad in GIS Fundamentals, "accuracy measures how close an object is to the true value," while precision describes how dispersed a set of repeat measurements is from the average measurement. From these definitions, you can see how related they are, yet still different. Bias is also not something to overlook in data, and it will certainly affect accuracy and precision; it can come from measuring in the wrong coordinate system or from faulty equipment.


11 August 2023

M6 Lab(II) Suitability Analysis / Least Cost Path Analysis

This week's lab was a GIS challenge of challenges. Tough! It was the sheer length of the lab, coupled with being unfamiliar with some of the geoprocessing tools while making sure the correct parameters and inputs were set. Data went missing after I saved a completed map, even though the words "save and save often" echo whenever you work in ArcGIS. Yes, I saved and saved often. Several times ArcGIS crashed when running analysis tools or loading and adding data. I have had issues in the past when working in ArcGIS, but this lab topped them all. Frustrating, but it is what it is, and there was much to be done.

Once again the data were provided for us, which is always helpful and something I am grateful for. Through this analysis, I created map results for suitability and least cost path analysis. Several tools were at our disposal, since with ArcGIS there is usually more than one way to get the desired results. There were four scenarios, each with goals to achieve through different analyses.

As we progressed through the lab, it was common to refer back to previous steps for reminders of how to prepare and analyze the data. The first scenario served as preparation for the rest of the lab as we worked to complete all four scenarios. The first step in a study is to state the problem; doing so breaks the problem down and establishes what data will be needed to complete the analysis. I used ModelBuilder in ArcGIS to build a flowchart for the Boolean suitability analysis. ModelBuilder presents the workflow visually in one place in a clean, neat format. The primary data for each scenario were elevation, land cover, roads, and rivers. In each scenario the Reclassify tool was used to set values for the output, and Boolean suitability was applied to the reclassified data: either a cell fits the criteria or it does not. In all, the lab called for 30 deliverables, and multiple geoprocessing tools were used along the way to create them.
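The Boolean reclassification step itself can also be scripted. Below is a minimal arcpy sketch, assuming a Spatial Analyst license; the layer name, slope threshold, and output path are placeholders rather than the lab's actual parameters.

```python
import arcpy
from arcpy.sa import Reclassify, RemapRange

arcpy.CheckOutExtension("Spatial")  # Spatial Analyst license

# Hypothetical Boolean surface: slopes of 0-20 degrees are suitable (1),
# anything steeper is unsuitable (0).
slope_bool = Reclassify("slope", "VALUE",
                        RemapRange([[0, 20, 1], [20, 90, 0]]))
slope_bool.save(r"C:\gis\lab6\slope_bool.tif")
```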


ArcGIS
ModelBuilder Boolean Suitability

Scenario 1 - Identify cougar habitat and create a suitability map of cougar behavior

Data used: Slope (elevation data), land cover, roads, and rivers


Scenario 2 - A property developer wants an estimate of how much land would actually be suitable to build on. Identify suitable locations for the developer to build on.

Data used: Land cover, soils, slopes, roads, and rivers


Scenario 3 - An oil company wants to install a pipeline and needs analysis on an area accounting for slope, river crossing, and river proximity.

Data used: Elevation – a DEM of the study area, Rivers – a vector version of rivers in the study area, Source – the origin of the pipeline, Destination – the destination of the pipeline, and Study area – polygon of the study area


Scenario 4 - Create a model of the potential movement of black bears between two protected areas, creating a corridor.

Data used: Coronado1 and Coronado2 – two existing areas of the Coronado National Forest, Elevation – elevation model for the study area, Landcover – land cover raster for the study area (the categories are described in the landcover_codes.dbf file), and Roads – shapefile of roads in the study area


I created a corridor on the map, but it is not visible due to technical difficulties; I reran the tools and it only seemed to get worse. The corridor connects the two areas of Coronado National Forest, allowing black bears to travel safely between them.
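For reference, a corridor like this is built from two cost-distance surfaces, one from each protected area. Below is a rough arcpy sketch of that step; the cost surface and dataset names are assumptions, not the exact inputs used in the lab.

```python
import arcpy
from arcpy.sa import CostDistance, Corridor

arcpy.CheckOutExtension("Spatial")

# Accumulated travel cost outward from each block of the national forest,
# over a cost raster built from land cover, elevation, and roads.
cost1 = CostDistance("Coronado1", "cost_surface")
cost2 = CostDistance("Coronado2", "cost_surface")

# Sum of the two cost surfaces; low values mark the best movement corridor.
corridor = Corridor(cost1, cost2)
corridor.save(r"C:\gis\lab6\bear_corridor.tif")

# The corridor is typically displayed by symbolizing only the lowest-cost
# cells, e.g. the bottom few percent of the corridor values.
```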



These scenarios offered a progression in the skills required and involved much reading of ArcGIS help for details on how best to use each tool to achieve the best results. That can be a tedious and time-consuming process when you work and attend online school at the graduate level, but it is also part of the learning process and encourages creative thinking about the best way to analyze the data. The ModelBuilder flowchart for Boolean suitability set up at the beginning of the lab did help me visualize the analysis and what the outcome should be. I need more practice with it, but I can see how it can be a timesaver and deliver results more efficiently. For the scenarios after the first I did not use ModelBuilder. Reclassifying the data was involved in every scenario, and it can be set up as a Boolean question: yes or no, true or false.

The tools used throughout the lab were Reclassify, Euclidean Distance, Slope, Buffer, Cost Distance, Cost Path, and Weighted Overlay. At times it was necessary to get results through an assortment of other tools, because with GIS there are always options. If I cannot figure out a way forward, I consult the help documentation, which often mentions other tools that achieve the same results.
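As an illustration of how Cost Distance and Cost Path chain together for the pipeline scenario, here is a minimal arcpy sketch; the dataset names, cost surface, and output paths are placeholders.

```python
import arcpy
from arcpy.sa import CostDistance, CostPath

arcpy.CheckOutExtension("Spatial")

# Accumulated cost and backlink rasters from the pipeline origin, over a
# cost surface that penalizes steep slopes and river crossings.
backlink = r"C:\gis\lab6\backlink.tif"
cost_dist = CostDistance("Source", "pipeline_cost", "", backlink)

# Trace the single cheapest route from the destination back to the source.
path = CostPath("Destination", cost_dist, backlink, "BEST_SINGLE")
path.save(r"C:\gis\lab6\pipeline_path.tif")
```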




M6 Lab(I): Suitability Analysis / Least Cost Path Analysis

Weighted Overlay tool – Task: find a suitable area for a property developer to build on.


The Weighted Overlay tool is a commonly used approach to multicriteria suitability problems. In Module 6 I worked through a scenario that involved identifying suitable locations for a property developer to build on; the developer needs an estimate of how much land would actually be suitable. The data used for this task were land cover, soils, slopes, roads, and rivers. The Weighted Overlay tool is ideal in this situation, where rasters are both the input and the output. To prepare the data, I ran Euclidean Distance on the roads and rivers, since these inputs required a suitability rating based on distance and the tool returns its results in raster form. Rasters for each data input are what I needed to run the weighted overlay. Once the roads and rivers were converted into rasters, each of the criteria was reclassified into ranges and categorized: soils were put into classes and land cover was sorted by value. With that done, I could run the weighted overlay using the raster files as inputs, essentially adding them all together to get one result. The results are in the map comparison below.


The raster input weights must total 100%, so each of the 5 layers was given a 20% weight. An unequal weighted overlay simply shifts the percentages given to each input; in this case, slope was given 40%, and to account for the offset, roads and rivers were decreased to 10% each. I was unable to attain the output value of 4 called for in the lab assignment; as you can see, my output contained only three values. I ran the analysis over and over but could only come up with these results. At some point the map simply stopped working, which forced me to go back to the beginning of this segment and start over, at least twice.
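For anyone curious how this workflow looks in code, below is a rough arcpy sketch. Because the Weighted Overlay dialog takes a table of rasters and weights, the sketch uses a simplified map-algebra equivalent (a weighted sum of reclassified rasters), and the layer names and class breaks are hypothetical.

```python
import arcpy
from arcpy.sa import EucDistance, Reclassify, RemapRange, Raster

arcpy.CheckOutExtension("Spatial")

# Distance rasters for the vector inputs, so every criterion is a raster.
road_dist = EucDistance("roads")
river_dist = EucDistance("rivers")

# Reclassify each criterion onto a common 1-4 suitability scale
# (the breaks shown are placeholders, not the lab's actual values).
road_s = Reclassify(road_dist, "VALUE",
                    RemapRange([[0, 500, 4], [500, 1000, 3],
                                [1000, 2000, 2], [2000, 100000, 1]]))
river_s = Reclassify(river_dist, "VALUE",
                     RemapRange([[0, 500, 1], [500, 1000, 2],
                                 [1000, 2000, 3], [2000, 100000, 4]]))
slope_s = Raster("slope_reclass")      # already reclassified to 1-4
soil_s = Raster("soils_reclass")
land_s = Raster("landcover_reclass")

# Equal influence: 20% per layer (the weights must total 100%).
suit_equal = 0.2*slope_s + 0.2*soil_s + 0.2*land_s + 0.2*road_s + 0.2*river_s

# Unequal influence: slope 40%, roads and rivers 10% each.
suit_unequal = 0.4*slope_s + 0.2*soil_s + 0.2*land_s + 0.1*road_s + 0.1*river_s

suit_equal.save(r"C:\gis\lab6\suitability_equal.tif")
suit_unequal.save(r"C:\gis\lab6\suitability_unequal.tif")
```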

05 August 2023

M5 Damage Assessment

This week was about assessing a storm and determining the damage it caused. We were provided the data to work with, which covered a small area of the Ocean County, New Jersey coastline, similar to last week's coastal study area. This was a fun lab to work through. Working with the data and creating maps from aerial imagery of an area hit by a hurricane reemphasizes how destructive these storms can be. Although they do not happen every year, hurricanes are common enough that you learn to appreciate the damage one can do. Going through one is never ideal, and it is an experience you will never forget.

The initial data were provided but still required processing, so several geoprocessing tools were used to create and analyze the data. These steps included creating mosaic datasets and adding rasters to them, and creating a domain and applying it within a feature dataset. After the layers were created there was editing to be done: points were digitized for the buildings, and a polyline was digitized for the coastline. These data are similar to what FEMA uses in post-disaster recovery efforts. Before and after imagery was provided; it was added to the map and used for the building damage assessments. The aerial imagery was eye-opening, showing inundated areas where the storm surge pulled sand from the beaches onto the streets and overwhelmed the buildings nearest the coastline.
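The data-preparation steps (the mosaic datasets and a coded domain for the damage categories) can also be scripted. The arcpy sketch below is a rough version with assumed geodatabase paths, spatial reference, field names, and damage codes; it is not the exact lab workflow.

```python
import arcpy

gdb = r"C:\gis\lab5\damage_assessment.gdb"   # hypothetical file geodatabase
sr = arcpy.SpatialReference(26918)           # e.g. NAD 1983 UTM Zone 18N

# Mosaic datasets for the pre- and post-storm imagery.
for name, folder in [("pre_storm", r"C:\gis\lab5\imagery\pre"),
                     ("post_storm", r"C:\gis\lab5\imagery\post")]:
    arcpy.management.CreateMosaicDataset(gdb, name, sr)
    arcpy.management.AddRastersToMosaicDataset(f"{gdb}\\{name}",
                                               "Raster Dataset", folder)

# Coded-value domain for structure damage, assigned to the building points.
arcpy.management.CreateDomain(gdb, "DamageLevel", "Structure damage category",
                              "SHORT", "CODED")
for code, desc in [(0, "No damage"), (1, "Affected"), (2, "Minor damage"),
                   (3, "Major damage"), (4, "Destroyed")]:
    arcpy.management.AddCodedValueToDomain(gdb, "DamageLevel", code, desc)

arcpy.management.AssignDomainToField(f"{gdb}\\buildings", "Damage",
                                     "DamageLevel")
```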



Analysis results with attribute table


