Monthly Archives: March 2019

Assignment 8 – Spatial Relationships

Due date: Apr 2, 2019

Introduction

  • In this exercise you’ll be working with the gep664 database in the nyc schema. It’s assumed that you have also loaded the ZCTA shapefile into this schema (which we did in our previous class). You’ll also need to use QGIS.
  • This exercise is worth 8 points; each question is worth 1 point.
  • Be sure to follow the proper conventions for SQL statements and the guidelines for submitting assignments, as posted on the course website. Failure to do so will result in point deductions.
  • For this assignment you will need to submit a SQL file and a map as an image file. When you submit your electronic assignment to the Box, attach the SQL file and the image in the same message.

Questions

This assignment will give you additional practice with spatial relationships and joins, and you’ll pull together several concepts that you have learned thus far in the course.

You’ll be working with data in the nyc schema; this includes the ZCTA feature that we loaded from a shapefile in our previous class. If you don’t have the ZCTA file, download it from the course website (data files used for Class 7) and load it into the gep664 database using the PostGIS shapefile loader or the DB Manager in QGIS. It’s in NAD 83 (SRID 4269). After you load it, you’ll need to transform it to NY State Plane (SRID 2263) before you can use it. Refer to the in-class notes for class 7 for the steps we took to do this.
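
As a reminder, a minimal sketch of one common transformation approach, assuming the shapefile loader named the geometry column geom and typed it as MultiPolygon (an in-place transform of the column):

ALTER TABLE nyc.zctas
ALTER COLUMN geom TYPE geometry(MultiPolygon, 2263)
USING ST_Transform(geom, 2263);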

For the following, provide: the SQL statement, the row count, and any specific deliverable that the question asks for.

Part I – Spatial Relationships

There are 339 census tracts and 25 ZCTAs in the Bronx. ZIP Codes / ZCTAs often do not align well with other census or legal geographies. In this exercise you’ll test how well they align with census tracts. Pay careful attention to the relationship you’re being asked to test. Use the census_tracts and zctas layers.
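
The general shape of a relationship test is a join on a spatial predicate. A minimal sketch with hypothetical layer and column names:

SELECT a.id
FROM nyc.layer_a a, nyc.layer_b b
WHERE ST_Within(a.geom, b.geom);

Swapping in a different predicate (e.g. ST_Overlaps), or wrapping the first geometry in ST_Centroid, changes which relationship is tested.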

1. Select all census tracts (the tractid and tractnum) that are within ZCTAs in the Bronx.

2. Select all census tracts (the tractid and tractnum) that overlap ZCTAs in the Bronx. Use DISTINCT to remove tracts that overlap more than one ZCTA.

3. Select all census tracts (the tractid and tractnum) that have their geographic centroid within ZCTAs in the Bronx.

4. Modify your statement in question 3 to select just the tracts in ZCTA 10468. Create an additional field where you calculate the total area of the census tracts (i.e. the area of the entire tract, regardless of whether the area is inside or outside the ZCTA).

Part II – Spatial Join

5. Write a SELECT statement where you select the subway stations (stop id, name, and trains) for the entire city and assign the stations the ZCTA code that they are within. For this statement, use the JOIN clause and make it a left outer join so that you keep all subway records on the left whether they have a matching ZCTA or not. Sort the result in descending order by ZCTA number.
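
A sketch of the left outer join pattern, again with hypothetical names:

SELECT p.id, z.code
FROM nyc.points p
LEFT JOIN nyc.polygons z ON ST_Within(p.geom, z.geom)
ORDER BY z.code DESC;

Points that fall in no polygon are kept in the result, with NULL in the polygon columns.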

6. Create a new spatial table called nyc.bedford_subways that contains all the subway stations in ZCTAs 10458 and 10468. The table should contain: stop id, name, trains, ZCTA, and subway geometry. Be sure to assign stop_id as the primary key. Create the table using spatial relationships, not by manually selecting stations yourself. Then create a spatial index for the new layer. Provide all SQL statements in your answer.
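
The spatial index syntax is standard; a sketch, assuming your geometry column is named geom:

CREATE INDEX bedford_subways_geom_idx
ON nyc.bedford_subways USING GIST (geom);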

Part III – Spatial Joins and Summaries

To refresh your memory when answering these questions, refer back to the slides and the course exercises. When writing these longer statements, construct them bit by bit, getting one piece to work before moving on to add the next piece.

In the nyc schema in the course database:

  • census_tracts is a spatial table with census tract geometry; it also contains a column that identifies the Neighborhood Tabulation Area (NTA) that each tract is part of.
  • census2010_tracts contains data from the 2010 Census for each census tract.
  • census2010_lookup contains the list of census variables and the codes used to identify them as column headings in the census2010_tracts table.

7. Identify the codes for the variables that represent the total population and the total population who are 16 years old and over. Then write a SELECT query where you relate these variables to the census tract geographies and create a summary for NTAs, so that each row in the result is an NTA and the columns are: total population, total population 16 and over, and the percent of the NTA’s population that is 16 and over. To avoid division by zero, select only tracts that have a 16-and-over population greater than zero. To get decimals for your percent total, remember to use the CAST operator around your integers:

(CAST(value1 as numeric))/(CAST(value2 as numeric))

8. Take the query you wrote in question 7 and save it as a spatial view called ntas_workforce (remember, to do this you must add the geometry to the SELECT clause and union it). Provide this query as your answer. Then, using QGIS, connect to the course database and add this view to the map. Symbolize the NTAs to display a graduated map showing the percent of the population who are 16 and over. Classify the data using equal intervals with 5 categories. It’s normal if you have some “holes” in your map, as these represent unpopulated areas. You don’t have to create a fancy, finished map (with a layout etc) – just take a screenshot of the window and save the result as an image file (under Project menu – Save as Image). When naming the file, use the same convention as the sql template (i.e. use your name and the assignment number).
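
A generic shape for a spatial summary view, with hypothetical column names (the NTA code column, the join key, and the summed value are all placeholders to adapt):

CREATE VIEW nyc.example_view AS
SELECT t.nta_code, ST_Union(t.geom) AS geom, SUM(c.some_value) AS total
FROM nyc.census_tracts t
JOIN nyc.census2010_tracts c ON t.tract_key = c.tract_key
GROUP BY t.nta_code;

ST_Union here is the aggregate version, which dissolves the tract geometries into one geometry per NTA.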

Assignment 7 – Spatial Reference Systems

Due date: Mar 26, 2019

Introduction

  • In this exercise you’ll be working with the gep664 database in the copper schema you created in assignment 5. You’ll also need to use QGIS.
  • This exercise is worth 6 points; each question is worth 1 point.
  • Be sure to follow the proper conventions for SQL statements and the guidelines for submitting assignments, as posted on the course website. Failure to do so will result in point deductions.
  • For this assignment you will need to submit a SQL file and a map as an image file. When you submit your electronic assignment to the Box, attach the SQL file and the image in the same message.

Questions

This assignment will give you additional practice with adding geometry columns, loading spatial data, and transforming coordinate systems.

You’ll be working with data in the copper schema; if you don’t have the files, the entire schema is available as a backup file on the course website. Download the course data for class 7 on the Data and Software page, which contains the copper schema backup and a country shapefile that you’ll need to load.

Provide the SQL statements and any specific deliverable that each question asks for.

Part I – Data Loading and Transformation

1. Add a geometry column to the copper smelters table, and build geometry for the table using the longitude and latitude fields. These coordinates are currently in WGS 84 (SRID 4326). When building the geometry, transform them to Pseudo Mercator (SRID 3857).
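
A sketch of the add-column-then-update pattern, on a hypothetical table (the column names lon and lat are placeholders):

ALTER TABLE copper.example_pts ADD COLUMN geom geometry(Point, 3857);

UPDATE copper.example_pts
SET geom = ST_Transform(ST_SetSRID(ST_MakePoint(lon, lat), 4326), 3857);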

2. In the course data files there is a shapefile of countries from Natural Earth. Import this shapefile into the copper schema using the PostGIS shapefile loader, located under the Start menu under the heading PostGIS bundle. Name it countries_temp and assign it the appropriate SRID (it is in WGS 84). Then write a SQL statement to SELECT the iso_a3 and name columns from the table. Provide that statement AND the row count as your answer.

Note – if you receive an error about UTF-8 encoding while trying to import, hit the options button and change the encoding from UTF-8 to LATIN1. If you are using a Mac and don’t have the PostGIS shapefile loader, use the DB Manager in QGIS instead to import the shapefile.

3. Create a new table called countries_bndy that will hold a cleaned-up version of countries_temp. Use the new column names (1st column in the list below) and look at the existing columns (2nd column in the list below) in the countries_temp table to assign appropriate data types. Make uid the primary key, and make sure to include a geometry column in the new table called geom that can hold multipolygons; assign it the SRID for Pseudo Mercator. A trimmed-down sketch of the syntax follows the list.

uid - adm0_a3
iso - iso_a3
name - name
name_long - name_long
ctype - type
continent - continent
subregion - subregion
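
A trimmed-down sketch of the geometry column syntax (only two attribute columns shown; the real table needs all seven, and the varchar length is an assumption):

CREATE TABLE copper.example_bndy (
    uid varchar(3) PRIMARY KEY,
    name text,
    geom geometry(MultiPolygon, 3857)
);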

4. Insert the appropriate data from countries_temp into the countries_bndy table, and as part of the insert operation transform the geometry from WGS 84 to Pseudo Mercator.
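
The insert can select from the temp table and transform in one step; an abbreviated sketch (two columns only, reusing the hypothetical table from above):

INSERT INTO copper.example_bndy (uid, name, geom)
SELECT adm0_a3, name, ST_Transform(geom, 3857)
FROM copper.countries_temp;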

5. Create a view called smelter_sum where you join the smelters table to the countries_bndy table using the ISO country codes in each table. In the view, group the data by country, count the number of smelters, and sum the smelter capacity. Make sure to include the geometry column (in SELECT and GROUP BY). In your answer, provide the SQL statement AND the number of rows that are in the view.

Part II – Make a Map

6. Using QGIS, connect to the course database and add the smelter_sum view to the map. Symbolize the countries to display a graduated map showing total smelter capacity. Classify the data using equal intervals with 5 categories. You don’t have to create a fancy, finished map (with a layout etc) – just take a screenshot of the window and save the result as an image file (under Project menu – Save as Image). When naming the file, use the same convention as the sql template (i.e. use your name and the assignment number).

Need help using QGIS?

  • Consult the course handout on Readings and Docs for accessing PostgreSQL and PostGIS databases (there’s a section on QGIS)
  • Take a look at the QGIS user documentation
  • Look at my workshop manual
  • Search YouTube and the GIS Stack Exchange
  • Ask your classmates or the lab tutors

Midterm Quiz

Takes Place: Apr 2, 2019

The midterm quiz will take place at the beginning of class on Tue Apr 2nd. Make sure to arrive on time for class.

The quiz is worth 10 points (each question is worth 1 point) and will consist of two parts:

Part I – Definitions

Of these 7 terms, 5 will appear on the quiz. You will choose 3 to define, in 4-6 complete sentences each. Your answers must address the primary meaning of the terms, with some supporting details. You may not bring any notes (the test is closed book). Look at these example definitions to see what full, partial, and no credit answers would look like.

  • Data type
  • Geometry type
  • Normalization
  • Primary key
  • Schema
  • Spatial Reference System
  • View

Part II – SQL

You will be given printouts of 2 sample tables. There will be 7 questions where you are asked to write a SQL statement based on these tables. The material in this part covers just the fundamentals from classes 2 & 3, except there will be 1 question about adding geometry columns. Your statements must follow the standard SQL style guidelines.

You will be given a SQL reference sheet (this sheet here) that you can refer to throughout the test, but some questions may include material that is not on the sheet.

Assignment 6 – Spatial Data Basics

Due date: Mar 19, 2019

Introduction

  • In this exercise you’ll be working with the gep664 database and the sample data from Chapter 1 in PostGIS in Action.
  • This exercise is worth 6 points; each question is worth 1 point.
  • Be sure to follow the proper conventions for SQL statements and the guidelines for submitting assignments, as posted on the course website. Failure to do so will result in point deductions.

Questions

The first part of this assignment echoes what we did in class and covers material from PostGIS in Action Chapter 2.

The second part of this assignment is derived from Chapter 1 in PostGIS in Action. It assumes that you have read the chapter and completed all of the exercises using the sample data provided with the book.

For the following, provide: the SQL statement (questions 1-6) and the record count (questions 4-6) that is automatically returned by pgAdmin.

Part I – Geometry Basics

For this exercise, you’ll be working in the course database gep664 in the nyc schema.

For the questions in this part, all of the geometry should be defined using SRID 2263, which is the local state plane zone for NYC (NAD 83 NY State Plane Long Island (ft-us)).

1. Create point features to represent the city’s primary train stations. Create an empty table called nyc.train_stations with 3 columns: tid integer, name text, and geom geometry(point,2263). Write an INSERT statement where you insert the VALUES for tid, name, and point geometry into the table. Use the manual method for constructing geometry with ST_GeomFromText; a generic sketch of the syntax follows the coordinate list below. In your answer, just provide the final statement (the INSERT).

1,Penn Station,986029,212733
2,Grand Central Station,990607,213615
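
A generic shape for the manual method, with made-up values:

INSERT INTO nyc.example_stations
VALUES (1, 'Example Station', ST_GeomFromText('POINT(1000000 200000)', 2263));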

2. Create a line feature to represent the Times Square Shuttle subway. Create an empty table called nyc.subway_shuttle with 3 columns: sid varchar(1), name text, and geom geometry(linestring,2263). Look in the subway stations table and find the stops for Grand Central (stop_id 901) and Times Square (stop_id 902). Return the geometry from these stations as text so you can see the coordinates (they’re in NY State Plane). Using these coordinates, write an INSERT statement where you insert: an id (S), name (Times Square Shuttle) and linestring geometry into the nyc.subway_shuttle table. Use the manual method for constructing geometry using ST_GeomFromText. In your answer, just provide the final statement (the INSERT).
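
To see coordinates as plain text, wrap the geometry in ST_AsText; a sketch, assuming the stations table is named nyc.subway_stations:

SELECT stop_id, ST_AsText(geom)
FROM nyc.subway_stations
WHERE stop_id IN ('901', '902');

The well-known text for a two-vertex line has the form LINESTRING(x1 y1, x2 y2).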

3. Create point features for the NJ PATH stations in Manhattan. Create an empty table called nyc.path_stations with 5 columns: sid integer, name text, xcoord numeric(6), ycoord numeric(6), and geom geometry(point,2263). Write an INSERT statement to insert the data into the table. Then use the ST_SetSRID and ST_Point functions with an UPDATE statement to build geometry (do not use the manual record-by-record method you used for the previous questions); the general shape of the update appears after the coordinate list. In your answer provide the INSERT statement AND the statement used for creating the geometry.

1,33rd Street,987455,211986
2,23rd Street,986185,209902
3,14th Street,985064,207918
4,9th Street,984618,206718
5,Christopher Street,982237,206324
9,World Trade Center,981276,198593
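
The general shape of the update step, on a hypothetical table:

UPDATE nyc.example_pts
SET geom = ST_SetSRID(ST_Point(xcoord, ycoord), 2263);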

Part II – PostGIS in Action Chapter 1

For this part, you’ll work with the sample data from Chapter 1 of PostGIS in Action. This part assumes that you read and completed the chapter’s exercises.

For the exercises in chapter 1:

A. Download the data for PostGIS In Action from the Data and Software page.

B. In section 1.4.3 you can use either the COPY command or pgAdmin to import files.

C. In section 1.4.3, to load shapefiles use the shapefile loader listed under the PostGIS tools in your Start menu. Alternatively, you can load shapefiles using the DB Manager in QGIS.

D. In section 1.4.5 do NOT install OpenJump. You can use QGIS if you wish to visualize your results. See the course handout Accessing PostgreSQL and PostGIS Databases on the Readings and Docs page for details.

Once you’ve finished the chapter, answer the following questions. Your answers must include the SQL statement AND the row count.

4. Count all of the restaurants that are within 1/2 mile of a highway within the states of NY, NJ, and CT. Group by franchise, so the result shows the total count of each restaurant within all three states (i.e. do not group by state).
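
Distance tests use ST_DWithin, with the distance given in the layer’s map units. A sketch with placeholder table and column names (804.67 assumes the units are meters; a restaurant near several highway segments needs DISTINCT to be counted once):

SELECT r.franchise, COUNT(DISTINCT r.id) AS total
FROM restaurants r
JOIN highways h ON ST_DWithin(r.geom, h.geom, 804.67)
GROUP BY r.franchise;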

5. Return the coordinates in plain text of all Pizza Hut restaurants in the restaurants table.

6. Calculate and sum the lengths of all the principal highways in miles within the state of NJ by highway name. Since many of the highways exist as multiple features, be sure to group them. Sort the results from longest to shortest.
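
ST_Length also returns values in the layer’s units; converting to miles is a division (a sketch, placeholders throughout, assuming meters – there are 1609.344 meters in a mile):

SELECT highway_name, SUM(ST_Length(geom)) / 1609.344 AS miles
FROM highways
GROUP BY highway_name
ORDER BY miles DESC;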

Assignment 5 – Data Processing and SQL Review

Due date: Mar 12, 2019

Introduction

  • In this exercise you’ll be working with the copper data tables we processed in class and the gep664 database.
  • This exercise is worth 8 points; each question is worth 1 point.
  • Be sure to follow the proper conventions for SQL statements and the guidelines for submitting assignments, as posted on the course website. Failure to do so will result in point deductions.

Questions

For the following, provide the SQL statement AND the record count that is automatically returned by pgAdmin (for the tables you import, verify the record count after import).

This assignment assumes that you have already processed and cleaned the copper data tables as we did in class (since we didn’t finish, I’ve posted 3 of the 4 finished tables in copper_working.xlsx on the Software and Data page). The original data files are also available for download. You will be importing data into the gep664 database.

When importing data into the database you may either use the pgAdmin GUI (create a blank table to hold the data, right click, choose Import) or the COPY command (create a blank table to hold the data, write a SQL COPY statement). The safest approach is to export spreadsheet files as CSV files (using Save As and selecting comma-delimited format) and then import the CSVs into the database.
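
A sketch of the COPY form, with a placeholder path:

COPY copper.smelters
FROM 'C:/data/smelters.csv'
WITH (FORMAT csv, HEADER true);

Note that COPY reads from the database server’s file system; if the file isn’t accessible to the server, use the pgAdmin Import option instead.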

Remember: 1 – be careful when assigning number types to columns, to ensure that you don’t truncate data, and 2 – the order and number of columns in the database table must match the order and number in the import table, otherwise the import will fail. If you receive an encoding error when trying to import any table, try setting the encoding to WIN1252 in the import screen.

Part I – Table Creation and Import

For this exercise create a schema called copper for the user postgres (CREATE SCHEMA copper AUTHORIZATION postgres;) in the gep664 database. Then create well-structured tables with appropriate data types and keys, and import the copper data that we cleaned in class into the tables. Provide the CREATE TABLE statements and row counts.

1. Create a new database table called smelters and import the copper smelter data.

2. Create a new database table called trade and import the copper ore trade data.

3. Create a new database table called mines_porcu and import the main porphyry copper mining data. As a reminder – in class we deleted several columns; these are the only ones that should be in the final main table: rec_id, depname, country, iso, stprov, latitude, longitude, oreton, cugrd, mogrd, augrd, aggrd, deptype, comments.

Part II – Data Cleaning and Import

For this part, you’ll work with the sedimentary copper mine table that we did not use in class, but that is stored with the other sample data in a folder called sedcu. It is a CSV file called main.csv and is similar in structure to the porphyry table. Import it into a spreadsheet.

4. Using the ISO country code table that we used in class (country_codes.csv), write an Excel VLOOKUP formula to assign the proper 3-letter ISO country code to each country. For codes not found, fill them in manually. Instead of SQL code, submit the VLOOKUP formula as your answer and comment it out in the template. After the formula write one sentence that indicates which columns the formula is referring to for the lookup value, range, and returned value.
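
The generic shape of the formula (the cell and range references are placeholders):

=VLOOKUP(B2, country_codes!$A$2:$B$250, 2, FALSE)

i.e. look up the value in B2 within the first column of the range, return the 2nd column of the matching row, and force exact matches with FALSE.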

5. Delete unnecessary columns and save the table. Create a new database table in the copper schema called mines_sedcu and import the main sedimentary copper mining data. Provide the CREATE TABLE statement and row count. The columns that should appear in the final main table are: rec_id, depname, country, iso, stprov, latitude, longitude, oreton, cugrd, cogrd, aggrd, cuton, deptype, comments. Most of the columns are similar to the porphyry table, but you will need to re-order them so they’re consistent.

Part III – Select and Summarize

For this part, you will create summary views in the copper schema for the data you just imported. Hint: experiment with writing your SELECT statement first to get it working, then once it’s right, create the view.

6. Create a view called exports_5yr that sums the total copper ore exports by country for the past 5 years, and display just the top 20 countries by weight exported.

7. Create a view called import_smelt that sums the capacity of smelters by country and that shows the amount of copper ore imported into that country in 2015, for countries that have smelters and imports. The capacity of the smelters is in thousands of metric tons (TMT), while the weight of imports is in kilograms. 1,000 metric tons = 1,000,000 kilograms. Divide the imported ore by a million to get the amount of ore in TMT, and sort the data by country.

8. Create two separate views, one called pmine_total for porphyry mines and one called smine_total for sedimentary mines, that counts the number of copper mines (a count of records) and sums the total ore in tons available by country. Sort each view by number of mines in descending order.

BONUS Question. Optional, and worth 1 point. Count the mines and sum the ore totals for porphyry and sedimentary sites in one query to create a single view called copper.mines_total. You may not change the underlying structure of the tables or create different tables, but you may import the ISO country code table into the database as is and use it in the query. I have identified three possible solutions to this question: can you figure out one of them?