Monthly Archives: February 2017

Assignment 5 – Data Processing and SQL Review

Due Date: Mar 7, 2017

Introduction

  • In this exercise you’ll be working with the copper data tables we processed in class and the gep664 database.
  • This exercise is worth 6 points. Questions 1-5 are worth 1 point each, questions 6-7 are worth 1/2 point each.
  • Be sure to follow the proper conventions for SQL statements and the guidelines for submitting assignments, as posted on the course website. Failure to do so will result in point deductions.

Questions

For the following, provide the SQL statement AND the record count that is automatically returned by pgAdmin or psql (for the tables you import, verify the record count after import).

This assignment assumes that you have already processed and cleaned the copper data tables as we did in class. The original data files are available for download on the course website on the Data & Software page. You will be importing data into the gep664 database.

When importing data into the database you may use the \copy command or the pgAdmin GUI (create blank table to hold data, right click, choose Import).

Remember: (1) be careful when assigning number types to columns, to ensure that you don’t truncate data, and (2) for the \copy command or the GUI import to work, the order and number of columns in the database table must match the order and number of columns in the import table.
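For reference, here is a minimal sketch of that pattern. The table definition, column names, and file path below are placeholders for illustration, not one of the actual assignment tables:

CREATE TABLE example_import (
    rec_id integer PRIMARY KEY,  -- unique record identifier
    country varchar(50),
    oreton numeric               -- numeric avoids truncating large tonnage values
);
\copy example_import FROM 'C:/data/example.csv' WITH (FORMAT csv, HEADER)

Note that \copy must be entered on a single line in psql; the pgAdmin GUI import follows the same column-matching rule.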

Part I – Table Creation and Import

For this exercise, create a schema called copper for the user postgres (CREATE SCHEMA copper AUTHORIZATION postgres;) in the gep664 database, and import the copper data tables that we cleaned in class.

1. Create a new database table called smelter and import the copper smelter data.

2. Create a new database table called trade and import the copper ore trade data (you don’t have to do anything with the pivot table we created in class).

3. Create a new database table called mines_porcu and import the main porphyry copper mining data (from main.csv). As a reminder – in class we deleted several columns; these are the only ones that should be in the final table: rec_id, depname, country, ccode, stprov, latitude, longitude, oreton, cugrd, mogrd, augrd, aggrd, deptype, comments.
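One possible definition for this table is sketched below; the column types are assumptions based on the data descriptions, so adjust them to what your cleaned file actually contains:

CREATE TABLE copper.mines_porcu (
    rec_id integer PRIMARY KEY,
    depname varchar(100),   -- deposit name
    country varchar(50),
    ccode char(3),          -- 3-letter ISO country code
    stprov varchar(50),     -- state or province
    latitude numeric,
    longitude numeric,
    oreton numeric,         -- ore tonnage
    cugrd numeric,          -- copper grade
    mogrd numeric,          -- molybdenum grade
    augrd numeric,          -- gold grade
    aggrd numeric,          -- silver grade
    deptype varchar(50),
    comments text
);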

Part II – Data Cleaning and Import

For this part, you’ll work with the main sedimentary copper mine data (main.csv in the sedcu folder) that we did not use in class, but that is stored with the other sample data. It is in a CSV format and is similar to the porphyry table in structure.

4. Using the ISO country code table that we used in class, write an Excel VLOOKUP formula to assign the proper 3-letter ISO country code to each country. For codes not found, fill them in manually. Instead of SQL code, submit the VLOOKUP formula as your answer and comment it out in the template. After the formula, write one sentence indicating which columns the formula refers to for the lookup value, the lookup range, and the returned value.
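Commented out in the SQL template, the answer might take the following shape. The cell and sheet references here are hypothetical; yours depend on where the country column and the ISO code table sit in your workbook:

-- =VLOOKUP(C2, codes!$A$2:$B$250, 2, FALSE)
-- C2 holds the country name (lookup value), codes!$A$2:$B$250 is the lookup
-- range with country names in its first column, 2 returns the ISO code from
-- the range’s second column, and FALSE forces an exact match.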

5. Delete unnecessary columns and save the table. Create a new database table in the copper schema called mines_sedcu and import the main sedimentary copper mining data. The columns that should appear in the final table are: rec_id, depname, country, ccode, stprov, latitude, longitude, oreton, cugrd, cogrd, aggrd, deptype, comments. Most of the columns are similar to the porphyry table, but you will need to re-order them so they’re consistent.

Part III – Select and Summarize

For this part, you will create summary views in the copper schema for the data you just imported.
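All of the views in this part follow the same general CREATE VIEW ... GROUP BY shape, sketched below on a placeholder table with placeholder columns (an illustration, not one of the actual answers):

CREATE VIEW copper.example_sum AS
SELECT country, count(*) AS rec_count, sum(oreton) AS ore_total
FROM copper.example          -- placeholder table
GROUP BY country
ORDER BY rec_count DESC
LIMIT 20;                    -- optional: keep only the top rows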

6. Create a view called exports_5yr that sums the total copper ore exports (just exports, not re-exports) by country for the past 5 years, and displays just the top 20 countries.

7. Create two separate views, one called pmine_total for porphyry mines and one called smine_total for sedimentary mines, each of which counts the number of copper mines (a count of records) and sums the total ore in tons available by country. Sort each view by number of mines in descending order.

BONUS Question. Optional, and worth 1 point. Count the mines and sum the ore totals for porphyry and sedimentary sites in one query to create a single view called copper.mines_total. Counts from both tables must appear, even if they don’t have a corresponding match in the other table. You may not change the underlying structure of the tables or create different tables, but (hint) you may import the ISO country code table into the database as is and use it in the query.

Assignment 4 – Modeling and Normalization

Due Date: Feb 28, 2017

Introduction

  • In this exercise you’ll be working with data posted on the course website.
  • This exercise is worth 6 points. The diagram is worth 4 points, the write-up is worth 1 point, and the normalization question is worth 1 point.
  • As this assignment does not require SQL statements, the normal submission guidelines do not apply. Sketch your ER diagram by hand, and type up your answers to the written questions and print them out. Submit both to me as hardcopies at the beginning of class.

Part I – ER Diagram – Modeling GIS Workshops

For this part, you will sketch an entity relationship diagram to model a database to store information about GIS workshops that I teach. Use pencil, ruler, and paper and sketch an ER diagram by hand using Chen’s ER methods that we covered in class and in the readings. If you prefer to use software or online tools to do the sketching you may, but it’s purely optional.

I have placed a spreadsheet on the course website that contains data related to the workshops – on the Data and Software page under Individual class exercises. Use this information and the following description to create the diagram:

Every semester I offer two or three day-long workshops called Introduction to GIS Using Open Source Software. Students, faculty, and staff from throughout CUNY are able to register. Each participant is required to submit their name and email address, and to identify their status, department or field of study, and CUNY school affiliation (for each option they must choose exactly one; i.e. they can’t identify multiple fields or schools).

I've held the workshops in a few different rooms at different campuses (but so far, only at Baruch and Lehman). If the rooms are equipped with computers then participants can use them; if the room does not have computers then the participants must bring their own laptops.

At the end of the workshop many of the participants fill out a course evaluation form. This form is anonymous and cannot be tied back to the participant list. For the evaluators, all I know is their status, and the status category is more generic than what appears on the participant list.

The goal is to create a database where I can store all of this information in one place, and create queries where I can summarize the data in different ways. Up until now I have used individual spreadsheets, but as I teach these workshops year after year there is no efficient way for me to pull all the data together.

Some pointers:

  • Do NOT worry about the difference between registrants (people who sign up) and participants (people who actually show up). Ignore the registration aspect completely and just look at people who participated.
  • The participant and evaluation data cannot be directly tied together, as evaluations are anonymous. Treat participants and evaluators separately.
  • The evaluation data in the spreadsheet is summarized from individual paper forms that people fill in. For the database model, you can imagine that you have an anonymous record for each individual respondent with their responses.
  • A participant can’t take the workshop more than once.
  • I am the only instructor. There’s no need to store any instructor information.
  • Keep normalization guidelines in mind. If a lot of standardized information would be repeated within an entity, create a separate entity for it.
  • In your diagram you MUST show cardinality and participation, but you do NOT have to show whether an entity or relationship is strong or weak.

Part II – Diagram Write-up

In a short paragraph, summarize how you created the ER diagram. Discuss your decision making process and describe some of the key factors in creating the various entities, relationships, and attributes in the model.

Part III – Normalization

This part is completely separate from the preceding exercise – base your decisions solely on the data provided below. Take the following workshop participant data and put it into 1st, then 2nd, then 3rd normal form. Show each step of the transformation (note – you can keep the person’s name as a single field).

name                email                 department / college        workshop_date  workshop_room
Darth Vader         dv@deathstar.org      Economics, Baruch           9/30/2016      951
Eleanor Roosevelt   omg@gmail.com         Marketing, Baruch           10/31/2016     322
Peter Parker        spidey@yahoo.com      Criminal Justice, John Jay  9/30/2016      951
Genghis Khan        gkhan@aol.com         Geography, Lehman           10/31/2016     322
Roy Rogers          kingcowboy@gmail.com  Geography, Lehman           10/31/2016     322
Eleanor Roosevelt   omg@gmail.com         Marketing, Baruch           10/31/2016     322

Assignment 3 – Intro to SQL DDL

Due Date: Feb 21, 2017

Introduction

  • In this exercise you’ll be working in the nyc and nys schemas within the gep664 database.
  • This exercise is worth 6 points. Each question is worth 1/2 point.
  • Be sure to follow the proper conventions for SQL statements and the guidelines for submitting assignments, as posted on the course website. Failure to do so will result in point deductions.

For the following, provide the SQL statement that answers each question; provide the row count only if asked. There may be more than one way to answer certain questions (just choose one).

Part I – Weather Stations table (nyc schema)

In order to select weather stations by state, we currently have to use the LIKE operator to do pattern searching. This is extremely inefficient; let’s add a dedicated state column that will allow for more efficient queries.

1. Alter the weather stations table by adding a column called state that will hold the two-letter state code (e.g. NY, NJ).

2. Update the new state column by adding the two letter state code that indicates where the station is located. You can do this using either one or two statements; you may not insert the codes manually (i.e. by typing each one in individually).
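The general shape of these two steps is sketched below on a hypothetical table, with hypothetical column names and match patterns (not the actual stations table):

ALTER TABLE nyc.example_stations ADD COLUMN state char(2);

UPDATE nyc.example_stations
SET state = 'NJ'
WHERE station_name LIKE '% NJ US';  -- hypothetical pattern; one statement per state, or use a CASE expression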

Part II – NYC Borough Data (nyc schema)

You’ll get some practice in table creation by adding a couple of tables with data for the five boroughs of NYC. Each of the boroughs is also considered a county; a borough is a local designation for subdivisions of the City of New York, while counties are a federal designation for subdivisions of states.

3. Create a table called nyc.borough_area to hold the following data: codes and names for the boroughs, and their water and land areas in square miles (for column names, use the names in question 4). Designate a primary key, specify that borough names should not be null, and add a check constraint that land area must be greater than zero.

4. Insert the following data into the new table:

bcode,bname,area_wtr,area_lnd
bx,Bronx,15.38,42.10
bk,Brooklyn,26.10,70.82
mn,Manhattan,10.76,22.83
qn,Queens,69.68,108.53
si,Staten Island,43.92,58.37
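One way the constraints in question 3 can be expressed is sketched below, with assumed column types; the INSERT shows the pattern for question 4, using the first row of data:

CREATE TABLE nyc.borough_area (
    bcode char(2) PRIMARY KEY,
    bname varchar(25) NOT NULL,            -- borough name is required
    area_wtr numeric,
    area_lnd numeric CHECK (area_lnd > 0)  -- land area must be positive
);

INSERT INTO nyc.borough_area VALUES ('bx', 'Bronx', 15.38, 42.10);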

5. Create a view called nyc.borough_sum that has the borough name and calculated fields that have the total area, the percent of the total that is water, and the percent that is land. Optionally – you can round the percentages to whole numbers.

6. Create a table called nyc.counties_pop to hold the county code, county name, borough code, and population for 2010 (for column names, use the names in question 7). Make county code the primary key and borough code the foreign key.

7. Insert the following data into the new table:

ccode,cname,bcode,pop2010
36005,Bronx County,bx,1385108
36047,Kings County,bk,2504700
36061,New York County,mn,1585873
36081,Queens County,qn,2230722
36085,Richmond County,si,468730
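For question 6, the foreign key can point back to the borough table created above; a sketch with assumed column types:

CREATE TABLE nyc.counties_pop (
    ccode char(5) PRIMARY KEY,                          -- county FIPS code
    cname varchar(30),
    bcode char(2) REFERENCES nyc.borough_area (bcode),  -- foreign key to boroughs
    pop2010 integer
);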

8. Create a new table called nyc.boroughs_pop from the borough area and county population tables that contains: the borough code as primary key, borough name, total population, and population density (number of people per sq mile of land).
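Question 8 can use the CREATE TABLE ... AS pattern. A table created this way does not come with a primary key, so one approach is to add the key afterward. The sketch below uses a placeholder table name and an illustrative join and density formula:

CREATE TABLE nyc.example_pop AS
SELECT a.bcode, a.bname, c.pop2010 AS total_pop,
       round(c.pop2010 / a.area_lnd) AS pop_density  -- people per sq mile of land
FROM nyc.borough_area a
JOIN nyc.counties_pop c ON a.bcode = c.bcode;

ALTER TABLE nyc.example_pop ADD PRIMARY KEY (bcode);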

Part III – Import Labor Force Table (nys schema)

We’ll import a table from the Census Bureau’s 2010-2014 American Community Survey that indicates whether workers work in the same municipality in which they live. There is one record for each ZCTA in New York State, and, like the previous ACS tables we’ve worked with, each estimate column has a margin of error column associated with it.

Go to the Data tab on the course website and download the CSV file for place of work by MCD of residence. This CSV has a header row and uses commas for delimiters.

9. Create a table called nys.acs2014_mcdwork that will accommodate all of the data in the CSV file. Make sure to designate a primary key.

10. Use the \copy command to insert the data from the CSV file into the table. Once it’s imported, write a select statement to verify the results. Provide the copy command, the select statement, and the row count (returned by pgAdmin) in your answer.

Part IV – Written Responses

Please answer the following questions in two to four sentences.

11. What is the difference between a primary key and a foreign key?

12. What is a view and how is it different from a table? For what purpose would you create a view?

Assignment 2 – Intro to SQL DML

Due Date: Feb 14, 2017

Introduction

  • In this exercise you’ll be working in the nys schema within the gep664 database.
  • This exercise is worth 6 points. Each question is worth 1/2 point.
  • Be sure to follow the proper conventions for SQL statements and the guidelines for submitting assignments, as posted on the course website. Failure to do so will result in point deductions.

Background

The nys schema includes four tables with census data that describe the economy and labor force in NY State:

  • nys.acs2014_labforce – resident labor force participation and employment from the 2010-2014 American Community Survey
  • nys.zbp2014 – number of business establishments with employees and payroll from the 2014 ZIP Code Business Patterns
  • nys.zbp_flags – codes and descriptions for footnotes in the ZBP table
  • nys.zips – list of all ZIP Codes, with their postal cities and the ZCTA that each ZIP is correlated with (for residential ZIPs) or located in (for non-residential ZIPs)

The ACS data on the labor force is reported by ZIP Code Tabulation Area (ZCTAs), while the ZBP data on business establishments is reported by ZIP Code. What’s the difference?

Even though we think of ZIP Codes as areas, they are not. ZIP Codes are identifiers assigned by the US Postal Service to addresses along street segments. The Census Bureau, as well as private agencies and services like Google Maps, will take this data and attempt to create areas based on concentrations of addresses that share the same ZIP. The Census Bureau does this by aggregating census blocks where the majority of addresses share the same ZIP Code; the resulting areas are called ZCTAs. Some ZIP Codes cannot be aggregated into areas, either because they represent one large organization or building that has its own ZIP, or because the code represents a large cluster of PO Boxes at a post office. As a result, there are more ZIP Codes than ZCTAs.

In our database, the ACS labor force data is reported by ZCTA, while the ZBP business establishment data is summarized by ZIP Code. You can aggregate ZIP Code data to the ZCTA level by assigning the non-residential ZIPs to the ZCTA where they are physically located. The zips table in our database allows you to do this, as each ZIP Code is related to a ZCTA.
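Conceptually, that aggregation is a join and a group-by, sketched below. The column names for establishments and employees are assumptions, so check the actual zbp2014 columns before writing anything like this:

SELECT z.zcta, sum(b.establishments) AS est_total, sum(b.employees) AS emp_total
FROM nys.zbp2014 b
JOIN nys.zips z ON b.zipcode = z.zipcode  -- each ZIP is related to one ZCTA
GROUP BY z.zcta;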

Lastly, the data in the ACS tables represent estimates (at a 90% confidence level) with margins of error. So for every estimate there are two columns: the estimate itself, and a margin of error for the estimate. The latter are stored in columns with the suffix _me. For example, the labor force (labforce) for ZCTA 10001 was 14,547, plus or minus 947 (labforce_me).

Questions

For the following, provide the SQL statement and the record count that is automatically returned by pgAdmin or psql. There may be more than one way to answer certain questions (just choose one).

Part I – ZIP Code Business Patterns 2014 (ZBP) Table

1. Select all ZIP Codes that have more than 5,000 employees (return all columns).

2. Select all ZIP Codes in the Bronx that have more than 5,000 employees.

3. Calculate the average number of employees for the entire table in a column called avg_emp.

4. For all ZIP Codes, calculate the average payroll for employees in a new column called avg_pay. Sort the data and display the top ten ZIPs. Note: the payroll data is expressed in 1,000s of dollars, so multiply the result by 1,000 to get the right value. Make sure to exclude records where the employee value is null.

Part II – ZBP 2014 and USPS ZIP Code (ZIPs) Tables

5. Do a regular join between the zbp table and the zips table, to show which ZIPs are affiliated with which ZCTAs. Select just the zipcode and zipname from the zbp table and zcta from the zips table, and use aliases for the table names.

6. Do a left-outer join from the ZIP table to the ZBP table to identify which ZCTAs have no matching zipcode data. Sort the data in descending order by zcta. Select just the zipcode and zipname from the zbp table and zcta from the zips table, and use aliases for the table names.

7. Go back and do a regular join between the zbp table and the zips table. Aggregate and sum the establishments and employees from the ZBP table by zcta. Use aliases for the table names and for the new columns.

Part III – American Community Survey (ACS) Table

8. Calculate the unemployment rate – the unemployed divided by total civilian labor force (lab_civilian) – for each ZCTA in a new column called unemp_rate. Do this for records where the number of unemployed is greater than 0. Since these columns are integers and you need a decimal result, use the cast operator:

(cast(lab_civunemp as numeric) / cast(lab_civilian as numeric)) * 100
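In context, that expression might sit inside a statement shaped like this sketch (the name of the ZCTA identifier column is an assumption):

SELECT zcta,
       (cast(lab_civunemp as numeric) / cast(lab_civilian as numeric)) * 100 AS unemp_rate
FROM nys.acs2014_labforce
WHERE lab_civunemp > 0;  -- per question 8, only records with unemployed > 0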

9. Select all records where the margin of error (_me) for the unemployed population is less than the unemployed population.

10. Now, calculate the unemployment rate for every ZCTA where the margin of error for the ZCTA’s unemployed population does not exceed the unemployed population.

Part IV – Written Responses

Please answer the following questions in two to four sentences.

11. What are the benefits of using aliases (with the AS keyword) for column names and table names?

12. What is the difference between a regular (inner) join and a left-outer join? For what purpose would you use a left-outer join?