---
title: "Mars 2020 Mission Data Notebook"
subtitle: "DAR Assignment 2"
author: "Tianyan Lin"
date: "`r format(Sys.time(), '%d %B %Y')`"
output:
  pdf_document: default
  html_document:
    toc: true
    number_sections: true
    df_print: paged
---
```{r setup, include=FALSE}
# Required R package installation; RUN THIS BLOCK BEFORE ATTEMPTING TO KNIT THIS NOTEBOOK!!!
# This section installs packages if they are not already installed.
# This block will not be shown in the knit file.
# knitr::opts_chunk$set(echo = TRUE)
# Set the default CRAN repository
local({r <- getOption("repos")
r["CRAN"] <- "http://cran.r-project.org"
options(repos=r)
})
if (!require("kableExtra")) {
install.packages("kableExtra")
library(kableExtra)
}
if (!require("pandoc")) {
install.packages("pandoc")
library(pandoc)
}
if (!require("reshape2")) {
install.packages("reshape2")
library(reshape2)
}
# Required packages for M20 LIBS analysis
if (!require("rmarkdown")) {
install.packages("rmarkdown")
library(rmarkdown)
}
if (!require("tidyverse")) {
install.packages("tidyverse")
library(tidyverse)
}
if (!require("stringr")) {
install.packages("stringr")
library(stringr)
}
if (!require("ggbiplot")) {
install.packages("ggbiplot")
library(ggbiplot)
}
if (!require("pheatmap")) {
install.packages("pheatmap")
library(pheatmap)
}
if (!require("randomForest")) {
install.packages("randomForest")
library(randomForest)
}
```
# DAR ASSIGNMENT 2 (Introduction): Introductory DAR Notebook
This notebook is broken into two main parts:
* **Part 1:** Preparing your local repo for **DAR Assignment 2**
* **Part 2:** Loading and some analysis of the Mars 2020 (M20) Datasets
* Lithology: _Summarizes the mineral characteristics of samples collected at certain sample locations._
* PIXL: Planetary Instrument for X-ray Lithochemistry. _Measures elemental chemistry of samples at sub-millimeter scales of samples._
* SHERLOC: Scanning Habitable Environments with Raman and Luminescence for Organics and Chemicals. _Uses cameras, a spectrometer, and a laser of samples to search for organic compounds and minerals that have been altered in watery environments and may be signs of past microbial life._
* LIBS: Laser-induced breakdown spectroscopy. _Uses a laser beam to help identify minerals in samples and other areas that are beyond the reach of the rover's robotic arm or in areas too steep for the rover to travel._
* **Part 3:** Individual analysis of your team's dataset
* **Part 4:** Preparation of Team Presentation
**NOTE:** The RPI github repository for all the code and data required for this notebook may be found at:
* https://github.rpi.edu/DataINCITE/DAR-Mars-F24
# DAR ASSIGNMENT 2 (Part 1): Preparing your local repo for Assignment 2
In this assignment you'll start by making a copy of the Assignment 2 template notebook, then you'll add to your copy with your original work. The instructions which follow explain how to accomplish this.
**NOTE:** You already cloned the `DAR-Mars-F24` repository for Assignment 1; you **do not** need to make another clone of the repo, but you must begin by updating your copy as instructed below:
## Updating your local clone of the `DAR-Mars-F24` repository
* Access RStudio Server on the IDEA Cluster at http://lp01.idea.rpi.edu/rstudio-ose/
* REMINDER: You must be on the RPI VPN!!
* Access the Linux shell on the IDEA Cluster by clicking the **Terminal** tab of RStudio Server (lower left panel).
* You now see the Linux shell on the IDEA Cluster
* `cd` (change directory) to enter your home directory using: `cd ~`
* Type `pwd` to confirm where you are
* In the Linux shell, `cd` to `DAR-Mars-F24`
* Type `git pull origin main` to pull any updates
* Always do this when you begin work; we might have added or changed something!
* In the Linux shell, `cd` into `Assignment02`
* Type `ls -al` to list the current contents
* Don't be surprised if you see many files!
* In the Linux shell, type `git branch` to verify your current working branch
* If it is not `dar-yourrcs`, type `git checkout dar-yourrcs` (where `yourrcs` is your RCS id)
* Re-type `git branch` to confirm
* Now in the RStudio Server UI, navigate to the `DAR-Mars-F24/StudentNotebooks/Assignment02` directory via the **Files** panel (lower right panel)
* Under the **More** menu, set this to be your R working directory
* Setting the correct working directory is essential for interactive R use!
You're now ready to start coding Assignment 2!
## Creating your copy of the Assignment 2 notebook
1. In RStudio, make a **copy** of `dar-f24-assignment2-template.Rmd` file using a *new, original, descriptive* filename that **includes your RCS ID!**
* Open `dar-f24-assignment2-template.Rmd`
* **Save As...** using a new filename that includes your RCS ID
* Example filename for user `erickj4`: `erickj4-assignment2-f24.Rmd`
* POINTS OFF IF:
* You don't create a new filename!
* You don't include your RCS ID!
* You include `template` in your new filename!
2. Edit your new notebook using RStudio and save
* Change the `title:` and `subtitle:` headers (at the top of the file)
* Change the `author:`
* Don't bother changing the `date:`; it should update automagically...
* **Save** your changes
3. Use the RStudio `Knit` command to create a PDF file; repeat as necessary
* Use the down arrow next to the word `Knit` and select **Knit to PDF**
* You may also knit to HTML...
4. In the Linux terminal, use `git add` to add each new file you want to add to the repository
* Type: `git add yourfilename.Rmd`
* Type: `git add yourfilename.pdf` (created when you knitted)
* Add your HTML if you also created one...
5. When you're ready, in Linux commit your changes:
* Type: `git commit -m "some comment"` where "some comment" is a useful comment describing your changes
* This commits your changes to your local repo, and sets the stage for your next operation.
6. Finally, push your commits to the RPI github repo
* Type: `git push origin dar-yourrcs` (where `dar-yourrcs` is the branch you've been working in)
* Your changes are now safely on the RPI github.
7. **REQUIRED:** On the RPI github, **submit a pull request.**
* In a web browser, navigate to https://github.rpi.edu/DataINCITE/DAR-Mars-F24
* In the branch selector drop-down (by default says **master**), select your branch
* **Submit a pull request for your branch**
* One of the DAR instructors will merge your branch, and your new files will be added to the master branch of the repo. _Do not merge your branch yourself!_
# DAR ASSIGNMENT 2 (Part 2): Loading the Mars 2020 (M20) Datasets
In this assignment there are four datasets from separate instruments on the Mars Perseverance rover available for analysis:
* **Lithology:** Summarizes the mineral characteristics of samples collected at certain sample locations
* **PIXL:** Planetary Instrument for X-ray Lithochemistry of collected samples
* **SHERLOC:** Scanning Habitable Environments with Raman and Luminescence for Organics and Chemicals for collected samples
* **LIBS:** Laser-induced breakdown spectroscopy, which is measured at many locations (not just sample sites)
Each dataset provides data about the mineralogy of the surface of Mars. Based on the purpose and nature of the instrument, the data is collected at different intervals along the path of Perseverance as it makes its way across the Jezero crater. Some of the data (esp. LIBS) is collected almost every Martian day, or _sol_. Some of the data (PIXL and SHERLOC) is collected only at certain sample locations of interest.
Your objective is to perform an analysis of your team's assigned dataset in order to learn all you can about these Mars samples.
NOTES:
* All of these datasets can be found in `/academics/MATP-4910-F24/DAR-Mars-F24/Data`
* We have included a comprehensive `samples.Rds` dataset that includes useful details about the sample locations, including Martian latitude and longitude and the sol that individual samples were collected.
* Also included is `rover.waypoints.Rds` that provides detailed location information (lat/lon) for the Perseverance rover throughout its journey, up to the present. This can be updated when necessary using the included `roverStatus-f24.R` script.
* A general guide to the available Mars 2020 data is available here: https://pds-geosciences.wustl.edu/missions/mars2020/index.htm
* Other useful MARS 2020 sites
https://science.nasa.gov/mission/mars-2020-perseverance/mars-rock-samples/ and https://an.rsl.wustl.edu/m20/AN/an3.aspx?AspxAutoDetectCookieSupport=1
* Note that PIXL, SHERLOC, and Lithology describe 16 samples that were physically collected. There will eventually be 38 samples. These datasets can be merged by sample. The LIBS data includes observations collected at many more locations, so how to combine the LIBS data with the other datasets is an open research question.
## Data Set A: Load the Lithology Data
The first five features of the dataset describe twenty-four (24) rover sample locations.
The remaining features provide a simple binary (`1` or `0`) summary of the presence or absence of 35 minerals at the 24 rover sample locations.
Only the first sixteen (16) samples are retained, as the remaining samples are missing the mineral descriptors.
The following code "cleans" the dataset to prepare for analysis. It first creates a dataframe with metadata and measurements for samples, and then creates a matrix containing only numeric measurements for later analysis.
```{r}
# Load the saved lithology data with locations added
lithology.df<- readRDS("/academics/MATP-4910-F24/DAR-Mars-F24/Data/mineral_data_static.Rds")
# Cast samples as numbers
lithology.df$sample <- as.numeric(lithology.df$sample)
# Convert rest into factors
lithology.df[sapply(lithology.df, is.character)] <-
lapply(lithology.df[sapply(lithology.df, is.character)],
as.factor)
# Keep only first 16 samples because the data for the rest of the samples is not available yet
lithology.df<-lithology.df[1:16,]
# Create a matrix containing only the numeric measurements. The remaining features are metadata about the sample.
lithology.matrix <- sapply(lithology.df[,6:40],as.numeric)-1
```
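As a sanity check on the `sapply(..., as.numeric) - 1` conversion used above, here is a minimal toy sketch (synthetic data, invented column names, not the Mars dataset): factors with levels `"0"` and `"1"` convert to the integer codes 1 and 2, so subtracting 1 recovers the original binary coding.

```{r}
# Toy illustration: factor levels "0"/"1" become integer codes 1/2,
# so subtracting 1 restores the binary presence/absence coding.
toy <- data.frame(quartz = factor(c("1", "0", "1")),
                  halite = factor(c("0", "0", "1")))
toy.matrix <- sapply(toy, as.numeric) - 1
colSums(toy.matrix)  # per-mineral presence counts: quartz 2, halite 1
```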
## Data Set B: Load the PIXL Data
The PIXL data provides summaries of the mineral compositions measured at selected sample sites by the PIXL instrument. Note that here we scale `pixl.matrix` so features have mean 0 and standard deviation 1, so results will differ from those in Assignment 1.
```{r}
# Load the saved PIXL data with locations added
pixl.df <- readRDS("/academics/MATP-4910-F24/DAR-Mars-F24/Data/samples_pixl_wide.Rds")
# Convert to factors
pixl.df[sapply(pixl.df, is.character)] <- lapply(pixl.df[sapply(pixl.df, is.character)],
as.factor)
# Make the matrix of just mineral percentage measurements
pixl.matrix <- pixl.df[,2:14] %>% scale()
```
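To see exactly what `scale()` does, here is a self-contained toy check (synthetic numbers, not PIXL measurements): each column comes out centered with mean 0 and standard deviation 1.

```{r}
# Toy illustration of scale(): columns are centered and standardized
m <- matrix(c(10, 20, 30, 1, 2, 3), ncol = 2)
m.scaled <- scale(m)
round(colMeans(m.scaled), 10)      # both columns ~ 0
round(apply(m.scaled, 2, sd), 10)  # both columns 1
```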
## Data Set C: Load the LIBS Data
The LIBS data provides summaries of the mineral compositions measured at selected sample sites by the LIBS instrument, part of the Perseverance SuperCam.
```{r}
# Load the saved LIBS data with locations added
libs.df <- readRDS("/academics/MATP-4910-F24/DAR-Mars-F24/Data/supercam_libs_moc_loc.Rds")
#Drop features that are not to be used in the analysis for this notebook
libs.df <- libs.df %>%
select(!(c(distance_mm,Tot.Em.,SiO2_stdev,TiO2_stdev,Al2O3_stdev,FeOT_stdev,
MgO_stdev,Na2O_stdev,CaO_stdev,K2O_stdev,Total)))
# Convert the points to numeric
libs.df$point <- as.numeric(libs.df$point)
# Make a matrix containing only the LIBS measurements for each mineral
libs.matrix <- as.matrix(libs.df[,6:13])
```
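The `select(!(c(...)))` call above drops columns by name. For reference, a base-R equivalent on a made-up toy frame (the column names here are just examples, not the full LIBS schema):

```{r}
# Toy base-R sketch of dropping columns by name
toy <- data.frame(SiO2 = c(48.2, 51.0),
                  SiO2_stdev = c(1.2, 0.9),
                  Total = c(99.5, 100.4))
drop.cols <- c("SiO2_stdev", "Total")
toy.kept <- toy[, setdiff(names(toy), drop.cols), drop = FALSE]
names(toy.kept)  # "SiO2"
```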
## Dataset D: Load the SHERLOC Data
The SHERLOC data you will be using for this lab is the result of scientists' interpretations of extensive spectral analysis of abrasion samples provided by the SHERLOC instrument.
**NOTE:** This dataset presents minerals as rows and sample sites as columns. You'll probably want to rotate the dataset for easier analysis....
```{r}
# Read in data as provided.
sherloc_abrasion_raw <- readRDS("/academics/MATP-4910-F24/DAR-Mars-F24/Data/abrasions_sherloc_samples.Rds")
# Clean up data types
sherloc_abrasion_raw$Mineral<-as.factor(sherloc_abrasion_raw$Mineral)
sherloc_abrasion_raw[sapply(sherloc_abrasion_raw, is.character)] <- lapply(sherloc_abrasion_raw[sapply(sherloc_abrasion_raw, is.character)],
as.numeric)
# Transform NA's to 0
sherloc_abrasion_raw <- sherloc_abrasion_raw %>% replace(is.na(.), 0)
# Reformat data so that rows are "abrasions" and columns list the presence of minerals.
# Do this by "pivoting" to a long format, and then back to the desired wide format.
sherloc_long <- sherloc_abrasion_raw %>%
pivot_longer(!Mineral, names_to = "Name", values_to = "Presence")
# Make abrasion a factor
sherloc_long$Name <- as.factor(sherloc_long$Name)
# Make it a matrix
sherloc.matrix <- sherloc_long %>%
pivot_wider(names_from = Mineral, values_from = Presence)
# Get sample information from PIXL and add to measurements -- assumes order is the same
sherloc.df <- cbind(pixl.df[,c("sample","type","campaign","abrasion")],sherloc.matrix)
# Measurements are everything except first column
sherloc.matrix<-as.matrix(sherloc.matrix[,-1])
```
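The `pivot_longer()`/`pivot_wider()` pair above effectively rotates the table so that abrasions become rows. A base-R sketch of the same rotation on a toy minerals-as-rows frame (abrasion names invented for illustration):

```{r}
# Toy sketch: rotate a minerals-as-rows table into abrasions-as-rows
raw <- data.frame(Mineral = c("Quartz", "Halite"),
                  AbrasionA = c(1, 0),
                  AbrasionB = c(1, 1))
rotated <- as.data.frame(t(raw[, -1]))
colnames(rotated) <- raw$Mineral
rotated  # rows are abrasions, columns are minerals
```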
## Data Set H: Sherloc + Lithology + PIXL
Create a data frame and a matrix by combining the prior datasets appropriately.
```{r}
# Combine the SHERLOC, Lithology, and PIXL dataframes
sherloc_lithology_pixl.df <- cbind(sherloc.df, lithology.df, pixl.df)
# Combine the SHERLOC, Lithology, and PIXL matrices
sherloc_lithology_pixl.matrix <- cbind(sherloc.matrix, lithology.matrix, pixl.matrix)
# Logistic (sigmoid) transform of the z-scored PIXL data, mapping values into (0,1)
pixl.matrix_z <- 1 / (1 + exp(-pixl.matrix))
sherloc_lithology_pixl_z.matrix <- cbind(sherloc.matrix,lithology.matrix,pixl.matrix_z)
```
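The `1 / (1 + exp(-pixl.matrix))` step above is a logistic (sigmoid) map: it squashes the z-scored values into the open interval (0, 1), symmetrically around 0.5. A standalone sketch with a few example z-scores:

```{r}
# Logistic map: z-scores in, values strictly between 0 and 1 out
z <- c(-3, 0, 3)
squashed <- 1 / (1 + exp(-z))
round(squashed, 3)  # 0.047 0.500 0.953
```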
# Analysis of Data (Part 3)
Dataset H: PIXL + SHERLOC + Lithology (with appropriate scaling as necessary; not scaled yet).
1. _Describe the data set contained in the data frame and matrix:_ How many rows does it have and how many features? Which features are measurements and which features are metadata about the samples? (3 pts)
In dataset H, the data frame has 16 rows and 99 features.
Measurement features: chemical oxides such as "Na2O", "MgO", and "SiO2", and mineral/compound phases such as "Sulfate", "Quartz", and "Halite".
Metadata features: "sample", "type", "location", etc.
2. _Scale this data appropriately (you can choose the scaling method or decide to not scale data):_ Explain why you chose a scaling method or to not scale. (3 pts)
I chose to scale the PIXL data but not the SHERLOC or Lithology data because only the PIXL data is non-binary. For the scaling I applied a logistic (sigmoid) transform to the z-scored PIXL values so that they fall between 0 and 1. For the SHERLOC and Lithology data, scaling is not necessary since they are binary. I still run the whole process on both the unscaled and the scaled data to compare the difference.
3. _Cluster the data using k-means or your favorite clustering method (like hierarchical clustering):_ Describe how you picked the best number of clusters. Indicate the number of points in each clusters. (6 pts)
I used k-means clustering and picked k based on the "elbow" of the WSS plot. With the scaled data, I chose k = 7; the cluster sizes from cluster 1 to 7 are 2, 2, 2, 4, 3, 2, 1. For the unscaled data, I chose k = 6; the cluster sizes from cluster 1 to 6 are 1, 3, 7, 1, 2, 2. Comparing the two clusterings, the one with 7 clusters spreads the points more evenly, so the final choice is k = 7.
```{r}
set.seed(400)
# wssplot: total within-cluster sum of squares for k = 1..nc
wssplot <- function(data, nc = 15, seed = 55){
  wss <- data.frame(cluster = 1:nc, quality = c(0))
  for (i in 1:nc){
    set.seed(seed)
    wss[i, 2] <- kmeans(data, centers = i)$tot.withinss
  }
  ggplot(data = wss, aes(x = cluster, y = quality)) +
    geom_line() +
    ggtitle("Quality of k-means by Cluster")
}
# Apply `wssplot()` to our data (z-score on PIXL only)
wssplot(sherloc_lithology_pixl_z.matrix, nc=11, seed=400)
wssplot(sherloc_lithology_pixl.matrix, nc=11, seed=400)
# Use our chosen 'k' to perform k-means clustering
set.seed(400)
# z-score scaling method for PIXL not sherloc
k <- 7
km <- kmeans(sherloc_lithology_pixl_z.matrix,k)
# scaling method
k1 <- 6
km1 <- kmeans(sherloc_lithology_pixl.matrix,k1)
# Heatmaps of the cluster centers for both clusterings
pheatmap(km$centers, scale = "none")
pheatmap(km1$centers, scale = "none")
cluster.df <- data.frame(cluster = 1:7, size = km$size)
kable(cluster.df, caption = "Samples per cluster (scaled, k = 7)")
cluster.df <- data.frame(cluster = 1:6, size = km1$size)
kable(cluster.df, caption = "Samples per cluster (unscaled, k = 6)")
```
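Beyond the elbow plot, the average silhouette width offers a second opinion on the choice of k. The sketch below (from the `cluster` package, which is bundled with R) runs on synthetic data, since the method only needs a numeric matrix; the chosen k here is illustrative, not a result about the Mars data.

```{r}
library(cluster)  # silhouette(); a recommended package bundled with R
set.seed(400)
# Two well-separated synthetic 2-D clusters
toy <- rbind(matrix(rnorm(20, mean = 0), ncol = 2),
             matrix(rnorm(20, mean = 5), ncol = 2))
d <- dist(toy)
sil.width <- sapply(2:5, function(k) {
  km.toy <- kmeans(toy, centers = k, nstart = 10)
  mean(silhouette(km.toy$cluster, d)[, "sil_width"])
})
(2:5)[which.max(sil.width)]  # k with the best average silhouette
```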
4. _Perform a **creative analysis** that provides insights into what one or more of the clusters are and what they tell you about the Mars data. Alternatively, do another creative analysis of your datasets that leads to one or more findings. Make sure to explain your analysis and discuss the results._
```{r}
slp.pca<-prcomp(sherloc_lithology_pixl_z.matrix,scale=FALSE)
ggscreeplot(slp.pca)
summary(slp.pca)
```
Together, the first three components explain about 69.09% of the variance, i.e., most of the variation in the data.
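The proportions reported by `summary()` come straight from the squared component standard deviations; a self-contained sketch on random data (not the Mars matrix) shows the computation:

```{r}
# Proportion of variance per PC from prcomp's component sdev values
set.seed(1)
toy.pca <- prcomp(matrix(rnorm(60), ncol = 3))
prop.var <- toy.pca$sdev^2 / sum(toy.pca$sdev^2)
cumsum(prop.var)  # cumulative proportion, ends at 1
```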
```{r}
# Drop zero-variance (constant) columns so that scaling inside prcomp() is well defined
filtered <- apply(sherloc_lithology_pixl_z.matrix, 2, function(x) var(x) == 0)
filtered_slp <- sherloc_lithology_pixl_z.matrix[, !filtered]
pca_result <- prcomp(filtered_slp, center = TRUE, scale. = TRUE)
pca_data <- data.frame(PC1 = pca_result$x[,1], PC2 = pca_result$x[,2],
Cluster = factor(km$cluster))
ggplot(pca_data, aes(x = PC1, y = PC2, color = Cluster)) +
geom_point(size = 3) +
labs(title = "PCA by Cluster", x = "PC1", y = "PC2") +
theme_minimal()
pca_data <- data.frame(PC1 = pca_result$x[,1], PC3 = pca_result$x[,3],
Cluster = factor(km$cluster))
ggplot(pca_data, aes(x = PC1, y = PC3, color = Cluster)) +
geom_point(size = 3) +
labs(title = "PCA by Cluster", x = "PC1", y = "PC3") +
theme_minimal()
```
1. Cluster 1 (Pink):
PC1 vs. PC2 plot: Cluster 1 appears as two data points clustered tightly together near (PC1 = 4, PC2 = 0). This suggests that these points have very similar features in the first two principal components.
PC1 vs. PC3 plot: In the second plot, these same two points are also tightly clustered, but closer to the bottom near (PC1 = 4, PC3 = -3). This indicates a relatively consistent relationship in PC1 but a significant difference in PC3.
2. Cluster 2 (Brown):
PC1 vs. PC2 plot: Cluster 2 has a single point near (PC1 = 0, PC2 = -1), which means it lies near the origin, showing moderate values in both principal components.
PC1 vs. PC3 plot: The position remains relatively central around (PC1 = 0, PC3 = 0), meaning this point does not stand out in PC3 either, compared to other clusters.
3. Cluster 3 (Green):
PC1 vs. PC2 plot: Cluster 3 is further spread out along the PC1 axis, with its point near (PC1 = 5, PC2 = -2). This suggests that Cluster 3 has unique characteristics with high variance in PC1.
PC1 vs. PC3 plot: This cluster stays distant from the others, with a significant value in PC3 around (PC1 = 4, PC3 = 5), emphasizing the distinct separation in both PC1 and PC3.
4. Cluster 4 (Cyan):
PC1 vs. PC2 plot: Points in Cluster 4 are somewhat scattered around (PC1 = -2 to 0, PC2 = 0 to 3). This shows moderate variability in PC2 but not much in PC1.
PC1 vs. PC3 plot: The points remain moderately spread in PC3, but with a slight upward shift near (PC1 = -1 to 0, PC3 = 0 to 3). This suggests moderate variation in PC3 as well.
5. Cluster 5 (Blue):
PC1 vs. PC2 plot: Cluster 5 shows several points dispersed in the upper-left area, between (PC1 = -4 to -3, PC2 = 4 to 6). This wide spread in both PC1 and PC2 suggests considerable variation.
PC1 vs. PC3 plot: The blue points are spread out between (PC1 = -4 to -2, PC3 = 3 to 4), indicating variance in both PC1 and PC3, though less so in PC3.
6. Cluster 6 (Purple):
PC1 vs. PC2 plot: Cluster 6 is an outlier in the bottom-left quadrant with a point near (PC1 = -5, PC2 = -5). This suggests it is very distinct in both PC1 and PC2.
PC1 vs. PC3 plot: This point is also an outlier on the lower side with (PC1 = -5, PC3 = -3), reinforcing that Cluster 6 is unique in all three components.
7. Cluster 7 (Red):
PC1 vs. PC2 plot: Cluster 7 lies close to Cluster 1, near (PC1 = 3, PC2 = 0), indicating a similarity in PC1 but with slight variation in PC2.
PC1 vs. PC3 plot: However, the separation becomes more visible here, as it’s positioned around (PC1 = 3, PC3 = -2), suggesting divergence in PC3.
Summary:
Clusters 5 (blue) and 6 (purple) exhibit the most distinct separation from the other clusters, with high variability in both PC1 and PC2, as well as in PC3.
Clusters 1 (pink) and 7 (red) are closely clustered, with minimal separation in PC1, and their difference becomes more apparent in PC3.
Clusters 2 (brown) and 4 (cyan) occupy more central positions in the PCA plots, showing less variation compared to the other clusters.
# Preparation of Team Presentation (Part 4)
Prepare a presentation of your team's results to present in class on **September 11** starting at 9am in AE217 (20 pts).
The presentation should include the following elements:
0. Your team's name and members
1. A **Description** of the data set that you analyzed including how many observations and how many features. (<= 1.5 mins)
2. Each team member gets **three minutes** to explain their analysis:
* what analysis they performed
* the results of that analysis
* a brief discussion of their interpretation of these results
* <= 18 mins _total!_
3. A **Conclusion** slide indicating major findings of the teams (<= 1.5 mins)
4. Thoughts on **potential next steps** for the MARS team (<= 1.5 mins)
* A template for your team presentation is included here: https://bit.ly/dar-template-f24
* The rubric for the presentation is here:
https://docs.google.com/document/d/1-4o1O4h2r8aMjAplmE-ItblQnyDAKZwNs5XCnmwacjs/pub
* Post a link to your team's presentation in the MARS Webex chat before class. You can continue to edit until the last minute.
# When you're done: SAVE, COMMIT and PUSH YOUR CHANGES!
When you are satisfied with your edits and your notebook knits successfully, remember to push your changes to the repo using the following steps:
* `git branch`
* To double-check that you are in your working branch
* `git add <your changed files>`
* `git commit -m "Some useful comments"`
* `git push origin <your branch name>`
* Submit a pull request
# APPENDIX: Accessing RStudio Server on the IDEA Cluster
The IDEA Cluster provides seven compute nodes (4x 48 cores, 3x 80 cores, 1x storage server)
* The Cluster requires RCS credentials, enabled via registration in class
* email John Erickson for problems `erickj4@rpi.edu`
* RStudio, Jupyter, MATLAB, GPUs (on two nodes); lots of storage and computes
* Access via RPI physical network or VPN only
# More info about RStudio on our Cluster
## RStudio GUI Access:
* Use:
* http://lp01.idea.rpi.edu/rstudio-ose/
* http://lp01.idea.rpi.edu/rstudio-ose-3/
* http://lp01.idea.rpi.edu/rstudio-ose-6/
* http://lp01.idea.rpi.edu/rstudio-ose-7/
* Linux terminal accessible from within RStudio "Terminal" or via ssh (below)