Notes from course 31 Oct 2017 #136
- When running RStudio in Docker it does not allow you to save files to disk...
- Add SCnorm/quantile normalization.
- Use the `normalizeExprs()` function from scater in the "dealing with confounders" section (should be the same as the glm approach).
- Error in the `calc_cpm()` function in section 3.14.3 (see the CPM sketch after this list).
- Add cyclone for cell-cycle analysis/regression? (see the cyclone sketch after this list)
- Add a "Truth" section to the beginning of the Biological Analysis chapter discussing the data and expectations. (Fix the cell-type labels for TE & ICM.)
- Clustering for rare cell-types.
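
For the `calc_cpm()` note above, a minimal sketch of what a working counts-per-million helper could look like, assuming the input is a gene-by-cell matrix of raw counts. The function body here is illustrative, not the course's exact code:

```r
# Hypothetical CPM helper, assuming `expr_mat` is a gene-by-cell matrix
# of raw counts (not the course's exact code).
calc_cpm <- function(expr_mat) {
  norm_factor <- colSums(expr_mat)      # total counts per cell
  t(t(expr_mat) / norm_factor) * 1e6    # scale each cell to one million
}

# Toy usage: every cell's column should sum to one million afterwards
counts <- matrix(rpois(200, lambda = 5), nrow = 20)
cpm <- calc_cpm(counts)
stopifnot(all(abs(colSums(cpm) - 1e6) < 1e-6))
```

Note that a cell with zero total counts would produce `NaN` values, so such cells should be removed during QC before this step.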
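For the cyclone note, a hedged sketch of cell-cycle phase assignment with scran's `cyclone()`, using the pre-trained mouse marker pairs shipped with the package. The `sce` object is an assumption: a SingleCellExperiment with Ensembl gene IDs as row names.

```r
library(scran)

# Pre-trained mouse cell-cycle marker pairs shipped with scran;
# use "human_cycle_markers.rds" for human data.
mm.pairs <- readRDS(system.file("exdata", "mouse_cycle_markers.rds",
                                package = "scran"))

# `sce` is assumed to be a SingleCellExperiment with Ensembl IDs as rownames
assignments <- cyclone(sce, pairs = mm.pairs)
table(assignments$phases)   # G1 / S / G2M calls per cell
```

The resulting phase scores could then be regressed out as a confounder if the cell cycle is driving unwanted variation.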
Analysis of single cell RNA-seq data - Tue 31 Oct 2017

Questions for the Presenters:

**Q:** There are several recent articles aimed at addressing the large quantity of zeros that appear in the expression matrix of single-cell RNA-seq. Can someone talk a bit about the correct way to deal with the very sparse matrix that you get after constructing the expression matrix?

**A:** We will be talking about this later in the course in more detail, but briefly: the frequency of zeros is closely related to the expression level of each gene, so the zeros can be considered informative. Imputation methods are available as well, but we do not recommend using them except for visualizations, as they tend to introduce circularity into your data, thus increasing the Type I error rate of any statistical tests you employ. In general, the most useful approach to dealing with the sparsity is to perform feature selection on genes (and/or dimensionality reduction) to remove uninformative noise, which often includes most of the very sparsely sampled genes, from the data (see the gene-filtering sketch below).

If you wonder how Fig. 3.2 was generated, instructions are here: #120

**Q:** Could you explain what the difference (advantages) is between doing the data visualization with UMIs vs reads? Are both necessary?

**A:** If you have UMIs you should always use them instead of the reads, because they are much cleaner. If you only have reads (SMART-Seq) you have no other option except using the reads. In the course we considered both just for exercise purposes.

**Q:** In the absence of a batch effect, or if we are only looking at samples from the same batch, how does CPM compare to logNormalization? Which one would you recommend?

**A:** logNormalisation is not a real normalisation; it is a way of bringing all your values onto the same scale (if I understand your definition of logNormalisation correctly). You still need to log-transform your data after doing CPM. Does that answer your question?

**Q:** I was thinking about the logNormalization method from the Seurat package, for example: it normalizes the gene expression measurements for each cell by the total expression, multiplies this by a scale factor (10,000 by default), and log-transforms the result. How would this be comparable to CPM?

**A:** I think after the scaling and log transform they might be similar. It is exactly the same as log2-CPM, only scaled by 10,000 rather than 1,000,000. IMO it makes the most sense to scale to the median total counts of your particular dataset rather than some external reference (e.g. 10,000 or 1,000,000), unless you are comparing across multiple datasets (a sketch comparing these scalings follows below).

The most recent comparison of DE methods for scRNA-seq data
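
To illustrate the feature-selection point from the first answer, a minimal sketch that drops genes detected in very few cells before downstream analysis. The detection cutoff of 2 cells and the toy data are arbitrary assumptions, not a course recommendation:

```r
# Hedged sketch: remove very sparsely detected genes, assuming `counts`
# is a gene-by-cell matrix of raw counts.
counts <- matrix(rpois(2000, lambda = 0.5), nrow = 200)  # toy sparse data

keep <- rowSums(counts > 0) >= 2   # gene detected in at least 2 cells
filtered <- counts[keep, ]
dim(filtered)                      # fewer, more informative genes remain
```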
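And to make the scale-factor discussion concrete, a sketch comparing log2-CPM, a Seurat-style LogNormalize (natural log, 10,000 scale factor), and scaling to the dataset's own median total counts. The toy matrix is an assumption; only the arithmetic is the point:

```r
# Toy gene-by-cell count matrix
counts <- matrix(rpois(2000, lambda = 5), nrow = 200)
totals <- colSums(counts)   # total counts (library size) per cell

# log2-CPM: scale each cell to 1,000,000 total counts, then log2(x + 1)
log_cpm <- log2(t(t(counts) / totals) * 1e6 + 1)

# Seurat-style LogNormalize: the same scaling idea with a 10,000 scale
# factor and a natural log; it differs from log2-CPM only in the scale
# factor and the log base
log_seurat <- log1p(t(t(counts) / totals) * 1e4)

# Scaling to the median library size of this particular dataset, as
# suggested in the answer above
log_median <- log1p(t(t(counts) / totals) * median(totals))
```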