Hello everyone. Today we are not covering anything new; instead we will review how scanpy integrates multiple 10X single-cell (or 10X spatial transcriptomics) samples. The approach differs quite a bit from Seurat's, and so far relatively few publications cite scanpy for multi-sample integration, perhaps because Seurat is so dominant, but I find scanpy's approach genuinely thoughtful. Let's walk through the workflow and discuss the points that deserve attention.
First, the introduction from the scanpy docs: "The ingest function assumes an annotated reference dataset that captures the biological variability of interest." (This sentence matters: you first need an annotated reference dataset, which is then used to "capture" the biological variation in, say, a disease sample.) "The rationale is to fit a model on the reference data and use it to project new data" (a model is fitted on the reference dataset and then used to map the new dataset onto it). "For the time being, this model is a PCA combined with a neighbor lookup search tree, for which we use UMAP's implementation."
Note this point: for integration, scanpy expects an annotated dataset to serve as the reference. Conceptually that is how it should be, yet with Seurat we often integrate without one. I appreciate scanpy's choice here as the more principled design.
Key features
- As ingest is simple and the procedure clear, the workflow is transparent and fast.
- Like BBKNN, ingest leaves the data matrix itself invariant.
- Unlike BBKNN, ingest solves the label mapping problem (like scmap) and maintains an embedding that might have desired properties like specific clusters or trajectories.
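The mechanics behind ingest (a PCA fitted on the reference, with labels transferred via a nearest-neighbor lookup; scanpy uses UMAP's search tree rather than the brute-force search below) can be sketched in plain numpy on toy data. Everything here is made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two well-separated "cell populations" in a 5-gene space.
ref = np.vstack([rng.normal(0, 0.1, (30, 5)), rng.normal(5, 0.1, (30, 5))])
ref_labels = np.array(["A"] * 30 + ["B"] * 30)
query = np.vstack([rng.normal(0, 0.1, (10, 5)), rng.normal(5, 0.1, (10, 5))])

# 1) Fit PCA on the reference only (center, then take top axes via SVD).
mean = ref.mean(axis=0)
_, _, vt = np.linalg.svd(ref - mean, full_matrices=False)
components = vt[:2]                         # top-2 principal axes
ref_pca = (ref - mean) @ components.T

# 2) Project the query into the *reference* PCA space (no refitting).
query_pca = (query - mean) @ components.T

# 3) Transfer labels from the nearest reference neighbor in that space.
dists = np.linalg.norm(query_pca[:, None, :] - ref_pca[None, :, :], axis=2)
query_labels = ref_labels[dists.argmin(axis=1)]
print(query_labels)
```

The key point the features above emphasize: the query matrix itself is never modified; only an embedding and labels are attached to it.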
Let's look at the official scanpy example.
We refer to this asymmetric dataset integration as ingesting annotations from an annotated reference adata_ref into an adata that still lacks this annotation (consistent with the introduction above). It is different from learning a joint representation that integrates datasets in a symmetric way, as BBKNN, Scanorama, Conos, or CCA (the method Seurat commonly uses for integration) or a conditional VAE (e.g. in scVI, trVAE) would do, but it is comparable to the initial MNN implementation in scran (a package that is itself less used now, though parts of its functionality are still needed). Either way, scanpy's approach to sample integration really is distinct from the other tools.
Load the modules and example data.
import scanpy as sc
import pandas as pd
import seaborn as sns
adata_ref = sc.datasets.pbmc3k_processed() # this is an earlier version of the dataset from the pbmc3k tutorial
adata = sc.datasets.pbmc68k_reduced()
The first point worth noting
var_names = adata_ref.var_names.intersection(adata.var_names)
adata_ref = adata_ref[:, var_names]
adata = adata[:, var_names]
This step deserves some care. The highly variable genes come from adata (the query); no highly variable genes are recomputed for the reference. Instead, adata's highly variable genes are intersected with all of the reference's genes, and both datasets are then subset to that shared gene list for the rest of the analysis. Pay attention here, because it touches on several issues, such as how many highly variable genes there are and how the selection thresholds were chosen. I also like this design: it lets the differences between the normal and disease datasets show up during dimensionality reduction and clustering.
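The intersection step is plain pandas Index arithmetic, so it is easy to sanity-check how many genes survive before subsetting. A minimal sketch with made-up gene lists (the gene names are illustrative, not from the tutorial data):

```python
import pandas as pd

# Hypothetical var_names: the query's highly variable genes vs. all reference genes.
query_hvgs = pd.Index(["CD3D", "NKG7", "MS4A1", "FAKE1"])
ref_genes = pd.Index(["CD3D", "NKG7", "MS4A1", "LYZ", "GNLY"])

# Same operation as adata_ref.var_names.intersection(adata.var_names):
shared = ref_genes.intersection(query_hvgs)
print(sorted(shared.tolist()))  # "FAKE1" and "LYZ"/"GNLY" are dropped
print(len(shared))
```

If `len(shared)` comes out much smaller than the query's highly variable gene count, the two datasets were probably processed with different gene annotations, and the downstream embedding will suffer.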
Run dimensionality reduction and clustering on the reference. The reference dataset has already been subset to the disease sample's highly variable genes, and is now reduced and clustered:
sc.pp.pca(adata_ref)
sc.pp.neighbors(adata_ref)
sc.tl.umap(adata_ref)
sc.pl.umap(adata_ref, color='louvain')
Note that this plot is not what you would get by clustering the reference on its own highly variable genes; some mixing between cell types is visible, precisely because the gene set was chosen to enable integration with the disease dataset.
Mapping the labels
Let’s map labels and embeddings from adata_ref to adata based on a chosen representation. Here, we use adata_ref.obsm['X_pca'] to map cluster labels and the UMAP coordinates.
sc.tl.ingest(adata, adata_ref, obs='louvain') # nowadays leiden clustering is more common than louvain
adata.uns['louvain_colors'] = adata_ref.uns['louvain_colors'] # fix colors
sc.pl.umap(adata, color=['louvain', 'bulk_labels'], wspace=0.5)
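Since the pbmc68k query already carries its own annotation (`bulk_labels`), a quick sanity check after ingest is to cross-tabulate the transferred labels against it rather than only eyeballing the two UMAP panels. A minimal pandas sketch with made-up labels standing in for the two obs columns:

```python
import pandas as pd

# Hypothetical ingested labels vs. the query's own prior annotation.
df = pd.DataFrame({
    "louvain":     ["CD4 T", "CD4 T", "B", "B", "NK"],  # transferred by ingest
    "bulk_labels": ["CD4 T", "NK",    "B", "B", "NK"],  # pre-existing annotation
})

# Confusion table: rows = ingested labels, columns = original labels.
agreement = pd.crosstab(df["louvain"], df["bulk_labels"])
print(agreement)

# Fraction of cells where the two annotations coincide.
frac = (df["louvain"] == df["bulk_labels"]).mean()
print(f"label agreement: {frac:.0%}")
```

On real data you would run the same crosstab on `adata.obs` directly; large off-diagonal counts flag cell types where the reference does not represent the query well.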
Concatenate the datasets
adata_concat = adata_ref.concatenate(adata, batch_categories=['ref', 'new'])
adata_concat.obs.louvain = adata_concat.obs.louvain.astype('category')
adata_concat.obs.louvain = adata_concat.obs.louvain.cat.reorder_categories(adata_ref.obs.louvain.cat.categories) # fix category ordering (inplace= was removed in newer pandas)
adata_concat.uns['louvain_colors'] = adata_ref.uns['louvain_colors'] # fix category colors
sc.pl.umap(adata_concat, color=['batch', 'louvain'])
While there seems to be some batch effect in the monocyte and dendritic cell clusters, the new data is otherwise mapped relatively homogeneously. In other words, a certain amount of batch effect remains.
The megakaryocytes are only present in adata_ref, and no cells from adata map onto them. If the reference and query data are interchanged, the megakaryocytes no longer appear as a separate cluster. This is an extreme case because the reference data is very small; but one should always question whether the reference data contains enough biological variation to meaningfully accommodate the query data.
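Reference-only clusters like the megakaryocytes can be detected programmatically instead of visually, by tabulating each cluster's batch composition. A sketch on hypothetical obs columns mimicking `adata_concat.obs`:

```python
import pandas as pd

# Hypothetical concatenated obs: which clusters contain cells from each batch?
obs = pd.DataFrame({
    "batch":   ["ref"] * 5 + ["new"] * 4,
    "louvain": ["Megakaryocytes", "B", "B", "NK", "NK",  # ref cells
                "B", "NK", "NK", "B"],                   # new cells
})
comp = pd.crosstab(obs["louvain"], obs["batch"])
print(comp)

# Clusters with zero "new" cells are reference-only, like the megakaryocytes.
ref_only = comp.index[comp["new"] == 0].tolist()
print(ref_only)
```

The same two lines on the real `adata_concat.obs` immediately list the clusters the query failed to populate, which is exactly the check the paragraph above recommends.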
Batch correction on the concatenated data
sc.tl.pca(adata_concat)
sc.external.pp.bbknn(adata_concat, batch_key='batch')
sc.tl.umap(adata_concat)
sc.pl.umap(adata_concat, color=['batch', 'louvain'])
BBKNN's correction works reasonably well, but it has one drawback: it is not well suited to discovering new cell types.
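BBKNN's core idea, which also explains that drawback, is that each cell is forced to take its k nearest neighbors from every batch, so batch-specific populations get stitched to whatever is nearest in the other batch. A brute-force numpy sketch on toy data (the real package uses approximate-neighbor search, not this loop):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy embedding: 6 cells from batch 0 and 6 from batch 1, in 2-D PCA space.
X = rng.normal(size=(12, 2))
batches = np.array([0] * 6 + [1] * 6)
k = 2  # neighbours to keep per batch

# Batch-balanced kNN: for every cell, take its k nearest neighbours
# *within each batch* instead of k global neighbours.
neighbours = {}
for i in range(len(X)):
    picks = []
    for b in (0, 1):
        idx = np.where(batches == b)[0]
        idx = idx[idx != i]                       # exclude the cell itself
        d = np.linalg.norm(X[idx] - X[i], axis=1)
        picks.extend(idx[np.argsort(d)[:k]].tolist())
    neighbours[i] = picks
print(neighbours[0])
```

Because every cell gets neighbors from every batch by construction, a genuinely novel population in one batch cannot form its own isolated cluster, which is why BBKNN can mask new cell types.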
Density plot
sc.tl.embedding_density(adata_concat, groupby='batch')
sc.pl.embedding_density(adata_concat, groupby='batch')
Partial visualization of a subset of groups in the embedding
for batch in ['ref', 'new']:
    sc.pl.umap(adata_concat, color='batch', groups=[batch])
More on all of this deserves further exploration.