Causal Inference 1: Propensity Score Matching (PSM)

Adapted from 顧先生聊數(shù)據(jù)'s posts "PSM Propensity Score Matching (Part 1: Theory)" and "PSM Propensity Score Matching (Part 2: Python Practice)".

Definition: Propensity score matching (PSM) fits a model to the data that maps each user's multi-dimensional features to a single probability (the propensity score), then finds, for each treated unit, the closest-scoring samples in the control group so the two can be compared.

Assumptions

  1. Conditional independence assumption. Before the treatment, there is no systematic difference between the control and treatment groups, so any effect observed in the treatment group comes entirely from the treatment itself.
  2. Common support assumption. Ideally, every individual in the treatment group can be matched to a counterpart in the control group. As the figure below shows, when the two groups share few overlapping propensity values, PSM no longer applies.
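The common support assumption can be checked numerically before matching. A minimal sketch, using synthetic propensity scores (the distributions and the 50% threshold below are illustrative assumptions, not part of the original tutorial):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic propensity scores for illustration: treated scores skew high,
# control scores skew low, but the two ranges overlap.
treated = rng.beta(4, 2, size=500)
control = rng.beta(2, 4, size=2000)

# Common-support region: the overlap of the two score ranges.
lo = max(treated.min(), control.min())
hi = min(treated.max(), control.max())

# Fraction of treated units whose score falls inside the overlap region;
# if this fraction is small, PSM is not appropriate for the sample.
on_support = ((treated >= lo) & (treated <= hi)).mean()
print(f"common support: [{lo:.3f}, {hi:.3f}], treated on support: {on_support:.1%}")
```

In practice, plotting the two score distributions (e.g. overlaid histograms) gives the same diagnosis visually.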

Code
1. Import the Python libraries and set the path and model

import psmatching.match as psm
import pandas as pd
import numpy as np
from psmatching.utilities import *
import statsmodels.api as sm

# Path to the input data
path = "E:/pythonFile/data/psm/psm_gxslsj_data.csv"
# The model formula is "treatment ~ covariate + covariate + ..."
model = "PUSH ~ AGE + SEX + VIP_LEVEL + LASTDAY_BUY_DIFF + PREFER_TYPE + LOGTIME_PREFER + USE_COUPON_BEFORE + ACTIVE_LEVEL"
# Number of matches per treated unit; with k = 3, each PUSH=1 user is matched
# to three similar PUSH=0 users. It is defined as a string here and converted
# with int() in step 4.
k = "3"
m = psm.PSMatch(path, model, k)

2. Compute the propensity scores

df = pd.read_csv(path)
df = df.set_index("use_id")    # use use_id as the index; swap in your own id column
print("\nComputing propensity scores ...", end = " ")
# Fit the propensity model as a logistic regression:
# a generalized linear model with a Binomial family.
glm_binom = sm.formula.glm(formula = model, data = df, family = sm.families.Binomial())
# Fit the generalized linear model for the given family
# https://www.w3cschool.cn/doc_statsmodels/statsmodels-generated-statsmodels-genmod-generalized_linear_model-glm-fit.html?lang=en
result = glm_binom.fit()
# Print the regression summary:
# print(result.summary())
propensity_scores = result.fittedvalues
print("\nDone!")
# Write the propensity scores back into the data
df["PROPENSITY"] = propensity_scores
df

3. Separate treated and control units
groups is the treatment indicator and propensity holds the scores. Split the sample into treated and control, and make sure n1 < n2, so that the smaller group is matched against the larger one.

groups = df.PUSH                        # replace PUSH with your own treatment column
propensity = df.PROPENSITY
# Convert the treatment indicator to True/False
# (this assumes the second unique value is the treated label)
groups = groups == groups.unique()[1]
n = len(groups)
# Count treated (True) and control (False) units
n1 = groups.sum()
n2 = n - n1
g1, g2 = propensity[groups==1], propensity[groups==0]
# Ensure n2 > n1 (the smaller group is matched against the larger); otherwise swap
if n1 > n2:
    n1, n2, g1, g2 = n2, n1, g2, g1

m_order = list(np.random.permutation(groups[groups==1].index))    # shuffle the treated units to reduce the influence of the original ordering
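Step 4 below uses np.partition to pull out the k smallest score differences without fully sorting the array. A minimal illustration with toy numbers:

```python
import numpy as np

# Toy propensity-score differences for one treated unit vs. five controls.
diffs = np.array([0.30, 0.02, 0.18, 0.05, 0.11])
k = 3
# np.partition places the k smallest elements (in arbitrary order) in the
# first k positions; cheaper than a full sort for large candidate pools.
k_smallest = np.partition(diffs, k)[:k].tolist()
print(sorted(k_smallest))  # → [0.02, 0.05, 0.11]
```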

4. Match treated to control units by propensity score difference
Note: caliper = None can be replaced with whatever tolerance you want.

matches = {}
k = int(k)
print("\nMatching [" + str(k) + "] control units to each treated unit ... ", end = " ")
for uid in m_order:    # renamed from m to avoid shadowing the PSMatch object
    # Absolute propensity-score difference between this treated unit
    # and every remaining control unit (the simplest possible distance).
    dist = abs(g1[uid] - g2)
    array = np.array(dist)
    # When matching without replacement the control pool shrinks, so we may
    # end up needing k matches with fewer than k candidates left; guard for it.
    if k < len(array):
        # Take the k smallest differences and convert them to a list
        k_smallest = np.partition(array, k)[:k].tolist()
        # Optional caliper: a maximum acceptable score difference
        caliper = None
        if caliper:
            caliper = float(caliper)
            # Keep only the differences within the caliper
            keep_diffs = [i for i in k_smallest if i <= caliper]
            keep_ids = np.array(dist[dist.isin(keep_diffs)].index)
        else:
            # No caliper: keep all k nearest candidates
            keep_ids = np.array(dist[dist.isin(k_smallest)].index)
        # Ties can make keep_ids longer than k: sample k of them at random.
        # A tight caliper can make it shorter than k: pad with "NA".
        if len(keep_ids) > k:
            matches[uid] = list(np.random.choice(keep_ids, k, replace=False))
        elif len(keep_ids) < k:
            matches[uid] = keep_ids.tolist()
            while len(matches[uid]) < k:
                matches[uid].append("NA")
        else:
            matches[uid] = keep_ids.tolist()
        # Matching without replacement: remove the used controls from the pool
        replace = False
        if not replace:
            g2 = g2.drop([i for i in matches[uid] if i != "NA"])
print("\nMatching done!")

5. Assemble the matching results

matches = pd.DataFrame.from_dict(matches, orient="index")
matches = matches.reset_index()
column_names = {}
column_names["index"] = "treated_id"
for i in range(k):
    column_names[i] = "matched_control_" + str(i+1)
matches = matches.rename(columns = column_names)
matches
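With the match table in hand, the matched control ids can be joined back to the original data to compare outcomes, which is the point of PSM. A hedged sketch on toy stand-ins for df and matches; the SPEND outcome column and all ids are hypothetical:

```python
import pandas as pd

# Toy stand-ins for the outputs of the steps above (ids illustrative).
df = pd.DataFrame(
    {"SPEND": [120.0, 80.0, 95.0, 110.0]},   # hypothetical outcome column
    index=pd.Index([1, 2, 3, 4], name="use_id"))
matches = pd.DataFrame({"treated_id": [1],
                        "matched_control_1": [2],
                        "matched_control_2": [3]})

# Reshape the wide match table to long form: one (treated, control) pair per row.
long = matches.melt(id_vars="treated_id", value_name="control_id").dropna()

# Look up the outcome for both sides of each pair.
long["treated_outcome"] = long["treated_id"].map(df["SPEND"])
long["control_outcome"] = long["control_id"].map(df["SPEND"])

# Naive matched-sample effect estimate: mean treated minus mean control.
effect = long["treated_outcome"].mean() - long["control_outcome"].mean()
print(round(effect, 2))  # → 32.5
```

Real runs should first drop or replace the "NA" padding values produced in step 4, since they are strings rather than control ids.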