TensorFlow Natural Language Processing: TF-IDF

Preface

  • Attitude determines altitude! Make excellence a habit!
  • There is nothing in this world that one round of overtime can't solve; if there is, work two! (—— Maoqiang)

TF-IDF

  • First, look at the formula
    [Image: the TF-IDF formula]

    The textbook formula is not ideal on its own; in practice a smoothed variant is generally used to avoid a zero denominator. This post uses sklearn's default formula:

  tf-idf(d, t) = tf(t) * idf(d, t)
  idf(d, t) = log [ n / (df(d, t) + 1) ]
  # t is the term, d the document, n the total number of documents,
  # and df(d, t) the number of documents containing t
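
    For intuition, here is a minimal worked example of the formula above (a pure-Python sketch added for illustration; the toy corpus is made up):

    import math

    # Toy corpus: n = 3 documents, each a list of tokens
    docs = [["the", "cat", "sat"],
            ["the", "dog", "sat"],
            ["the", "cat", "ran"]]
    n = len(docs)

    def tf(term, doc):
        # term frequency: raw count of the term in one document
        return doc.count(term)

    def idf(term):
        # document frequency: how many documents contain the term;
        # the +1 in the denominator is the smoothing that avoids
        # division by zero for terms that appear in no document
        df = sum(1 for doc in docs if term in doc)
        return math.log(n / (df + 1))

    # "the" occurs in all 3 docs -> idf = log(3/4) < 0 (too common)
    # "dog" occurs in only 1 doc -> idf = log(3/2) > 0 (distinctive)
    for term in ("the", "dog"):
        print(term, tf(term, docs[1]) * idf(term))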
  • Data preparation
    Read the raw data from a file.
    After loading, the data looks like this:
    [Image: the text data]
    [Image: the label data]
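
    The post does not show the loading code itself. The sketch below is one plausible way to build the texts list and binary target labels used in later steps, assuming one "label<TAB>text" record per line; the file name and the spam/ham label scheme are assumptions, not from the original:

    # Hypothetical loading sketch: adapt the path and label names
    # to your own data file.
    texts = []
    target = []
    with open("text_data.txt", "r", encoding="utf-8") as f:
        for line in f:
            label, text = line.strip().split("\t", 1)
            texts.append(text.lower())
            target.append(1 if label == "spam" else 0)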
  • Declare the dependencies and static parameters
    import tensorflow as tf
    import matplotlib.pyplot as plt
    import re
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer

    sess = tf.Session()
    batch_size = 200
    max_features = 10000

  • Vectorize the text with TfidfVectorizer from sklearn.feature_extraction.text
    def tokenizer(text):
        # naive whitespace tokenizer
        words = text.split(" ")
        return words

    stop_words = set()
    tfidf = TfidfVectorizer(tokenizer=tokenizer, stop_words=stop_words, max_features=max_features)
    sparse_tfidf_texts = tfidf.fit_transform(texts)
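
    A quick sanity check on the result (added here for illustration): the output is a scipy sparse matrix with one row per document and one column per vocabulary term.

    # Shape should be (number_of_documents, max_features)
    print(sparse_tfidf_texts.shape)
    # vocabulary_ maps each kept term to its column index
    print(list(tfidf.vocabulary_.items())[:5])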

  • Split the data into training and test sets
    train_indices = np.random.choice(sparse_tfidf_texts.shape[0], round(0.8 * sparse_tfidf_texts.shape[0]), replace=False)
    test_indices = np.array(list(set(range(sparse_tfidf_texts.shape[0])) - set(train_indices)))
    texts_train = sparse_tfidf_texts[train_indices]
    texts_test = sparse_tfidf_texts[test_indices]
    target_train = np.array([x for ix, x in enumerate(target) if ix in train_indices])
    target_test = np.array([x for ix, x in enumerate(target) if ix in test_indices])
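
    Equivalently, sklearn's train_test_split handles the sparse matrix and the labels in one call (an alternative to the manual split above, not used in the original post):

    from sklearn.model_selection import train_test_split

    texts_train, texts_test, target_train, target_test = train_test_split(
        sparse_tfidf_texts, np.array(target), test_size=0.2)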

  • Define the logistic regression model's variables and placeholders
    A = tf.Variable(tf.random_normal(shape=[max_features, 1]))
    b = tf.Variable(tf.random_normal(shape=[1, 1]))
    # Initialize placeholders
    x_data = tf.placeholder(shape=[None, max_features], dtype=tf.float32)
    y_target = tf.placeholder(shape=[None, 1], dtype=tf.float32)

  • Define the model and the loss function
    model_output = tf.add(tf.matmul(x_data, A), b)
    # labels are the ground-truth targets; logits are the raw model output
    loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=y_target, logits=model_output))
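
    For reference, tf.nn.sigmoid_cross_entropy_with_logits computes, per element, the numerically stable closed form max(z, 0) - z*y + log(1 + exp(-|z|)) for logits z and labels y. A small numpy check of that formula (added for illustration):

    import numpy as np

    def sigmoid_xent(labels, logits):
        # numerically stable sigmoid cross-entropy
        z, y = logits, labels
        return np.maximum(z, 0) - z * y + np.log1p(np.exp(-np.abs(z)))

    # A confident, correct prediction yields a small loss
    print(sigmoid_xent(labels=1.0, logits=4.0))  # ~0.018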

  • Define the prediction op and the accuracy computation
    prediction = tf.round(tf.sigmoid(model_output))
    predictions_correct = tf.cast(tf.equal(prediction, y_target), tf.float32)
    accuracy = tf.reduce_mean(predictions_correct)

  • Define the optimizer and initialize the variables
    my_opt = tf.train.GradientDescentOptimizer(0.05)
    train_step = my_opt.minimize(loss)
    # Initialize variables
    init = tf.global_variables_initializer()
    sess.run(init)

  • Train the model
    train_loss = []
    test_loss = []
    train_acc = []
    test_acc = []
    i_data = []
    for i in range(10000):
        rand_index = np.random.choice(texts_train.shape[0], size=batch_size)
        rand_x = texts_train[rand_index].todense()
        rand_y = np.transpose([target_train[rand_index]])
        sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y})
        # Only record loss and accuracy every 100 generations
        if (i + 1) % 100 == 0:
            i_data.append(i + 1)
            train_loss_temp = sess.run(loss, feed_dict={x_data: rand_x, y_target: rand_y})
            train_loss.append(train_loss_temp)
            test_loss_temp = sess.run(loss, feed_dict={x_data: texts_test.todense(), y_target: np.transpose([target_test])})
            test_loss.append(test_loss_temp)
            train_acc_temp = sess.run(accuracy, feed_dict={x_data: rand_x, y_target: rand_y})
            train_acc.append(train_acc_temp)
            test_acc_temp = sess.run(accuracy, feed_dict={x_data: texts_test.todense(), y_target: np.transpose([target_test])})
            test_acc.append(test_acc_temp)
        if (i + 1) % 500 == 0:
            acc_and_loss = [i + 1, train_loss_temp, test_loss_temp, train_acc_temp, test_acc_temp]
            acc_and_loss = [np.round(x, 2) for x in acc_and_loss]
            print('Generation # {}. Train Loss (Test Loss): {:.2f} ({:.2f}). Train Acc (Test Acc): {:.2f} ({:.2f})'.format(*acc_and_loss))
    The data fed to the model in each batch looks like this:
    [Image: a sample rand_x batch]
    [Image: a sample rand_y batch]
  • Finally, plot the loss and accuracy curves from training
    No hyperparameter tuning was done here; readers are encouraged to tune the parameters themselves for better results.

    plt.figure(1)  # Figure 1: loss curves
    x = [i for i in range(0, len(train_loss))]
    plt.plot(x, train_loss, "b-*")  # blue: training loss
    plt.plot(x, test_loss, "r-+")   # red: test loss
    plt.figure(2)  # Figure 2: accuracy curves
    x = [i for i in range(0, len(train_acc))]
    plt.plot(x, train_acc, "b-*")   # blue: training accuracy
    plt.plot(x, test_acc, "r-+")    # red: test accuracy
    plt.show()
    
[Figure: loss curves]
[Figure: accuracy curves]