Audio Signal Processing

Signals (to be expanded)

What is a signal? Simply put, it is a time series. Non-rigorously: x = x(0), x(1), ..., x(n), where n is the sequence length; this is a discrete signal. A continuous signal is x = x(t), t \in R^{+}.

  • Sampling: collecting discrete data points from continuous data. Sampling can be fast or slow, measured by the "sampling rate" in hertz (Hz). For example, a sampling rate of 1 Hz yields one sample per second, and so on.
  • Sampling theorem: the sampling rate must be at least twice the highest frequency in the signal, otherwise frequency aliasing occurs.
  • Aliasing: when the sampling rate is insufficient, the same sample points can correspond to signals of different frequencies. Content that would resolve as high-frequency at a higher sampling rate gets mapped onto low frequencies at the lower rate, crossing the original frequencies; this is "aliasing".
  • Quantization: storing the discretized values digitally; each value ultimately needs several digits (bits) to store.
  • Encoding: representing the quantized values by some coding scheme.
  • Compression: a signal may contain redundancy, so its storage can be optimized with no loss, or as little loss as possible, of the original information; this is compression.
  • Convolution (convolve): (a * b)[n] = \sum_k a[k] \, b[n-k] (see the NumPy sketch after this list).
  • Auto-/cross-correlation (correlation): (a \star b)[n] = \sum_k a[k] \, b[k+n]. It is a kind of convolution that measures where two signals are most similar.
  • Analog (continuous) signal: see the description above.
  • Digital (discrete) signal: see the description above.
  • ADC (analog-to-digital converter): the analog-to-digital flow is sampling → quantization → encoding.
  • DAC (digital-to-analog converter): the digital-to-analog flow is decoding followed by reconstruction (low-pass) filtering.
  • Filter: a function that performs some operation on a signal.
  • FIR/IIR (finite/infinite impulse response): an FIR filter's impulse response dies out after finitely many samples (no feedback); an IIR filter's response lasts indefinitely because of recursive feedback.
  • Comb/notch: a comb filter has regularly spaced peaks and dips across the spectrum; a notch filter strongly attenuates one narrow frequency band.
  • Modulation (modulate): processing a signal in some way, e.g., amplitude modulation or phase modulation. Why modulate? (Low-frequency signals are ill-suited to transmission.)
  • Demodulation (demodulate): the operation that reverses modulation to recover the signal.
  • Frequency/period: frequency is how many cycles occur per second; the period is the time one complete cycle takes, so f = \frac{1}{T}. Take a sine with zero initial phase: it is traced out by rotation at a constant angular velocity, \sin(\omega t) = \sin(\frac{2\pi}{T} t) = \sin(2\pi f t).
  • Angle/phase: in \sin(\omega t + \phi_0), the phase is \omega t + \phi_0 and the initial phase is \phi_0.
  • Fourier transform: f(x) \approx \sum^{N}_{n} (a_n \cos(nt) + b_n \sin(nt)); a signal can be approximated by a sum of sinusoids of different frequencies. After the transform we have the sinusoids at each frequency, so the coefficients a_n, b_n in front of them naturally serve as amplitudes, giving the signal's spectrum.
  • Spectrum: the plot of frequency content obtained from a signal via the FFT above.
  • Phase spectrum: the companion of the amplitude spectrum, showing each frequency component's phase rather than its amplitude.
  • Energy: defined pointwise as x^2; the energy of a whole discrete signal is E = \sum_n x(n)^2.
  • Frequency band: some range of frequencies, [a, b] Hz.
  • Bandwidth: in analog systems also called frequency width; the amount of data that can be transmitted in a fixed time, i.e., the capacity of a channel to carry data, usually expressed in cycles per second or hertz (Hz).
  • Baseband: the original electrical signal before any modulation (before the spectrum is shifted or transformed).
  • Harmonic: for a frequency f_0, its harmonics are the integer multiples k f_0, k \in N.
  • Mains (power-line) frequency: modern power transmission runs at a fixed frequency, so electrical devices radiate at that frequency and its harmonics. In China, for example, household power is 220 V at 50 Hz, so mains interference and its harmonics appear at 50 Hz and its integer multiples. Some electrical signals, such as EMG, suffer badly from this interference.
  • Spectrogram: a spectrum ignores timing information; a spectrogram uses windowing, or other operations (such as the dilation in wavelets), to trade off frequency information against time information.
  • Window function: to prevent spectral leakage and the like, the raw signal is processed with some function that makes it resemble a periodic signal more closely.
  • Short-time Fourier transform: the main idea is to split the signal into frames, apply a window, slide the window along, and take the Fourier transform of the short signal inside each frame. Each frame then has its own Fourier result, giving the spectrum of that short window; the image assembled from the spectra of successive windows is the time-frequency plot.
  • Wavelet transform: in my view, the Fourier transform decomposes onto sinusoidal bases, whereas the wavelet transform decomposes onto different bases, called wavelet functions.
  • Gabor transform: based on the theory of Gabor analysis.
  • WVD (Wigner-Ville Distribution): also its (pseudo-)WVD variant.
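To make the convolution and correlation definitions above concrete, here is a minimal NumPy sketch (the signals are made up for illustration); it also verifies the aliasing claim by sampling a 9 Hz sine at 10 Hz:

import numpy as np

# Discrete convolution: (a * b)[n] = sum_k a[k] b[n - k]
a = np.array([1.0, 2.0, 3.0])
b = np.array([0.0, 1.0, 0.5])
print(np.convolve(a, b))  # [0.  1.  2.5 4.  1.5]

# Cross-correlation: slide one signal over the other; the peak of the
# result marks the lag where the two signals are most similar.
x = np.array([0.0, 0.0, 1.0, 2.0, 1.0, 0.0])
y = np.array([1.0, 2.0, 1.0])
corr = np.correlate(x, y, mode="full")
print(corr.argmax())  # index of the best-matching lag

# Aliasing: sampled at 10 Hz, a 9 Hz sine is indistinguishable from a
# 1 Hz sine (9 Hz exceeds the 5 Hz Nyquist limit).
fs = 10
t = np.arange(0, 1, 1 / fs)
print(np.allclose(np.sin(2 * np.pi * 9 * t),
                  -np.sin(2 * np.pi * 1 * t)))  # True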

Sound/Musical tones (Acoustic/Music)

Sound is a mechanical wave, produced by vibration, which propagates through various media before reaching the receiver.
Musical tone vs. noise: it depends on whether the sound is pleasing at the moment; if it pleases the ear it is a musical tone, otherwise it is noise.
Audio formats: mainly lossy (compressed) and lossless. Common lossy formats include mp3; common lossless formats include wav and flac. Since wav is so widespread, most audio datasets use the wav format.
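Because flac is lossless, a wav file can be round-tripped through it without changing a single sample. A minimal sketch using the third-party soundfile package (the file names here are hypothetical):

import soundfile as sf

# Decode the wav into a float array plus its sample rate
data, samplerate = sf.read("input.wav")
# Re-encode losslessly as flac (smaller on disk, same samples)
sf.write("output.flac", data, samplerate)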


Hands-on part:


1. Reading and visualization:

# Read a wav file using the wave module.
import wave
# Import audio file as wave object
good_morning = wave.open("good-morning.wav", "r")
# Convert wave object to bytes
good_morning_soundwave = good_morning.readframes(-1)
# View the wav file in byte form
good_morning_soundwave
# Output:
b'\xfd\xff\xfb\xff\xf8\xff\xf8\xff\xf7\...
# wave gives us bytes; convert them to a more useful numeric format, such as int16, then print the first 10 samples.
import numpy as np
# Convert the sound wave from bytes to integers
signal_gm = np.frombuffer(good_morning_soundwave, dtype='int16')
# Show the first 10 items
signal_gm[:10]
# Output:
array([ -3,  -5,  -8,  -8,  -9, -13,  -8, -10,  -9, -11], dtype=int16)
# Metadata such as the sampling rate is also available.
# Get the frame rate
framerate_gm = good_morning.getframerate()
# Show the frame rate
framerate_gm
# Output:
48000
# Build a timestamp for each sample.
# Return evenly spaced values between start and stop
np.linspace(start=1, stop=10, num=10)
# Output:
array([ 1.,  2.,  3.,  4.,  5.,  6.,  7.,  8.,  9., 10.])
# Get the timestamps of the good morning sound wave
time_gm = np.linspace(start=0, 
                      stop=len(signal_gm)/framerate_gm,
                      num=len(signal_gm))
# View first 10 time stamps of good morning sound wave
time_gm[:10]
# Output:
array([0.00000000e+00, 2.08334167e-05, 4.16668333e-05, 6.25002500e-05, 
       8.33336667e-05, 1.04167083e-04, 1.25000500e-04, 1.45833917e-04, 
       1.66667333e-04, 1.87500750e-04])
import matplotlib.pyplot as plt
# Initialize figure and setup title
plt.title("Good Afternoon vs. Good Morning")
# x and y axis labels
plt.xlabel("Time (seconds)")
plt.ylabel("Amplitude")
# Add good morning and good afternoon values
plt.plot(time_ga, signal_ga, label="Good Afternoon")  # time_ga/signal_ga are built the same way from good-afternoon.wav
plt.plot(time_gm, signal_gm, label="Good Morning", alpha=0.5)
# Create a legend and show our plot
plt.legend()
plt.show()
(Plot: the Good Morning and Good Afternoon waveforms overlaid.)
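As an aside, SciPy can collapse the read-and-convert steps above into a single call; a sketch, assuming the same good-morning.wav file:

from scipy.io import wavfile

# Returns the sample rate and the samples as a NumPy array (int16 here)
framerate_gm, signal_gm = wavfile.read("good-morning.wav")
print(framerate_gm)    # 48000
print(signal_gm[:10])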

2. Speech recognition libraries

Some existing Python libraries:

  • CMU Sphinx
  • Kaldi
  • SpeechRecognition
  • Wav2letter++ by Facebook

Here we use the SpeechRecognition library:
pip install SpeechRecognition

# Import the SpeechRecognition library
import speech_recognition as sr
# Create an instance of Recognizer
recognizer = sr.Recognizer()
# Set the energy threshold
recognizer.energy_threshold = 300
# Recognizer class has built-in functions which interact with speech APIs 

# - recognize_bing()
# - recognize_google()
# - recognize_google_cloud()
# - recognize_wit()
# Input: audio_file
# Output: transcribed speech from audio_file
# Import SpeechRecognition library
import speech_recognition as sr
# Setup recognizer instance
recognizer = sr.Recognizer()
# Read in audio file
clean_support_call = sr.AudioFile("clean-support-call.wav")
# Check type of clean_support_call
type(clean_support_call)

Output: <class 'speech_recognition.AudioFile'>

# clean_support_call is an AudioFile at this point; it still needs converting to AudioData.
with clean_support_call as source:
  # Record the audio; the optional duration (seconds to record) and offset
  # (seconds to skip from the start) default to covering the whole file.
  audio_data = recognizer.record(source)
type(audio_data)

Output: <class 'speech_recognition.AudioData'>

# Transcribe speech using Google web API
recognizer.recognize_google(audio_data=audio_data, language="en-US")
# Google is easily blocked (e.g., in China), so switch to Microsoft:
recognizer.recognize_bing(audio_data=audio_data, language="en-US", key="xxxx")
# key is the key for the Microsoft service (an Azure service must be configured). The wait time
# inside the function's implementation may need to be lengthened, and the URL may need changing
# as well. language can be set to other languages, following the official language names,
# e.g. Chinese: zh-CN. Non-speech sounds (a bear's growl, say) may come back as an empty result.

Output: hello I'd like to get some.
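Note that the recognize_* calls raise exceptions rather than returning errors, so wrapping them is advisable in real code; a sketch using the library's documented exception classes:

try:
    text = recognizer.recognize_google(audio_data, language="en-US")
except sr.UnknownValueError:
    text = ""  # nothing intelligible in the audio
except sr.RequestError as e:
    print(f"API was unreachable or unresponsive: {e}")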


# The multi-speaker case
# Import an audio file with multiple speakers
multiple_speakers = sr.AudioFile("multiple-speakers.wav")
# Convert AudioFile to AudioData
with multiple_speakers as source:
    multiple_speakers_audio = recognizer.record(source)
# Recognize the AudioData
recognizer.recognize_google(multiple_speakers_audio)

Output: one of the limitations of the speech recognition library is that it doesn't recognise different speakers and voices it will just return it all as one block of text

# Import audio files separately
speakers = [sr.AudioFile("s0.wav"), sr.AudioFile("s1.wav"), sr.AudioFile("s2.wav")]
# Transcribe each speaker individually
for i, speaker in enumerate(speakers):
  with speaker as source:
        speaker_audio = recognizer.record(source)
  print(f"Text from speaker {i}: {recognizer.recognize_google(speaker_audio)}"

Output:
Text from speaker 0: one of the limitations of the speech recognition library
Text from speaker 1: is that it doesn't recognise different speakers and voices
Text from speaker 2: it will just return it all as one block a text

# The noisy case
# Import audio file with background noise
noisy_support_call = sr.AudioFile("noisy_support_call.wav")
with noisy_support_call as source:
  # Adjust for ambient noise, then record
  recognizer.adjust_for_ambient_noise(source, duration=0.5)
  noisy_support_call_audio = recognizer.record(source)
# Recognize the audio
recognizer.recognize_google(noisy_support_call_audio)

Output: hello ID like to get some help setting up my calories


More! Study this together with the pydub part below:

Create some helper functions for later use

# Import os module
import os
# Check the folder of audio files
os.listdir("acme_audio_files")
# Output: ['call_1.mp3', 'call_2.mp3', 'call_3.mp3', 'call_4.mp3']

import speech_recognition as sr
from pydub import AudioSegment
# Import call 1 and convert to .wav
call_1 = AudioSegment.from_file("acme_audio_files/call_1.mp3")
call_1.export("acme_audio_files/call_1.wav", format="wav")
# Transcribe call 1
recognizer = sr.Recognizer()
call_1_file = sr.AudioFile("acme_audio_files/call_1.wav")
with call_1_file as source:
    call_1_audio = recognizer.record(source)
recognizer.recognize_google(call_1_audio)

Functions we'll create:

  • convert_to_wav() converts non-.wav files to .wav files.
  • show_pydub_stats() shows the audio attributes of a .wav file.
  • transcribe_audio() uses recognize_google() to transcribe a .wav file.
# Create function to convert audio file to wav
def convert_to_wav(filename):
  # "Takes an audio file of non .wav format and converts to .wav"
  # Import audio file  
  audio = AudioSegment.from_file(filename)
  # Create new filename  
  new_filename = filename.split(".")[0] + ".wav"
  # Export file as .wav  
  audio.export(new_filename, format="wav")  
  print(f"Converting {filename} to {new_filename}...")

convert_to_wav("acme_studios_audio/call_1.mp3")
#輸出:Converting acme_audio_files/call_1.mp3 to acme_audio_files/call_1.wav...

def show_pydub_stats(filename):
  # "Returns different audio attributes related to an audio file."
  # Create AudioSegment instance  
  audio_segment = AudioSegment.from_file(filename)
  # Print attributes  
  print(f"Channels: {audio_segment.channels}")
  print(f"Sample width: {audio_segment.sample_width}")
  print(f"Frame rate (sample rate): {audio_segment.frame_rate}")
  print(f"Frame width: {audio_segment.frame_width}")
  print(f"Length (ms): {len(audio_segment)}")  
  print(f"Frame count: {audio_segment.frame_count()}")

show_pydub_stats("acme_audio_files/call_1.wav")
# Output:
# Channels: 2
# Sample width: 2
# Frame rate (sample rate): 32000
# Frame width: 4
# Length (ms): 54888
# Frame count: 1756416.0
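The stats show call_1.wav is stereo (Channels: 2). The "call_3_channel_2.wav" file used later suggests the channels were separated beforehand; pydub's split_to_mono() does exactly that. A sketch:

# Split a stereo AudioSegment into its two mono channels
call_1_segment = AudioSegment.from_file("acme_audio_files/call_1.wav")
channel_1, channel_2 = call_1_segment.split_to_mono()
# Export the customer channel on its own
channel_2.export("call_1_channel_2.wav", format="wav")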

# Create a function to transcribe audio
def transcribe_audio(filename):
  # "Takes a .wav format audio file and transcribes it to text."
  # Setup a recognizer instance
  recognizer = sr.Recognizer()
  # Import the audio file and convert to audio data
  audio_file = sr.AudioFile(filename)
  with audio_file as source:
    audio_data = recognizer.record(source)
  # Return the transcribed text
  return recognizer.recognize_google(audio_data)

transcribe_audio("acme_audio_files/call_1.wav")
#輸出:"hello welcome to Acme studio support line my name is Daniel how can I best help you hey Daniel this is John I've recently bought a smart from you guys and I know that's not good to hear John let's let's get your cell number and then we can we can set up a way to fix it for you one number for 1757 varies how long do you reckon this is going to take about an hour now while John we're going to try our best hour I will we get the sealing member will start up this support case I'm just really really really really I've been trying to contact 34 been put on hold more than an hour and half so I'm not really happy I kind of wanna get this issue 6 is fossil"

Sentiment analysis

$ pip install nltk

# Download required NLTK packages
import nltk
nltk.download("punkt")
nltk.download("vader_lexicon")
# Import sentiment analysis class
from nltk.sentiment.vader import SentimentIntensityAnalyzer
# Create sentiment analysis instance
sid = SentimentIntensityAnalyzer()
# Test sentiment analysis on negative text
print(sid.polarity_scores("This customer service is terrible."))
# Output: {'neg': 0.437, 'neu': 0.563, 'pos': 0.0, 'compound': -0.4767}
# Transcribe customer channel of call_3
call_3_channel_2_text = transcribe_audio("call_3_channel_2.wav")
print(call_3_channel_2_text)
#輸出:"hey Dave is this any better do I order products are currently on July 1st and I haven't received the product a three-week step down this parable 6987 5"
# Sentiment analysis on customer channel of call_3
sid.polarity_scores(call_3_channel_2_text)
# Output: {'neg': 0.0, 'neu': 0.892, 'pos': 0.108, 'compound': 0.4404}
call_3_paid_api_text = "Okay. Yeah. Hi, Diane. This is paid on this call and obvi..."  # truncated transcript
# Import sent tokenizer
from nltk.tokenize import sent_tokenize
# Find sentiment on each sentence
for sentence in sent_tokenize(call_3_paid_api_text):
  print(sentence)
  print(sid.polarity_scores(sentence))

# Output:
# Okay. {'neg': 0.0, 'neu': 0.0, 'pos': 1.0, 'compound': 0.2263}
# Yeah. {'neg': 0.0, 'neu': 0.0, 'pos': 1.0, 'compound': 0.296}
# Hi, Diane. {'neg': 0.0, 'neu': 1.0, 'pos': 0.0, 'compound': 0.0}
# This is paid on this call and obviously the status of my orders at three weeks ago, and that service is terrible. {'neg': 0.129, 'neu': 0.871, 'pos': 0.0, 'compound': -0.4767}
# Is this any better? {'neg': 0.0, 'neu': 0.508, 'pos': 0.492, 'compound': 0.4404}
# Yes...
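To score a whole call rather than individual sentences, one simple option is averaging the compound scores over the sentences; a sketch reusing sid and sent_tokenize from above:

# Mean compound sentiment across all sentences of a transcript
scores = [sid.polarity_scores(s)["compound"]
          for s in sent_tokenize(call_3_paid_api_text)]
print(sum(scores) / len(scores))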
# Install spaCy
$ pip install spacy
# Download spaCy language model
$ python -m spacy download en_core_web_sm

import spacy
# Load spaCy language model
nlp = spacy.load("en_core_web_sm")
# Create a spaCy doc
doc = nlp("I'd like to talk about a smartphone I ordered on July 31st from your Sydney store, my order number is 40939440. I spoke to Georgia about it last week.")

# Show different tokens and positions
for token in doc:
  print(token.text, token.idx)
# Output:
# I 0
# 'd 1
# like 4
# to 9
# talk 12
# about 17
# a 23
# smartphone 25...

# Show sentences in doc
for sentence in doc.sents:
  print(sentence)

# Output: I'd like to talk about a smartphone I ordered on July 31st from your Sydney store, my order number is 4093829. I spoke to one of your customer service team, Georgia, yesterday.

Some of spaCy's built-in named entities:

  • PERSON People, including fictional.
  • ORG Companies, agencies, institutions, etc.
  • GPE Countries, cities, states.
  • PRODUCT Objects, vehicles, foods, etc. (Not services.)
  • DATE Absolute or relative dates or periods.
  • TIME Times smaller than a day.
  • MONEY Monetary values, including unit.
  • CARDINAL Numerals that do not fall under another type.
# Find named entities in doc
for entity in doc.ents:
  print(entity.text, entity.label_)

# Output:
# July 31st DATE
# Sydney GPE
# 4093829 CARDINAL
# one CARDINAL
# Georgia GPE
# yesterday DATE
# Import EntityRuler class
from spacy.pipeline import EntityRuler
# Check spaCy pipeline
print(nlp.pipeline)
# Output: [('tagger', <spacy.pipeline.pipes.Tagger at 0x1c3aa8a470>), ('parser', <spacy.pipeline.pipes.DependencyParser at 0x1c3bb60588>), ('ner', <spacy.pipeline.pipes.EntityRecognizer at 0x1c3bb605e8>)]
# Create EntityRuler instance
ruler = EntityRuler(nlp)
# Add token pattern to ruler
ruler.add_patterns([{"label":"PRODUCT", "pattern": "smartphone"}])
# Add new rule to pipeline before ner
nlp.add_pipe(ruler, before="ner")
# Check updated pipeline
nlp.pipeline
# Output: [('tagger', <spacy.pipeline.pipes.Tagger at 0x1c1f9c9b38>), ('parser', <spacy.pipeline.pipes.DependencyParser at 0x1c3c9cba08>), ('entity_ruler', <spacy.pipeline.entityruler.EntityRuler at 0x1c1d834b70>), ('ner', <spacy.pipeline.pipes.EntityRecognizer at 0x1c3c9cba68>)]

# Test the new entity rule (re-run the text through the updated pipeline)
doc = nlp(doc.text)
for entity in doc.ents:
    print(entity.text, entity.label_)
# Output:
# smartphone PRODUCT
# July 31st DATE
# Sydney GPE
# 4093829 CARDINAL
# one CARDINAL
# Georgia GPE
# yesterday DATE

sklearn classification

# Inspect post purchase audio folder
import os
post_purchase_audio = os.listdir("post_purchase")
print(post_purchase_audio[:5])
# Output: ['post-purchase-audio-0.mp3', 'post-purchase-audio-1.mp3', 'post-purchase-audio-2.mp3', 'post-purchase-audio-3.mp3', 'post-purchase-audio-4.mp3']

# Loop through mp3 files
for file in post_purchase_audio:
  print(f"Converting {file} to .wav...")
  # Use previously made function to convert to .wav
  convert_to_wav(file)

# Output:
# Converting post-purchase-audio-0.mp3 to .wav...
# Converting post-purchase-audio-1.mp3 to .wav...
# Converting post-purchase-audio-2.mp3 to .wav...
# Converting post-purchase-audio-3.mp3 to .wav...
# Converting post-purchase-audio-4.mp3 to .wav...

# Transcribe text from wav files
def create_text_list(folder):
  text_list = []
  # Loop through folder
  for file in folder:
    # Check for .wav extension
    if file.endswith(".wav"):
      # Transcribe audio
      text = transcribe_audio(file)
      # Add transcribed text to list
      text_list.append(text)
  return text_list

# Re-list the folder so the newly created .wav files are picked up
post_purchase_audio = os.listdir("post_purchase")
# Convert post purchase audio to text
post_purchase_text = create_text_list(post_purchase_audio)
print(post_purchase_text[:5])
# Output (truncated):
# ['hey man I just water product from you guys and I think is amazing but I leave a li...',
#  'these clothes I just bought from you guys too small is there anyway I can change t...',
#  "I recently got these pair of shoes but they're too big can I change the size",
#  "I bought a pair of pants from you guys but they're way too small",
#  "I bought a pair of pants and they're the wrong colour is there any chance I can ch..."]

import pandas as pd
# Create post purchase dataframe
post_purchase_df = pd.DataFrame({"label": "post_purchase", "text": post_purchase_text})
# Create pre purchase dataframe (pre_purchase_text is built the same way from a folder of pre-purchase calls)
pre_purchase_df = pd.DataFrame({"label": "pre_purchase", "text": pre_purchase_text})
# Combine pre purchase and post purchase
df = pd.concat([post_purchase_df, pre_purchase_df])
# View the combined dataframe
df.head()

  label                                               text
0  post_purchase  yeah hello someone this morning delivered a pa...
1  post_purchase  my shipment arrived yesterday but it's not the...
2  post_purchase  hey my name is Daniel I received my shipment y...
3  post_purchase  hey mate how are you doing I'm just calling in...
4   pre_purchase  hey I was wondering if you know where my new p...

# Import text classification packages
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.naive_bayes import MultinomialNB
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.model_selection import train_test_split

# Split data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(
    df["text"],
    df["label"],
    test_size=0.3)

# Create text classifier pipeline
text_classifier = Pipeline([
  ("vectorizer", CountVectorizer()),
  ("tfidf", TfidfTransformer()),
  ("classifier", MultinomialNB())
])
# Fit the classifier pipeline on the training data
text_classifier.fit(X_train, y_train)

# Make predictions and compare them to test labels
predictions = text_classifier.predict(X_test)
accuracy = 100 * np.mean(predictions == y_test)
print(f"The model is {accuracy:.2f}% accurate.")
# Output: The model is 97.87% accurate.
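Once fitted, the same pipeline can classify brand-new transcriptions directly; a quick usage sketch (the sample sentence is made up):

# Predict the label of an unseen transcription
print(text_classifier.predict(["my package arrived damaged yesterday"]))
# e.g. ['post_purchase']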

Things to do next:

  • Practice your skills with a project of your own.
  • Check out speech_recognition's Microphone() class.
