HBase Study Notes (2): HBase Architecture

HBase Architectural Components

HBase also uses a master/slave architecture, made up of three components: HRegionServer, HBase Master (HMaster), and ZooKeeper.

RegionServers handle data reads and writes and interact with clients directly; region-level operations are handled by the HMaster; ZooKeeper maintains the state of the running nodes.

Underneath, HBase stores its data in HDFS files, so the HDFS NameNode and DataNodes are involved as well. RegionServers are collocated with HDFS DataNodes, which lets region data be kept on the local DataNode. The NameNode maintains the metadata for every physical data block.

For now, a rough idea of what each component does is enough; each one is covered in detail below.

Physically, HBase is composed of three types of servers in a master slave type of architecture. Region servers serve data for reads and writes. When accessing data, clients communicate with HBase RegionServers directly. Region assignment and DDL (create, delete tables) operations are handled by the HBase Master process. ZooKeeper, a distributed coordination service that HBase depends on, maintains a live cluster state.

The Hadoop DataNode stores the data that the Region Server is managing. All HBase data is stored in HDFS files. Region Servers are collocated with the HDFS DataNodes, which enable data locality (putting the data close to where it is needed) for the data served by the RegionServers. HBase data is local when it is written, but when a region is moved, it is not local until compaction.

The NameNode maintains metadata information for all the physical data blocks that comprise the files.

Regions

As the previous post mentioned, an HBase table is split into a number of chunks, each called a Region. Each Region holds the rows between its startKey and endKey. These Regions are distributed across the nodes of the cluster; each such node is called a RegionServer, and these nodes serve the reads and writes. A single RegionServer can handle roughly 1,000 regions.

HBase Tables are divided horizontally by row key range into “Regions.” A region contains all rows in the table between the region’s start key and end key. Regions are assigned to the nodes in the cluster, called “Region Servers,” and these serve data for reads and writes. A region server can serve about 1,000 regions.
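The key-range partitioning above can be sketched in a few lines. This is a minimal illustration, not the HBase client: the region list, server names, and keys are all hypothetical, and the lookup is just a binary search over the sorted region start keys.

```python
import bisect

# Hypothetical regions: (start_key, end_key, server), sorted by start key.
# An empty end key means "to the end of the table".
REGIONS = [
    ("",  "g", "rs1.example:16020"),
    ("g", "p", "rs2.example:16020"),
    ("p", "",  "rs3.example:16020"),
]

def find_region(row_key):
    """Return the region whose [start_key, end_key) range contains row_key."""
    starts = [r[0] for r in REGIONS]
    # bisect_right finds the first start key > row_key;
    # the region just before that position owns the key.
    idx = bisect.bisect_right(starts, row_key) - 1
    return REGIONS[idx]
```

Calling `find_region("hbase")` lands in the `["g", "p")` region, so the client would talk to `rs2.example:16020`.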

HBase HMaster

The HMaster coordinates the regions; concretely, it:

  1. Manages the RegionServers and balances load across them.
  2. Manages and assigns Regions, e.g., assigning new Regions after a split, and migrating a failed RegionServer's Regions to other RegionServers.
  3. Monitors the state of every RegionServer in the cluster (via heartbeats and by watching state in ZooKeeper).
  4. Handles schema change requests (creating, deleting, and altering table definitions).

Region assignment, DDL (create, delete tables) operations are handled by the HBase Master.

A master is responsible for:

  • Coordinating the region servers

    - Assigning regions on startup, re-assigning regions for recovery or load balancing

    - Monitoring all RegionServer instances in the cluster (listens for notifications from zookeeper)

  • Admin functions

    - Interface for creating, deleting, updating tables

ZooKeeper: The Coordinator

As the coordinator, ZooKeeper tracks the running state of the nodes, that is, which nodes are alive and which have died. Every node periodically sends it a heartbeat, so it always knows how each node is doing, and it raises a notification whenever a node misbehaves.

HBase uses ZooKeeper as a distributed coordination service to maintain server state in the cluster. Zookeeper maintains which servers are alive and available, and provides server failure notification. Zookeeper uses consensus to guarantee common shared state. Note that an ensemble of three or five machines is typical for consensus.

How the Components Work Together

So how do these components work together?

ZooKeeper, as the coordinator, coordinates the state information of the various components in the system. The RegionServers and the active HMaster each establish a session with ZooKeeper, and ZooKeeper maintains an ephemeral node for each active session via heartbeats.

Each RegionServer creates an ephemeral node, and the HMaster watches these nodes both to discover available RegionServers and to detect ones that have died. The HMasters themselves race to create an ephemeral node; ZooKeeper decides which of them becomes the active master and ensures that only one HMaster is active at any moment. The active HMaster sends heartbeats to ZooKeeper, while the HMasters that lost the race remain inactive, forever hoping to replace the one active HMaster, so they keep listening to ZooKeeper for notice that it has died. It strikes me that this system resembles human society: one person sits in high office while countless people below wait for him to stumble so they can take his place.

If a RegionServer or the active HMaster fails to send heartbeats to ZooKeeper, its session is closed and all the ephemeral nodes belonging to the failed node are deleted. Watchers listening for these events are notified of the deletions. Because the active HMaster watches the RegionServers' state, it begins recovery as soon as one of them fails. If it is the HMaster itself that has died, the solution is simple: the inactive HMasters are notified at once and start competing for the vacated "post".

To summarize the process:

  1. After connecting to ZooKeeper, the HMaster and each RegionServer create ephemeral nodes and keep them alive via heartbeats; if an ephemeral node expires, the HMaster is notified and handles it accordingly.
  2. The HMaster monitors the ephemeral nodes in ZooKeeper to detect RegionServers joining the cluster or crashing.
  3. The first HMaster to connect to ZooKeeper creates the ephemeral node that marks the active HMaster; HMasters that join later watch that node (each first creating its own ephemeral node under /hbase/backup-masters/). If the active HMaster dies, its node disappears, the others are notified, and one of them turns itself into the new active HMaster.

Zookeeper is used to coordinate shared state information for members of distributed systems. Region servers and the active HMaster connect with a session to ZooKeeper. The ZooKeeper maintains ephemeral nodes for active sessions via heartbeats.

Each Region Server creates an ephemeral node. The HMaster monitors these nodes to discover available region servers, and it also monitors these nodes for server failures. HMasters vie to create an ephemeral node. Zookeeper determines the first one and uses it to make sure that only one master is active. The active HMaster sends heartbeats to Zookeeper, and the inactive HMaster listens for notifications of the active HMaster failure.

If a region server or the active HMaster fails to send a heartbeat, the session is expired and the corresponding ephemeral node is deleted. Listeners for updates will be notified of the deleted nodes. The active HMaster listens for region servers, and will recover region servers on failure. The Inactive HMaster listens for active HMaster failure, and if an active HMaster fails, the inactive HMaster becomes active.
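The election and failover described above can be simulated without a real cluster. The sketch below is a toy in-memory stand-in for ZooKeeper ephemeral nodes, not the real ZooKeeper API; the class, node path, and master names are all illustrative assumptions.

```python
class MiniZooKeeper:
    """Toy in-memory stand-in for ZooKeeper ephemeral nodes (not the real API)."""
    def __init__(self):
        self.nodes = {}      # path -> owning session
        self.watchers = {}   # path -> callbacks fired when the node is deleted

    def create_ephemeral(self, path, owner):
        if path in self.nodes:
            return False     # someone else already holds this node
        self.nodes[path] = owner
        return True

    def watch_delete(self, path, callback):
        self.watchers.setdefault(path, []).append(callback)

    def expire_session(self, owner):
        # A missed heartbeat expires the session: drop all of its ephemeral
        # nodes and notify the watchers, just as ZooKeeper would.
        for path in [p for p, o in self.nodes.items() if o == owner]:
            del self.nodes[path]
            for cb in self.watchers.pop(path, []):
                cb(path)

zk = MiniZooKeeper()
active_history = []

def try_become_master(name):
    if zk.create_ephemeral("/hbase/master", name):
        active_history.append(name)           # won the race: now the active HMaster
    else:
        # Lost the race: stay inactive and wait for the active node to vanish.
        zk.watch_delete("/hbase/master", lambda path: try_become_master(name))

try_become_master("hmaster-1")   # wins, becomes active
try_become_master("hmaster-2")   # becomes a standby
zk.expire_session("hmaster-1")   # heartbeat lost -> failover to hmaster-2
```

After the session expiry, the ephemeral node is recreated by the standby, so `zk.nodes["/hbase/master"]` now names `hmaster-2`.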

HBase First Read or Write

HBase has a special catalog table called the META table, which records which node in the cluster hosts each region (recall what a region is: one of the chunks a full table is split into). The location of the META table itself is stored in ZooKeeper; in other words, the table actually lives on some RegionServer, but only ZooKeeper knows which one.

When a client wants to read or write, the unavoidable question is: which node holds the data I want to read, or which node should the write go to? The answer involves the META table.

Step one: the client asks ZooKeeper which RegionServer hosts the META table.

Step two: go to that RegionServer and look the data up. How? By the key of the data being accessed, of course. Once found, the client caches this location information along with the address of META, so the next time it needs the same data it can skip the earlier steps and go straight to its cache; if the cache misses, it simply re-queries the META table at the cached address.

Step three is simple: step two already identified which RegionServer holds the key, so the client goes there directly.

There is a special HBase Catalog table called the META table, which holds the location of the regions in the cluster. ZooKeeper stores the location of the META table.

This is what happens the first time a client reads or writes to HBase:

  1. The client gets the Region server that hosts the META table from ZooKeeper.
  2. The client will query the .META. server to get the region server corresponding to the row key it wants to access. The client caches this information along with the META table location.
  3. It will get the Row from the corresponding Region Server.

For future reads, the client uses the cache to retrieve the META location and previously read row keys. Over time, it does not need to query the META table, unless there is a miss because a region has moved; then it will re-query and update the cache.
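The caching behavior in the steps above can be sketched as follows. This is a toy model, not the HBase client: the META "table" is just a dict from region start key to server name, and all the keys and server names are hypothetical.

```python
# Toy model of the client-side lookup: META maps region start keys to servers.
META = {"": "rs1", "g": "rs2", "p": "rs3"}   # region start key -> RegionServer

class ClientCache:
    def __init__(self):
        self.locations = {}                  # row key -> server, filled lazily

    def locate(self, row_key):
        if row_key in self.locations:        # cached from a previous lookup
            return self.locations[row_key], "cache"
        # Otherwise ask META: the owning region has the
        # greatest start key that is <= the row key.
        start = max(k for k in META if k <= row_key)
        server = META[start]
        self.locations[row_key] = server     # remember it for future reads
        return server, "meta"

client = ClientCache()
first = client.locate("hbase")    # consults META
second = client.locate("hbase")   # served from the local cache
```

A real client would also invalidate the cached entry and re-query META when the region has moved; that path is omitted here for brevity.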

HBase Meta Table

So what exactly is this META table? Let's take a look: it is a list of the locations of all the regions. The data structure used is a b-tree, with keys and values as follows:

  • This META table is an HBase table that keeps a list of all regions in the system.

  • The .META. table is structured like a b-tree.

  • The .META. table structure is as follows:

    - Key: region start key, region id

    - Values: RegionServer

Region Server Components

Now for the makeup of a RegionServer. A RegionServer consists of a WAL, a BlockCache, MemStores, and HFiles.

WAL: short for Write-Ahead Log, it is simply a log file stored on the distributed file system. The log records data that has not yet been permanently written to disk, and it exists for data recovery. In other words, whatever is about to be written must first be registered in this log. The reasoning is simple: without a log file, how would the database recover if something like a power failure struck in the middle of a write? With the log written first, it does not matter that the data never reached disk; the log shows exactly which operations were performed, comparing it against the current database state reveals where the failure happened, and replaying the log from that point restores the data.

BlockCache: the read cache. It holds frequently read data and uses the LRU algorithm; that is, when the cache is full, the least recently used data is evicted.

MemStore: the write cache, holding data that has not yet been written to disk, kept in sorted order. There is one MemStore per column family.

HFile: stores each row of data on disk in KeyValue format.

A Region Server runs on an HDFS data node and has the following components:

  • WAL: Write Ahead Log is a file on the distributed file system. The WAL is used to store new data that hasn't yet been persisted to permanent storage; it is used for recovery in the case of failure.
  • BlockCache: is the read cache. It stores frequently read data in memory. Least Recently Used data is evicted when full.
  • MemStore: is the write cache. It stores new data which has not yet been written to disk. It is sorted before writing to disk. There is one MemStore per column family per region.
  • Hfiles store the rows as sorted KeyValues on disk.
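The BlockCache's LRU eviction described above is easy to demonstrate. This is a minimal sketch built on `collections.OrderedDict`, not the actual HBase implementation; capacity is counted in blocks rather than bytes purely for illustration.

```python
from collections import OrderedDict

class BlockCache:
    """Minimal LRU read cache in the spirit of the description above."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()       # block key -> data, oldest first

    def get(self, key):
        if key not in self.blocks:
            return None
        self.blocks.move_to_end(key)      # a hit makes the block most recently used
        return self.blocks[key]

    def put(self, key, block):
        self.blocks[key] = block
        self.blocks.move_to_end(key)
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)   # evict the least recently used block

cache = BlockCache(capacity=2)
cache.put("blk1", b"aaa")
cache.put("blk2", b"bbb")
cache.get("blk1")                 # touch blk1, so blk2 becomes the LRU entry
cache.put("blk3", b"ccc")         # cache is full: blk2 is evicted
```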

HBase Write Steps

When a client issues a Put request, it first resolves the RowKey against the META table to find the RegionServer the data should ultimately go to.

The client then sends the Put to that RegionServer, which first writes the operation into its WAL log file.

Once the log write succeeds, the RegionServer locates the target Region from the TableName and RowKey in the Put request, then finds the MemStore for the specified column family and writes the data into it.

Finally, once the data is written, an acknowledgement goes back to the client confirming the write.

HBase Write Steps (1)

When the client issues a Put request, the first step is to write the data to the write-ahead log, the WAL:

- Edits are appended to the end of the WAL file that is stored on disk.

- The WAL is used to recover not-yet-persisted data in case a server crashes.

HBase Write Steps (2)

Once the data is written to the WAL, it is placed in the MemStore. Then, the put request acknowledgement returns to the client.
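The WAL-first ordering of the write path can be sketched in a few lines. This is a simulation under stated assumptions, not HBase internals: the WAL is a Python list standing in for the HDFS file, and the per-family MemStore is a plain dict sorted only at read time.

```python
class MiniRegionServer:
    """Sketch of the write path: WAL append first, then MemStore, then ack."""
    def __init__(self):
        self.wal = []          # append-only edit log (stands in for the HDFS file)
        self.memstores = {}    # column family -> {row key: value}

    def put(self, family, row, value):
        # 1. Append the edit to the WAL so it can be replayed after a crash.
        self.wal.append((family, row, value))
        # 2. Apply the edit to the per-column-family MemStore.
        self.memstores.setdefault(family, {})[row] = value
        # 3. Only now acknowledge the write to the client.
        return "ack"

rs = MiniRegionServer()
rs.put("cf1", "row9", "v1")
rs.put("cf1", "row1", "v2")
# At flush time the MemStore contents leave in row-key order:
sorted_rows = sorted(rs.memstores["cf1"].items())
```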

HBase MemStore

The previous step said the data is written to the MemStore, and the MemStore is a write buffer. What does that mean? Writes land in the buffer first, and each entry in the buffer has a KeyValue structure, kept sorted by key.

The MemStore stores updates in memory as sorted KeyValues, the same as it would be stored in an HFile. There is one MemStore per column family. The updates are sorted per column family.

HBase Region Flush

The buffer has a limited size; when it fills up, its contents are flushed to a new HFile for permanent storage. In HBase, each column family can have multiple HFiles, and the data in an HFile has the same format as in the buffer: one key mapped to one value.

Note that when one MemStore fills up, all the MemStores flush their data to HFiles. This is why the official guidance is not to have too many column families in a table: each column family has its own MemStore, so too many of them causes frequent buffer flushes and other performance problems.

When the buffer is flushed to an HFile, a sequence number is saved as well. What is it for? It lets the system know which data has been persisted to HFiles so far. The sequence number is stored in the HFile as a meta field, so every flush effectively stamps the HFile with a marker.

When the MemStore accumulates enough data, the entire sorted set is written to a new HFile in HDFS. HBase uses multiple HFiles per column family, which contain the actual cells, or KeyValue instances. These files are created over time as KeyValue edits sorted in the MemStores are flushed as files to disk.

Note that this is one reason why there is a limit to the number of column families in HBase. There is one MemStore per CF; when one is full, they all flush. It also saves the last written sequence number so the system knows what was persisted so far.

The highest sequence number is stored as a meta field in each HFile, to reflect where persisting has ended and where to continue. On region startup, the sequence number is read, and the highest is used as the sequence number for new edits.
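The flush-with-sequence-number mechanism can be sketched as below. This is a toy model, not HBase code: the threshold is counted in cells rather than bytes, and the "HFile" is just a dict carrying the sorted cells and the highest sequence number as its meta field.

```python
FLUSH_THRESHOLD = 3   # hypothetical limit, in cells rather than bytes

memstore = {}         # row key -> (sequence number, value)
hfiles = []           # each flush appends {"cells": [...], "max_seq": n}
seq = 0

def put(row, value):
    global seq
    seq += 1                         # every edit gets the next sequence number
    memstore[row] = (seq, value)
    if len(memstore) >= FLUSH_THRESHOLD:
        flush()

def flush():
    global memstore
    cells = sorted(memstore.items())                    # KeyValues leave sorted by row key
    max_seq = max(s for s, _ in memstore.values())
    hfiles.append({"cells": cells, "max_seq": max_seq}) # seq stored as a meta field
    memstore = {}                                       # start a fresh, empty buffer

for row in ["c", "a", "b"]:
    put(row, row.upper())
```

After the third put the buffer flushes: the resulting file holds rows a, b, c in order and records `max_seq = 3`, telling the system that everything up to edit 3 is persisted.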

HBase HFile

The buffer's data is already sorted by key, so flushing to an HFile is just a matter of writing the records out one after another in order. A sequential write like this is very fast, because it avoids moving the disk drive head.

Data is stored in an HFile which contains sorted key/values. When the MemStore accumulates enough data, the entire sorted KeyValue set is written to a new HFile in HDFS. This is a sequential write. It is very fast, as it avoids moving the disk drive head.

HBase HFile Structure

The structure of an HFile is comparatively complex. For query performance, you want to avoid scanning an entire file only to find that the data you want is not in it, so some care goes into how the file is organized. How can you tell whether the data is in the file without scanning all of it? The answer that comes to mind is an index, and that is exactly the idea behind the HFile: it uses a multi-level index, similar in form to a b-tree. One has to marvel at how widely the b-tree is applied in databases!

An HFile contains a multi-layered index which allows HBase to seek to the data without having to read the whole file. The multi-level index is like a b+tree:

  • Key value pairs are stored in increasing order
  • Indexes point by row key to the key value data in 64KB “blocks”
  • Each block has its own leaf-index
  • The last key of each block is put in the intermediate index
  • The root index points to the intermediate index
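The multi-level lookup in the bullets above can be sketched with a two-level index. The layout here is a deliberate simplification under stated assumptions: the root holds the last key covered by each leaf index, each leaf entry records the last key of a data block, and the "blocks" are plain dicts rather than 64KB byte ranges.

```python
import bisect

# Hypothetical two-level HFile index: root -> leaf indexes -> data blocks.
hfile = {
    "root": ["f", "z"],                    # last key covered by each leaf index
    "leaves": [
        [("c", {"a": 1, "c": 2}),          # (last key of block, block contents)
         ("f", {"d": 3, "f": 4})],
        [("m", {"g": 5, "m": 6}),
         ("z", {"q": 7, "z": 8})],
    ],
}

def hfile_get(key):
    # Root index: pick the first leaf whose last key is >= the search key.
    leaf_idx = bisect.bisect_left(hfile["root"], key)
    if leaf_idx == len(hfile["root"]):
        return None                        # key is past the last block
    # Leaf index: pick the first block whose last key is >= the search key.
    leaf = hfile["leaves"][leaf_idx]
    block_idx = bisect.bisect_left([last for last, _ in leaf], key)
    if block_idx == len(leaf):
        return None
    return leaf[block_idx][1].get(key)     # a single "seek" into one block
```

Only one block is ever examined per lookup, which is the point of the index: the rest of the file is never read.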

The trailer, located at the end of the file, points to the meta blocks and is written once the data has been fully persisted to the file. The trailer also holds information such as bloom filters and time range info. Bloom filters help to skip files that do not contain a certain row key; the time range info is useful for skipping a file entirely if it falls outside the time range the read is looking for.

HFile Index

When an HFile is opened, its index is loaded into the BlockCache. Remember what we said the BlockCache is? The read cache.

The index, which we just discussed, is loaded when the HFile is opened and kept in memory. This allows lookups to be performed with a single disk seek.

HBase Read Merge

We have now covered HBase's storage structures, which raises a question for reads: where might a row's data live? It may already be permanently stored in an HFile; it may still be sitting in the MemStore write buffer, not yet flushed; or, if it is read often, it may be in the BlockCache read cache. So how does a read go about finding the data? In the following steps:

First, where does the read cache rank? It exists precisely to make reads fast, so it is the clear first priority; don't forget that the BlockCache uses the LRU algorithm.

Next, HFiles or the write cache? With so many HFiles, they are by far the least efficient option, whereas each column family has only a single MemStore, which is much cheaper to check. So the MemStore comes second: if the read cache misses, look in the MemStore.

Finally, if neither of the first two steps turns up the data, there is no choice left but to search the HFiles.

We have seen that the KeyValue cells corresponding to one row can be in multiple places, row cells already persisted are in Hfiles, recently updated cells are in the MemStore, and recently read cells are in the Block cache. So when you read a row, how does the system get the corresponding cells to return? A Read merges Key Values from the block cache, MemStore, and HFiles in the following steps:

  1. First, the scanner looks for the Row cells in the Block cache - the read cache. Recently Read Key Values are cached here, and Least Recently Used are evicted when memory is needed.
  2. Next, the scanner looks in the MemStore, the write cache in memory containing the most recent writes.
  3. If the scanner does not find all of the row cells in the MemStore and Block Cache, then HBase will use the Block Cache indexes and bloom filters to load HFiles into memory, which may contain the target row cells.

As discussed earlier, there may be many HFiles per column family, which means for a read, multiple files may have to be examined, which can affect the performance. This is called read amplification.
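The three-step read merge can be sketched directly. This is a simulation under stated assumptions, not HBase code: each store is a plain dict, and the HFile list is ordered newest first so the freshest persisted value wins.

```python
# Sketch of the read merge: block cache, then MemStore, then the HFiles.
block_cache = {"row1": "cached-v"}
memstore    = {"row2": "recent-v"}
hfiles      = [{"row3": "old-v"}, {"row4": "older-v"}]   # newest file first

def read(row):
    if row in block_cache:                # 1. read cache: recently read cells
        return block_cache[row], "blockcache"
    if row in memstore:                   # 2. write cache: the most recent edits
        return memstore[row], "memstore"
    for hfile in hfiles:                  # 3. fall back to the HFiles on disk;
        if row in hfile:                  #    several files may need checking,
            return hfile[row], "hfile"    #    which is the read amplification
    return None, "miss"
```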

HBase Minor Compaction

We just saw that writes go first to the buffer, and that a full buffer flushes its contents to an HFile for permanent storage. If this write process keeps going, the number of HFiles grows and grows and becomes awkward to manage. That is where the compaction operation comes in: packing the files down into fewer, denser ones.

HBase has two kinds of compaction: Minor Compaction and Major Compaction.

A Minor Compaction merges several small HFiles into larger files. It clearly reduces the number of HFiles, and cells that are already Deleted or Expired are not processed along the way. The result of one Minor Compaction is fewer, larger HFiles.

HBase will automatically pick some smaller HFiles and rewrite them into fewer bigger Hfiles. This process is called minor compaction. Minor compaction reduces the number of storage files by rewriting smaller files into fewer but larger ones, performing a merge sort.
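Because every HFile is already sorted, the merge step of a minor compaction is exactly the merge phase of a merge sort, which the standard library provides directly. A minimal sketch with hypothetical file contents:

```python
import heapq

# Minor compaction sketch: merge-sort several small sorted "HFiles" into one
# bigger file; deleted or expired cells are left untouched at this stage.
small_hfiles = [
    [("a", "1"), ("m", "2")],
    [("b", "3"), ("z", "4")],
    [("c", "5")],
]

# heapq.merge streams the already-sorted inputs into one sorted output
# without loading everything into memory at once.
compacted = list(heapq.merge(*small_hfiles))
```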

HBase Major Compaction

A Major Compaction merges all the HFiles belonging to a Region into one HFile per column family; that is, the multiple files of a column family are combined. In the process, cells marked Deleted are removed, Expired cells are discarded, and cells exceeding the maximum number of versions are dropped. This merge operation, however, is very time-consuming.

Major compaction merges and rewrites all the HFiles in a region to one HFile per column family, and in the process, drops deleted or expired cells. This improves read performance; however, since major compaction rewrites all of the files, lots of disk I/O and network traffic might occur during the process. This is called write amplification.

Major compactions can be scheduled to run automatically. Due to write amplification, major compactions are usually scheduled for weekends or evenings. A major compaction also makes any data files that were remote, due to server failure or load balancing, local to the region server.
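The drop-while-merging behavior of a major compaction can be sketched as below. The cell layout is an illustrative simplification, not the real HFile format: a cell is ((row, -timestamp), value), so within a row the newest version sorts first, a sentinel object marks a delete tombstone, and the TTL value is a made-up number.

```python
import heapq

DELETE = object()   # sentinel standing in for a delete marker (tombstone)
TTL = 100           # hypothetical time-to-live, in the same units as timestamps

def major_compact(hfiles, now):
    merged = heapq.merge(*hfiles)          # all files of the family, one sorted stream
    out, deleted_rows = [], set()
    for (row, neg_ts), value in merged:    # newest version of a row arrives first
        if value is DELETE:
            deleted_rows.add(row)          # tombstone: drop the older versions too
        elif row not in deleted_rows and now - (-neg_ts) <= TTL:
            out.append((row, value))       # keep live, unexpired cells
    return out

hfiles = [
    [(("row1", -50), "new")],
    [(("row1", -10), "stale"), (("row2", -60), DELETE)],   # "stale" will hit the TTL
    [(("row2", -40), "doomed"), (("row3", -1), "expired")],
]
result = major_compact(hfiles, now=120)    # only row1's fresh cell survives
```

Everything shadowed by the tombstone or older than the TTL is rewritten away, which is why the output is a single small file; a max-versions check would filter the surviving stream the same way.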
