I ran into a requirement to read an entire table row by row and process each row.
Using LIMIT
The first implementation used LIMIT, bumping the offset by 100 at the end of each loop iteration, like this:
PreparedStatement pStatement = dm.prepareStatement(
        "SELECT start_time, input_params FROM execution_jobs LIMIT ?, ?");
pStatement.setInt(1, offset);  // first placeholder: the offset
pStatement.setInt(2, 100);     // second placeholder: the page size
ResultSet rs = pStatement.executeQuery();
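For context, the whole loop looked roughly like the sketch below (illustrative only: it assumes dm is an open java.sql.Connection and uses the page size of 100 from the text):

int offset = 0;
while (true) {
    PreparedStatement ps = dm.prepareStatement(
            "SELECT start_time, input_params FROM execution_jobs LIMIT ?, ?");
    ps.setInt(1, offset);
    ps.setInt(2, 100);
    ResultSet rs = ps.executeQuery();
    int rows = 0;
    while (rs.next()) {
        // process the current row here
        rows++;
    }
    rs.close();
    ps.close();
    if (rows < 100) {
        break;          // fewer than a full page means we are done
    }
    offset += 100;      // advance to the next page
}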
This approach becomes very slow as the offset grows, because a query like select * from XXX limit 10000,10; effectively scans the first 10000 matching rows, throws them away, and only then reads the 10 rows you asked for. Performance therefore degrades badly.
There are quite a few ways to optimize slow LIMIT queries. The main ones are described below:
- Subquery method
  - First locate the starting row of the page; every row whose id is greater than or equal to that row's id is the data to fetch.
  - Drawback: the data must be continuous, which effectively means no WHERE clause, since a WHERE condition filters rows and breaks the continuity.
mysql> set profiling=1; # enable profiling
Query OK, 0 rows affected (0.00 sec)
mysql> pager grep !~- # turn off stdout
PAGER set to 'grep !~-'
mysql> select exec_id ,project_id from execution_jobs limit 100000,100;
100 rows in set (2.65 sec)
mysql> select exec_id ,project_id from execution_jobs where exec_id >= (select exec_id from execution_jobs limit 100000,1) limit 100;
100 rows in set (0.52 sec)
mysql> nopager # restore stdout
PAGER set to stdout
mysql> show profiles; # check the timings
+----------+------------+---------------+----------------+--------------------------------------------------------------------------------------------------------------------------------+
| Query_ID | Duration | Logical_reads | Physical_reads | Query |
+----------+------------+---------------+----------------+--------------------------------------------------------------------------------------------------------------------------------+
| 1 | 2.67572900 | 13337 | 410 | select exec_id ,project_id from execution_jobs limit 100000,100 |
| 2 | 0.53610725 | 13026 | 264 | select exec_id ,project_id from execution_jobs where exec_id >= (select exec_id from execution_jobs limit 100000,1) limit 100 |
+----------+------------+---------------+----------------+--------------------------------------------------------------------------------------------------------------------------------+
3 rows in set (0.01 sec)
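Applied to the paging loop, only the prepared statement needs to change for this method. A minimal sketch, assuming exec_id is a continuous auto-increment primary key on execution_jobs (which matches the "no WHERE clause" restriction noted above):

// Seek to the page start via the primary key, then read one page from there.
PreparedStatement ps = dm.prepareStatement(
        "SELECT exec_id, project_id FROM execution_jobs "
      + "WHERE exec_id >= (SELECT exec_id FROM execution_jobs LIMIT ?, 1) LIMIT ?");
ps.setInt(1, offset);   // page start, located through the subquery on exec_id
ps.setInt(2, 100);      // page size
ResultSet rs = ps.executeQuery();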
- Inverted-table method
  - Similar to building an index: use a separate table to maintain page numbers, then fetch the data through an efficient join.
  - Drawback: only suitable when the row count is fixed; rows cannot be deleted, and maintaining the page table is hard.
- Reverse-lookup method
  - When the offset is past half of the total rows, sort in the opposite direction first, so the offset is flipped to the smaller side (as sketched after this list).
  - Drawback: optimizing the ORDER BY is fiddly, it needs an extra index which slows down writes, and you must know the total row count and that the offset exceeds half of it.
- LIMIT-cap method
  - Cap the LIMIT offset below some threshold; any offset beyond the cap is treated as having no data.
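A minimal sketch of the reverse-lookup idea, assuming the rows are ordered by exec_id and that totalRows was fetched beforehand (e.g. with SELECT COUNT(*)); both names are illustrative, not from the original code:

int pageSize = 100;
int effectiveOffset = offset;
String sql = "SELECT exec_id, project_id FROM execution_jobs ORDER BY exec_id ASC LIMIT ?, ?";
if (offset > totalRows / 2) {
    // Deep page: read it from the end in descending order so the scanned prefix stays short.
    effectiveOffset = totalRows - offset - pageSize;
    if (effectiveOffset < 0) {          // clamp the last, partial page
        pageSize += effectiveOffset;
        effectiveOffset = 0;
    }
    sql = "SELECT exec_id, project_id FROM execution_jobs ORDER BY exec_id DESC LIMIT ?, ?";
}
PreparedStatement ps = dm.prepareStatement(sql);
ps.setInt(1, effectiveOffset);
ps.setInt(2, pageSize);
// Rows from the DESC branch arrive in reverse order and may need to be reversed in memory.
ResultSet rs = ps.executeQuery();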
Using MySQL streaming queries
Optimizing LIMIT with the techniques above certainly works, but MySQL's streaming query is a better way to meet this requirement.
By default, when JDBC runs a SELECT against MySQL, the entire result set is buffered in memory. With a streaming query, you can instead set a fetch value so that only a small portion of the data is read at a time. It is done like this:
PreparedStatement pStatement = dm.prepareStatement(
        "SELECT exec_id, project_id FROM execution_jobs",
        ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
pStatement.setFetchSize(Integer.MIN_VALUE);
ResultSet rs = pStatement.executeQuery();
while (rs.next()) {
    // do something with the current row
}
The pStatement.setFetchSize(Integer.MIN_VALUE) call here looks puzzling at first. Let's go straight to the MySQL Connector/J source:
/**
 * We only stream result sets when they are forward-only, read-only, and the
 * fetch size has been set to Integer.MIN_VALUE
 *
 * @return true if this result set should be streamed row at-a-time, rather
 *         than read all at once.
 */
protected boolean createStreamingResultSet() {
    try {
        synchronized (checkClosed().getConnectionMutex()) {
            return ((this.resultSetType == java.sql.ResultSet.TYPE_FORWARD_ONLY)
                    && (this.resultSetConcurrency == java.sql.ResultSet.CONCUR_READ_ONLY)
                    && (this.fetchSize == Integer.MIN_VALUE));
        }
    } catch (SQLException e) {
        // we can't break the interface, having this be no-op in case of error is ok
        return false;
    }
}
As the source shows, streaming is enabled only when all three conditions hold at once: the result set is forward-only, it is read-only, and the fetch size is Integer.MIN_VALUE. This answer is also worth a look:
http://stackoverflow.com/questions/20899977/what-and-when-should-i-specify-setfetchsize
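One practical caveat with streaming: while the streaming ResultSet is open, Connector/J does not allow any other statement to be executed on the same connection; the rows must be read through (or the result set closed) first, so the row-by-row processing should complete before the connection is reused.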