[docker networking] Docker cross-host networking with OVS GRE, test 2

1. Introduction

This article continues [docker networking] Docker cross-host networking with OVS GRE, test 1, and keeps testing cross-host container access implemented with OVS and GRE. In test 1 the containers were on the same subnet; this article tests how to make it work when the containers are on different subnets.

A basic understanding of Docker's network types is assumed; see [mydocker]---docker的四種網(wǎng)絡(luò)模型與原理實現(xiàn)(1) and [mydocker]---docker的四種網(wǎng)絡(luò)模型與原理實現(xiàn)(2).

1.1 Current environment

vm1

[root@vm1 ~]# cat /proc/sys/net/ipv4/ip_forward
0
[root@vm1 ~]# iptables -t nat -F
[root@vm1 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.19.0.1      0.0.0.0         UG    0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
172.19.0.0      0.0.0.0         255.255.240.0   U     0      0        0 eth0
[root@vm1 ~]# ovs-vsctl show
91e815a1-1021-4c97-a21c-893ab8c28e37
    ovs_version: "2.5.1"
[root@vm1 ~]# 

vm2

[root@vm2 ~]# echo 0 > /proc/sys/net/ipv4/ip_forward
[root@vm2 ~]# 
[root@vm2 ~]# cat /proc/sys/net/ipv4/ip_forward
0
[root@vm2 ~]# iptables -t nat -F
[root@vm2 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.19.0.1      0.0.0.0         UG    0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
172.19.0.0      0.0.0.0         255.255.240.0   U     0      0        0 eth0
[root@vm2 ~]# ovs-vsctl show
533800d4-246f-4099-a776-8254610db91f
    ovs_version: "2.5.1"
[root@vm2 ~]# 

2. OVS with containers on different subnets

2.1 Setup on vm1

[root@vm1 ~]# ip link add docker0 type bridge
[root@vm1 ~]# ip addr add 172.17.1.254/24 dev docker0
[root@vm1 ~]# ip link set docker0 up
[root@vm1 ~]# ip netns add ns1
[root@vm1 ~]# ip link add veth0 type veth peer name veth1
[root@vm1 ~]# brctl addif docker0 veth0
[root@vm1 ~]# ip link set veth1 netns ns1
[root@vm1 ~]# ip link set veth0 up
[root@vm1 ~]# ip netns exec ns1 sh
sh-4.2# ip addr add 172.17.1.1/24 dev veth1
sh-4.2# ip link set veth1 up
sh-4.2# ip link set lo up
sh-4.2# route add default gw 172.17.1.254
sh-4.2# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.17.1.254    0.0.0.0         UG    0      0        0 veth1
172.17.1.0      0.0.0.0         255.255.255.0   U     0      0        0 veth1
sh-4.2# ping -c 1 172.17.1.254
PING 172.17.1.254 (172.17.1.254) 56(84) bytes of data.
64 bytes from 172.17.1.254: icmp_seq=1 ttl=64 time=0.078 ms

--- 172.17.1.254 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms
sh-4.2# ping -c 1 172.19.0.12
PING 172.19.0.12 (172.19.0.12) 56(84) bytes of data.
64 bytes from 172.19.0.12: icmp_seq=1 ttl=64 time=0.048 ms

--- 172.19.0.12 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms
sh-4.2# exit
exit
[root@vm1 ~]# 

2.2 Setup on vm2

[root@vm2 ~]# ip link add docker0 type bridge
[root@vm2 ~]# ip addr add 192.168.2.254/24 dev docker0
[root@vm2 ~]# ip link set docker0 up
[root@vm2 ~]# ip netns add ns1
[root@vm2 ~]# ip link add veth0 type veth peer name veth1
[root@vm2 ~]# brctl addif docker0 veth0
[root@vm2 ~]# ip link set veth1 netns ns1
[root@vm2 ~]# ip link set veth0 up
[root@vm2 ~]# ip netns exec ns1 sh
sh-4.2# ip addr add 192.168.2.1/24 dev veth1
sh-4.2# ip link set veth1 up
sh-4.2# ip link set lo up
sh-4.2# route add default gw 192.168.2.254
sh-4.2# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.2.254   0.0.0.0         UG    0      0        0 veth1
192.168.2.0     0.0.0.0         255.255.255.0   U     0      0        0 veth1
sh-4.2# ping -c 1 192.168.2.254
PING 192.168.2.254 (192.168.2.254) 56(84) bytes of data.
64 bytes from 192.168.2.254: icmp_seq=1 ttl=64 time=0.052 ms

--- 192.168.2.254 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms
sh-4.2# ping -c 1 172.19.0.8
PING 172.19.0.8 (172.19.0.8) 56(84) bytes of data.
64 bytes from 172.19.0.8: icmp_seq=1 ttl=64 time=0.031 ms

--- 172.19.0.8 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms
sh-4.2# exit
exit
[root@vm2 ~]# 

2.3 Adding the GRE configuration

At this point the two containers obviously cannot ping each other.

[root@vm1 ~]# ping -c 1 192.168.2.254
PING 192.168.2.254 (192.168.2.254) 56(84) bytes of data.

--- 192.168.2.254 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

[root@vm1 ~]# ping -c 1 192.168.2.1
PING 192.168.2.1 (192.168.2.1) 56(84) bytes of data.

--- 192.168.2.1 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

[root@vm1 ~]# 
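This failure is expected: vm1 has no route to 192.168.2.0/24 yet, and the two containers (172.17.1.1 and 192.168.2.1) sit in different /24 networks, so an L2 bridge alone cannot connect them; an L3 path is required. The mask arithmetic behind "different networks" can be sketched in plain bash (`network_of` is a hypothetical helper for illustration, not part of the setup):

```shell
#!/bin/bash
# network_of IP PREFIX -> network address, e.g. "172.17.1.0"
network_of() {
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  local addr=$(( (a<<24) | (b<<16) | (c<<8) | d ))
  local mask=$(( (0xffffffff << (32 - $2)) & 0xffffffff ))
  local net=$(( addr & mask ))
  printf '%d.%d.%d.%d\n' $((net>>24&255)) $((net>>16&255)) $((net>>8&255)) $((net&255))
}
network_of 172.17.1.1 24    # ns1 on vm1 -> 172.17.1.0
network_of 192.168.2.1 24   # ns1 on vm2 -> 192.168.2.0, a different network
```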
2.3.1 Add an OVS bridge

vm1

[root@vm1 ~]# ovs-vsctl add-br ovs1
[root@vm1 ~]# 
[root@vm1 ~]# ovs-vsctl add-port ovs1 rou1 -- set interface rou1 type=internal
[root@vm1 ~]# 
[root@vm1 ~]# ifconfig rou1 192.168.1.1/24
[root@vm1 ~]# 
[root@vm1 ~]# ovs-vsctl show
91e815a1-1021-4c97-a21c-893ab8c28e37
    Bridge "ovs1"
        Port "rou1"
            Interface "rou1"
                type: internal
        Port "ovs1"
            Interface "ovs1"
                type: internal
    ovs_version: "2.5.1"
[root@vm1 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.19.0.1      0.0.0.0         UG    0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
172.17.1.0      0.0.0.0         255.255.255.0   U     0      0        0 docker0
172.19.0.0      0.0.0.0         255.255.240.0   U     0      0        0 eth0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 rou1
[root@vm1 ~]# 

vm2

[root@vm2 ~]# ovs-vsctl add-br ovs2
[root@vm2 ~]# ovs-vsctl add-port ovs2 rou2 -- set interface rou2 type=internal
[root@vm2 ~]# ifconfig rou2 192.168.1.2/24
[root@vm2 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.19.0.1      0.0.0.0         UG    0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
172.19.0.0      0.0.0.0         255.255.240.0   U     0      0        0 eth0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 rou2
192.168.2.0     0.0.0.0         255.255.255.0   U     0      0        0 docker0
[root@vm2 ~]# ovs-vsctl show
533800d4-246f-4099-a776-8254610db91f
    Bridge "ovs2"
        Port "rou2"
            Interface "rou2"
                type: internal
        Port "ovs2"
            Interface "ovs2"
                type: internal
    ovs_version: "2.5.1"

From vm1, try to reach rou2 on vm2

[root@vm1 ~]# ping -c 1 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=0.036 ms

--- 192.168.1.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms
[root@vm1 ~]# ping -c 1 192.168.1.2
PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.
From 192.168.1.1 icmp_seq=1 Destination Host Unreachable

--- 192.168.1.2 ping statistics ---
1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms

[root@vm1 ~]# 
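The second ping fails for a different reason than before: 192.168.1.2 is inside rou1's own subnet, so the kernel treats it as on-link and ARPs for it directly on rou1, but ovs1 does not yet have any port leading to vm2. The on-link decision itself is just mask arithmetic; a bash sketch (`on_link` is a hypothetical helper):

```shell
#!/bin/bash
# on_link IFACE_IP PREFIX DST -> "yes" if DST is in the interface's subnet
on_link() {
  local IFS=. a b c d
  read -r a b c d <<< "$1"; local ifip=$(( (a<<24)|(b<<16)|(c<<8)|d ))
  read -r a b c d <<< "$3"; local dst=$(( (a<<24)|(b<<16)|(c<<8)|d ))
  local mask=$(( (0xffffffff << (32 - $2)) & 0xffffffff ))
  [ $(( ifip & mask )) -eq $(( dst & mask )) ] && echo yes || echo no
}
on_link 192.168.1.1 24 192.168.1.2   # yes: ARP directly on rou1 (needs an L2 path)
on_link 192.168.1.1 24 192.168.2.1   # no: would need a route instead
```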
2.3.2 Add the GRE tunnel

vm1

[root@vm1 ~]# ovs-vsctl add-port ovs1 gre1 -- set interface gre1 type=gre options:remote_ip=172.19.0.8
[root@vm1 ~]# 
[root@vm1 ~]# ovs-vsctl show
91e815a1-1021-4c97-a21c-893ab8c28e37
    Bridge "ovs1"
        Port "gre1"
            Interface "gre1"
                type: gre
                options: {remote_ip="172.19.0.8"}
        Port "rou1"
            Interface "rou1"
                type: internal
        Port "ovs1"
            Interface "ovs1"
                type: internal
    ovs_version: "2.5.1"
[root@vm1 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.19.0.1      0.0.0.0         UG    0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
172.17.1.0      0.0.0.0         255.255.255.0   U     0      0        0 docker0
172.19.0.0      0.0.0.0         255.255.240.0   U     0      0        0 eth0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 rou1

vm2

[root@vm2 ~]# ovs-vsctl add-port ovs2 gre2 -- set interface gre2 type=gre options:remote_ip=172.19.0.12
[root@vm2 ~]# ovs-vsctl show
533800d4-246f-4099-a776-8254610db91f
    Bridge "ovs2"
        Port "gre2"
            Interface "gre2"
                type: gre
                options: {remote_ip="172.19.0.12"}
        Port "rou2"
            Interface "rou2"
                type: internal
        Port "ovs2"
            Interface "ovs2"
                type: internal
    ovs_version: "2.5.1"
[root@vm2 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.19.0.1      0.0.0.0         UG    0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
172.19.0.0      0.0.0.0         255.255.240.0   U     0      0        0 eth0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 rou2
192.168.2.0     0.0.0.0         255.255.255.0   U     0      0        0 docker0
[root@vm2 ~]# 

Test, from vm1

// reach rou2 on vm2
[root@vm1 ~]# ping -c 1 192.168.1.2
PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.
64 bytes from 192.168.1.2: icmp_seq=1 ttl=64 time=1.30 ms

--- 192.168.1.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.309/1.309/1.309/0.000 ms
// reach the local rou1
[root@vm1 ~]# ping -c 1 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=0.026 ms

--- 192.168.1.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms
[root@vm1 ~]# 

vm2

// reach rou1 on vm1
[root@vm2 ~]# ping -c 1 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=0.691 ms

--- 192.168.1.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.691/0.691/0.691/0.000 ms
// reach the local rou2
[root@vm2 ~]# ping -c 1 192.168.1.2
PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.
64 bytes from 192.168.1.2: icmp_seq=1 ttl=64 time=0.028 ms

--- 192.168.1.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms
[root@vm2 ~]# 
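A practical caveat worth noting here (an addition, not part of the original test): GRE wraps each inner packet in an extra outer IPv4 header (20 bytes) plus a base GRE header (4 bytes, with no key/checksum options enabled). Small pings succeed, but full-size packets can exceed the underlay MTU. With a standard 1500-byte underlay, the effective inner MTU works out to:

```shell
UNDERLAY_MTU=1500
OUTER_IPV4=20   # outer IPv4 header added by the tunnel
GRE_BASE=4      # base GRE header, no optional fields
echo $(( UNDERLAY_MTU - OUTER_IPV4 - GRE_BASE ))
```

If later large transfers stall while ping still works, lowering the MTU of the container-side interfaces to 1476 or less is a common fix.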
2.3.3 Attach the OVS bridge to docker0

vm1

[root@vm1 ~]# brctl addif docker0 ovs1
[root@vm1 ~]# ip link set ovs1 up
[root@vm1 ~]# bridge link
16: veth0 state UP @(null): <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master docker0 state forwarding priority 32 cost 2 
22: ovs1 state UNKNOWN : <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master docker0 state forwarding priority 32 cost 100 
[root@vm1 ~]# 

vm2

[root@vm2 ~]# brctl addif docker0 ovs2
[root@vm2 ~]# ip link set ovs2 up
[root@vm2 ~]# bridge link
16: veth0 state UP @(null): <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master docker0 state forwarding priority 32 cost 2 
22: ovs2 state UNKNOWN : <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master docker0 state forwarding priority 32 cost 100 
[root@vm2 ~]# 

Test: the pings still fail, because the endpoints are not on the same network; this case was already covered in [docker networking] ovs gre test 1.

[root@vm1 ~]# ping -c 1 192.168.2.1
PING 192.168.2.1 (192.168.2.1) 56(84) bytes of data.

--- 192.168.2.1 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

[root@vm1 ~]# ping -c 1 192.168.2.254
PING 192.168.2.254 (192.168.2.254) 56(84) bytes of data.

--- 192.168.2.254 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

[root@vm1 ~]# 
2.3.4 Add routes

vm1

[root@vm1 ~]# route add -net 192.168.2.0/24 dev rou1
[root@vm1 ~]# 
[root@vm1 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.19.0.1      0.0.0.0         UG    0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
172.17.1.0      0.0.0.0         255.255.255.0   U     0      0        0 docker0
172.19.0.0      0.0.0.0         255.255.240.0   U     0      0        0 eth0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 rou1
192.168.2.0     0.0.0.0         255.255.255.0   U     0      0        0 rou1

vm2

[root@vm2 ~]# route add -net 172.17.1.0/24 dev rou2
[root@vm2 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.19.0.1      0.0.0.0         UG    0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
172.17.1.0      0.0.0.0         255.255.255.0   U     0      0        0 rou2
172.19.0.0      0.0.0.0         255.255.240.0   U     0      0        0 eth0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 rou2
192.168.2.0     0.0.0.0         255.255.255.0   U     0      0        0 docker0
[root@vm2 ~]# 
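With the new entry, packets from vm1 to 192.168.2.0/24 now match a more specific route than the default and leave via rou1, onto the GRE tunnel. Route selection is longest-prefix match; the sketch below mimics it over vm1's table above (an illustration only, the kernel's real FIB lookup is more involved; `ip2int` and `lookup` are hypothetical helpers):

```shell
#!/bin/bash
# Longest-prefix match over vm1's routing table shown above.
ip2int() {
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a<<24) | (b<<16) | (c<<8) | d ))
}
lookup() {
  local dst best_len=-1 best_if=""
  dst=$(ip2int "$1")
  while read -r net len dev; do
    local mask=0
    if [ "$len" -gt 0 ]; then
      mask=$(( (0xffffffff << (32 - len)) & 0xffffffff ))
    fi
    if (( (dst & mask) == ($(ip2int "$net") & mask) && len > best_len )); then
      best_len=$len
      best_if=$dev
    fi
  done <<'EOF'
0.0.0.0 0 eth0
172.17.1.0 24 docker0
172.19.0.0 20 eth0
192.168.1.0 24 rou1
192.168.2.0 24 rou1
EOF
  echo "$best_if"
}
lookup 192.168.2.254   # the new /24 route wins over the default -> rou1
lookup 172.19.0.8      # unchanged -> eth0
```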

Test: access from vm1

// pinging docker0 on vm2 (192.168.2.254) now works
[root@vm1 ~]# ping -c 1 192.168.2.254
PING 192.168.2.254 (192.168.2.254) 56(84) bytes of data.
64 bytes from 192.168.2.254: icmp_seq=1 ttl=64 time=1.11 ms

--- 192.168.2.254 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.111/1.111/1.111/0.000 ms
// pinging ns1 on vm2 (192.168.2.1) fails, because ip_forward is not enabled on vm2
[root@vm1 ~]# ping -c 1 192.168.2.1
PING 192.168.2.1 (192.168.2.1) 56(84) bytes of data.

--- 192.168.2.1 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

Enable ip_forward on vm2

[root@vm2 ~]# echo 1 > /proc/sys/net/ipv4/ip_forward

Test again

[root@vm1 ~]# ping -c 1 192.168.2.1
PING 192.168.2.1 (192.168.2.1) 56(84) bytes of data.
64 bytes from 192.168.2.1: icmp_seq=1 ttl=63 time=0.709 ms

--- 192.168.2.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.709/0.709/0.709/0.000 ms
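Note the ttl=63 in this reply, versus ttl=64 from 192.168.2.254 earlier: docker0's address is delivered locally on vm2, while ns1's reply is routed by vm2 from docker0 back out rou2, and every IP forwarding hop decrements TTL by one. A trivial check of the arithmetic:

```shell
START_TTL=64   # common Linux default initial TTL
HOPS=1         # one forwarding hop on the return path: vm2, docker0 -> rou2
echo $(( START_TTL - HOPS ))
```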

By the same reasoning, ip_forward must also be enabled on vm1

[root@vm1 ~]# echo 1 > /proc/sys/net/ipv4/ip_forward
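These echo commands only change the running kernel and are lost on reboot. To persist the setting, the usual approach is a sysctl fragment like the following (the file name is an assumption; any file under /etc/sysctl.d/ is loaded at boot on systemd-based distributions such as this CentOS setup):

```
# /etc/sysctl.d/99-ip-forward.conf
net.ipv4.ip_forward = 1
```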
2.3.5 Access between containers
[root@vm1 ~]# ip netns exec ns1 sh
sh-4.2# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.17.1.254    0.0.0.0         UG    0      0        0 veth1
172.17.1.0      0.0.0.0         255.255.255.0   U     0      0        0 veth1
// reach the local docker0
sh-4.2# ping -c 1 172.17.1.254
PING 172.17.1.254 (172.17.1.254) 56(84) bytes of data.
64 bytes from 172.17.1.254: icmp_seq=1 ttl=64 time=0.050 ms

--- 172.17.1.254 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms
// reach ns1's own address
sh-4.2# ping -c 1 172.17.1.1
PING 172.17.1.1 (172.17.1.1) 56(84) bytes of data.
64 bytes from 172.17.1.1: icmp_seq=1 ttl=64 time=0.026 ms

--- 172.17.1.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms
// reach the host's own IP (vm1's eth0)
sh-4.2# ping -c 1 172.19.0.12
PING 172.19.0.12 (172.19.0.12) 56(84) bytes of data.
64 bytes from 172.19.0.12: icmp_seq=1 ttl=64 time=0.038 ms

--- 172.19.0.12 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms
// reach the local rou1
sh-4.2# ping -c 1 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=0.044 ms

--- 192.168.1.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms
// reach rou2 on vm2
sh-4.2# ping -c 1 192.168.1.2
PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.
64 bytes from 192.168.1.2: icmp_seq=1 ttl=64 time=0.760 ms

--- 192.168.1.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.760/0.760/0.760/0.000 ms
// reach docker0 on vm2
sh-4.2# ping -c 1 192.168.2.254
PING 192.168.2.254 (192.168.2.254) 56(84) bytes of data.
64 bytes from 192.168.2.254: icmp_seq=1 ttl=64 time=0.353 ms

--- 192.168.2.254 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms
// reach ns1 on vm2
sh-4.2# ping -c 1 192.168.2.1
PING 192.168.2.1 (192.168.2.1) 56(84) bytes of data.
64 bytes from 192.168.2.1: icmp_seq=1 ttl=63 time=0.624 ms

--- 192.168.2.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms
// reaching vm2's host IP fails; an iptables rule is needed
sh-4.2# ping -c 1 172.19.0.8
PING 172.19.0.8 (172.19.0.8) 56(84) bytes of data.

--- 172.19.0.8 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

2.3.6 Add iptables rules
[root@vm1 ~]# iptables -t nat -A POSTROUTING -s 172.17.1.0/24 -o eth0 -j MASQUERADE
[root@vm1 ~]# 
[root@vm1 ~]# ip netns exec ns1 ping -c 1 172.19.0.8
PING 172.19.0.8 (172.19.0.8) 56(84) bytes of data.
64 bytes from 172.19.0.8: icmp_seq=1 ttl=63 time=0.380 ms

--- 172.19.0.8 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms
[root@vm1 ~]# 

Add the corresponding rule on vm2

[root@vm2 ~]# iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -o eth0 -j MASQUERADE

