
Problem with rolling restarts of Elasticsearch

I follow the official rolling-restart procedure: disable shard allocation, shut down a node, change its configuration and restart it, re-enable allocation, wait a few seconds for the shards to recover and the cluster to reach green, then repeat the process for the next node.
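A minimal sketch of those per-node steps against the cluster settings and health APIs; the localhost address, the Python requests library, and the helper names are assumptions of mine, not part of the original procedure.

# Sketch of the per-node rolling-restart steps for an ES 2.x cluster.
import requests

ES = "http://localhost:9200"  # assumed client-facing address

def set_allocation(mode):
    # "none" disables shard allocation before stopping the node, "all" re-enables it
    requests.put(ES + "/_cluster/settings", json={
        "transient": {"cluster.routing.allocation.enable": mode}
    }).raise_for_status()

def wait_for_status(status):
    # blocks until the cluster reports the requested health, e.g. "green"
    requests.get(ES + "/_cluster/health",
                 params={"wait_for_status": status, "timeout": "5m"}
                 ).raise_for_status()

set_allocation("none")
# ... stop the node, change its configuration, start it again (outside this script) ...
set_allocation("all")
wait_for_status("green")  # only then move on to the next node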

At first everything went fine, but when I got to restarting the master node (node2799 in the screenshot below), the cluster became completely unavailable. Note that this is different from a red status: with red you can at least see something through _cluster/health, whereas in this state every request blocked with no response and even kopf would not open. Only about a minute later, when a new master had been elected, did the cluster recover to yellow. My cluster is on version 2.2.0.
[screenshot: 1384683f55cd21d0a2e707e9a9f894f0.png]

Is there any way to avoid this? A full minute of unavailability does have some impact.

Someone recommended the setting
index.unassigned.node_left.delayed_timeout = 0
so that shards are reallocated immediately when a node leaves, but I tried it and the cluster still became unavailable. It does not seem to be a shard problem, since I have confirmed that every index has a full set of shards, and it only happens when restarting the master node.
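For reference, a sketch of how that setting could be applied to every index through the index settings API (host address assumed, same requests-based style as above):

import requests

ES = "http://localhost:9200"  # assumed

# remove the delay before replicas belonging to a departed node are reallocated
requests.put(ES + "/_all/_settings", json={
    "settings": {"index.unassigned.node_left.delayed_timeout": "0s"}
}).raise_for_status()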


To add to the symptoms: while the elected master is restarting, the cluster sits in the following state and the indices cannot be accessed:

[screenshot: c84b027d5bfb9e041d62dbf81533735f.png]

Meanwhile the remaining two nodes throw exceptions:

[2018-01-29 14:43:53,947][DEBUG][transport.netty          ] [node2797] disconnecting from [{node2799}{SbMekoCpRSG0aJHEUTpcYw}{10.37.27.99}{10.37.27.99:8302}{zone=test}] due to explicit disconnect call
[2018-01-29 14:43:53,946][WARN ][discovery.zen.ping.unicast] [node2797] failed to send ping to [{node2799}{SbMekoCpRSG0aJHEUTpcYw}{10.37.27.99}{10.37.27.99:8302}{zone=test}]
RemoteTransportException[[node2799][10.37.27.99:8302][internal:discovery/zen/unicast]]; nested: IllegalStateException[received ping request while not started];
Caused by: java.lang.IllegalStateException: received ping request while not started
    at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.handlePingRequest(UnicastZenPing.java:497)
    at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.access$2400(UnicastZenPing.java:83)
    at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing$UnicastPingRequestHandler.messageReceived(UnicastZenPing.java:522)
    at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing$UnicastPingRequestHandler.messageReceived(UnicastZenPing.java:518)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleRequest(MessageChannelHandler.java:244)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:114)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:75)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2018-01-29 14:43:53,947][DEBUG][cluster.service          ] [node2797] processing [zen-disco-master_failed ({node2799}{SbMekoCpRSG0aJHEUTpcYw}{10.37.27.99}{10.37.27.99:8302}{zone=test})]: took 5ms done applying updated cluster_state (version: 404, uuid: PRd4yp30SKKm-KPgv7aWow)
[2018-01-29 14:43:55,155][DEBUG][action.admin.cluster.state] [node2797] no known master node, scheduling a retry
[2018-01-29 14:43:55,161][DEBUG][action.admin.indices.get ] [node2797] no known master node, scheduling a retry
[2018-01-29 14:43:55,162][DEBUG][action.admin.cluster.state] [node2797] no known master node, scheduling a retry
[2018-01-29 14:43:55,165][DEBUG][action.admin.cluster.health] [node2797] no known master node, scheduling a retry
[2018-01-29 14:44:19,878][DEBUG][indices.memory           ] [node2797] recalculating shard indexing buffer, total is [778.2mb] with [2] active shards, each shard set to indexing=[389.1mb], translog=[64kb]
[2018-01-29 14:44:23,945][DEBUG][transport.netty          ] [node2797] connected to node [{#zen_unicast_4_CyzI-D1KQj6UlM6k2pP43A#}{10.37.27.99}{10.37.27.99:8302}{zone=test}]
[2018-01-29 14:44:23,945][DEBUG][transport.netty          ] [node2797] connected to node [{#zen_unicast_3#}{10.37.27.99}{10.37.27.99:8302}]
[2018-01-29 14:44:25,156][DEBUG][action.admin.cluster.state] [node2797] timed out while retrying [cluster:monitor/state] after failure (timeout [30s])
[2018-01-29 14:44:25,157][INFO ][rest.suppressed          ] /_cluster/state/master_node,routing_table,blocks/ Params: {metric=master_node,routing_table,blocks}
MasterNotDiscoveredException[null]
    at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$5.onTimeout(TransportMasterNodeAction.java:205)
    at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:239)
    at org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(InternalClusterService.java:794)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2018-01-29 14:44:25,162][DEBUG][action.admin.indices.get ] [node2797] timed out while retrying [indices:admin/get] after failure (timeout [30s])
[2018-01-29 14:44:25,162][INFO ][rest.suppressed          ] /_aliases Params: {index=_aliases}
MasterNotDiscoveredException[null]
    at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$5.onTimeout(TransportMasterNodeAction.java:205)
    at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:239)
    at org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(InternalClusterService.java:794)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2018-01-29 14:44:25,163][DEBUG][action.admin.cluster.state] [node2797] timed out while retrying [cluster:monitor/state] after failure (timeout [30s])
[2018-01-29 14:44:25,164][INFO ][rest.suppressed          ] /_cluster/settings Params: {}
MasterNotDiscoveredException[null]
    at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$5.onTimeout(TransportMasterNodeAction.java:205)
    at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:239)
    at org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(InternalClusterService.java:794)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2018-01-29 14:44:25,166][DEBUG][action.admin.cluster.health] [node2797] timed out while retrying [cluster:monitor/health] after failure (timeout [30s])
[2018-01-29 14:44:25,166][INFO ][rest.suppressed          ] /_cluster/health Params: {}
MasterNotDiscoveredException[null]
    at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$5.onTimeout(TransportMasterNodeAction.java:205)
    at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:239)
    at org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(InternalClusterService.java:794)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Answer (熟稔)

I have found the cause: one of my settings was too large.
discovery.zen.ping_timeout: 60s
That is how I originally had it configured, and 60s is far too long. This setting affects how long master election takes; with a value that large, the election drags on and many requests are affected in the meantime.
Changing it to 10s helped a lot.
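For reference, the adjusted line in each node's elasticsearch.yml would simply be:

discovery.zen.ping_timeout: 10s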

2018-07-30 22:22