Critical worker threads liveness checking drawbacks

Critical worker threads liveness checking drawbacks

Andrey Kuznetsov
Igniters,

Currently, we have a nearly completed implementation for system-critical
threads liveness checking [1], in terms of IEP-14 [2] and IEP-5 [3]. In a
nutshell, system-critical threads monitor each other and check two
aspects:
- whether a thread is alive;
- whether a thread is active, i.e. it updates its heartbeat timestamp
periodically.
When either check fails, the critical failure handler is called, which in fact
means node stop.
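
As an illustration, a minimal sketch of such a heartbeat-based check might look
as follows (the class and method names are hypothetical, not the actual Ignite
implementation):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch: each critical worker bumps its heartbeat from its main loop,
// and any peer treats a stale heartbeat as a critical condition.
class LivenessRegistry {
    private final Map<String, Long> heartbeats = new ConcurrentHashMap<>();
    private final long timeoutMs;

    LivenessRegistry(long timeoutMs) {
        this.timeoutMs = timeoutMs;
    }

    /** Called by a critical worker on each iteration of its main loop. */
    void updateHeartbeat(String workerName) {
        heartbeats.put(workerName, System.currentTimeMillis());
    }

    /** Called by peer workers; returns the name of the first blocked worker, if any. */
    String findBlockedWorker() {
        long now = System.currentTimeMillis();

        for (Map.Entry<String, Long> e : heartbeats.entrySet()) {
            if (now - e.getValue() > timeoutMs)
                return e.getKey(); // Stale heartbeat: the worker is considered blocked.
        }

        return null; // Everything is alive and active.
    }
}

When findBlockedWorker() returns a non-null name, the critical failure handler
would be invoked, which by default stops the node.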

The implementation of activity checks has a flaw now: some blocking actions
are parts of normal operation and should not lead to node stop, e.g.
- WAL writer thread can call {{fsync()}};
- any cache write that occurs in system striped executor can lead to
{{fsync()}} call again.
The former example can be fixed by disabling heartbeat checks temporarily
for known long-running actions, but it won't work for the latter one.
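
For the former case, the temporary disabling could be implemented as a pair of
guard calls around the known blocking operation. The sketch below only
illustrates the idea and does not claim to be the actual Ignite API:

import java.io.IOException;
import java.nio.channels.FileChannel;

// Hypothetical sketch: suspend heartbeat monitoring around a known blocking call.
class GuardedWalWriter {
    /** While false, the watchdog skips this worker. */
    private volatile boolean monitored = true;

    void blockingSectionBegin() {
        monitored = false;
    }

    void blockingSectionEnd() {
        monitored = true; // The heartbeat should also be refreshed here.
    }

    void syncSegment(FileChannel ch) throws IOException {
        blockingSectionBegin();

        try {
            ch.force(true); // fsync() may legitimately take a long time.
        }
        finally {
            blockingSectionEnd();
        }
    }
}

This only works where the blocking call site is known in advance, which is
exactly why it does not help with the striped executor case.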

I see a few options to address the issue:
- Just log any long-running action instead of calling critical failure
handler.
- Introduce several severity levels for long-running actions handling. Each
level will have its own failure handler. Depending on the level,
long-running action can lead to node stop, error logging or no-op reaction.
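
For the second option, the severity levels might be modelled roughly like this
(hypothetical names, just a sketch of the idea):

// Hypothetical sketch of per-severity reactions to long-running actions.
enum BlockageSeverity {
    IGNORE,    // No-op reaction.
    LOG,       // Only write an error to the log.
    STOP_NODE  // Delegate to the critical failure handler, i.e. stop the node.
}

class SeverityAwareReaction {
    private final BlockageSeverity severity;

    SeverityAwareReaction(BlockageSeverity severity) {
        this.severity = severity;
    }

    void onLongRunningAction(String workerName, long blockedForMs) {
        switch (severity) {
            case LOG:
                System.err.println("Worker " + workerName + " has been blocked for " + blockedForMs + " ms");
                break;

            case STOP_NODE:
                // Call the critical failure handler here.
                break;

            case IGNORE:
            default:
                // Do nothing.
        }
    }
}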

I encourage you to suggest other options. Any idea is appreciated.

[1] https://issues.apache.org/jira/browse/IGNITE-6587
[2]
https://cwiki.apache.org/confluence/display/IGNITE/IEP-14+Ignite+failures+handling
[3]
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=74683878

--
Best regards,
  Andrey Kuznetsov.

Re: Critical worker threads liveness checking drawbacks

yzhdanov
Andrey,

I don't understand your point. In my opinion, the idea of these changes is to
make the cluster more stable and responsive by eliminating hung nodes. I
would not make too much of a difference between threads trapped in a deadlock
and threads hanging on fsync calls for too long. Both situations lead to
increasing latency in the cluster up to its full unavailability.

So, killing a node hanging on fsync may be reasonable. Agree?

You may implement an approach where you have warning messages in the logs by
default, but a termination option should also be available.

Thanks!

--Yakov


Re: Critical worker threads liveness checking drawbacks

Andrey Kuznetsov
Yakov,

Thanks for the reply. Indeed, the initial design assumed node termination when
a hanging critical thread has been detected. But sometimes it looks
inappropriate. Let, for example, fsync in the WAL writer thread take too long,
and we terminate the node. Upon rebalancing, this may lead to long fsyncs
on other nodes due to increased per-node load, hence we may terminate the
next node as well. Eventually we can collapse the entire cluster. Is this a
possible scenario?


Re: Critical worker threads liveness checking drawbacks

yzhdanov
Yes, and you should suggest a solution, e.g. throttle rebalancing threads
more to produce less load.

What you are suggesting kills the idea of this enhancement.

--Yakov


Re: Critical worker threads liveness checking drawbacks

David Harvey
In reply to this post by Andrey Kuznetsov
There are at least two production cases that need to be distinguished.
The first is where a single node restart will repair the problem (and you
get the right node).
The other cases are those where stopping the node will invalidate its
backups, leaving only one copy of the data, and the problem is not
resolved. Lots of opportunities to destroy all copies. Automated
decisions should take into account whether the node in question is the last
source of truth.

Killing off a single bad actor using automation is safer than having humans
(with the CEO screaming at them) try.
-DH


PS: I'm just finalizing an extension which allows cache templates created
in Spring to force primaries and backups to different failure
domains (availability zones) with no need for custom Java code, and I have been
fretting over all the ways to lose data.


Re: Critical worker threads liveness checking drawbacks

yzhdanov
Agree with David. We need to have an opportunity to set a backups count
threshold (at runtime also!) that will not allow any automatic stop if it
would lead to data loss. Andrey, what do you think?

--Yakov

Re: Critical worker threads liveness checking drawbacks

Andrey Kuznetsov
David, Yakov, I understand your fears. But liveness checks deal with
_critical_ conditions, i.e. when such a condition is met we consider the
node totally broken, and there is no sense in keeping it alive regardless of
the data it contains. If we want to give it a chance, then the condition
(long fsync etc.) should not be considered critical at all.


--
Best regards,
  Andrey Kuznetsov.

Re: Critical worker threads liveness checking drawbacks

Andrey Gura-2
Hi,

I agree with Yakov that we can provide some option that manages the worker
liveness checker's behavior when it observes that some worker has been
blocked for too long.
At least it will be some workaround for cases when node failure is too annoying.

The backups count threshold sounds good, but I don't understand how it will
help in case of cluster hanging.

The simplest solution here is an alert in case some critical worker is
blocked (we can improve WorkersRegistry for this purpose and
expose the list of blocked workers) and optionally calling the configured
failure processor. BTW, the failure processor can be extended in order to
perform any checks (e.g. backup count) and decide whether it should
stop the node or not.
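
As a sketch of that last point, a backup-aware failure handler might look like
the following. FailureHandler and FailureContext are the interfaces introduced
by IEP-14; the backup-counting logic here is only a placeholder assumption:

import org.apache.ignite.Ignite;
import org.apache.ignite.failure.FailureContext;
import org.apache.ignite.failure.FailureHandler;

// Hypothetical sketch: refuse to stop the node when doing so could leave data
// without backups; otherwise fall back to the default reaction.
class BackupAwareFailureHandler implements FailureHandler {
    private final int minBackups;

    BackupAwareFailureHandler(int minBackups) {
        this.minBackups = minBackups;
    }

    @Override public boolean onFailure(Ignite ignite, FailureContext failureCtx) {
        if (aliveBackups(ignite) < minBackups) {
            ignite.log().error("Critical failure detected, but stopping the node " +
                "would risk data loss; only logging: " + failureCtx);

            return false; // Do not invalidate/stop the node.
        }

        return true; // Allow the default reaction (node invalidation and stop).
    }

    private int aliveBackups(Ignite ignite) {
        // Placeholder: a real implementation would inspect cache configurations
        // and the current partition distribution.
        return ignite.cluster().forServers().nodes().size() - 1;
    }
}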

Re: Critical worker threads liveness checking drawbacks

David Harvey
It would be safer to restart the entire cluster than to remove the last
node for a cache that should be redundant.


Re: Critical worker threads liveness checking drawbacks

Mmuzaf
I think we should find exact answers to these questions:
 1. What `critical` issue exactly is?
 2. How can we find critical issues?
 3. How can we handle critical issues?

First,
 - Ignore uninterruptible actions (e.g. worker/service shutdown)
 - Long I/O operations (should be a configurable timeout for each type of
usage)
 - Infinite loops
 - Stalled/deadlocked threads (and/or too many parked threads, excluding I/O)

Second,
 - The working queue is without progress (e.g. disco, exchange queues)
 - Work hasn't been completed since the last heartbeat (checking milestones)
 - Too many system resources used by a thread for a long period of time
(allocated memory, CPU)
 - Timing fields associated with each thread status exceeded a maximum time
limit.

Third (not too many options here),
 - `log everything` should be the default behaviour in all these cases,
since it may be difficult to find the cause after the restart.
 - Wait some interval of time and kill the hanging node (cluster should be
configured stable enough)

Questions,
 - Not sure, but can workers miss their heartbeat deadlines if CPU load goes up
to 80%-90%? Bursts of momentary overload can be
    expected behaviour as a normal part of system operation.
 - Why did we decide that critical threads should monitor each other? For
instance, if all the tasks were blocked and unable to run,
    node reset would never occur. As for me, a better solution is to use a
separate monitor thread or pool (maybe both with software
    and hardware checks) that not only checks heartbeats but monitors the
rest of the system as well.
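
As an example of the "working queue is without progress" criterion, a dedicated
monitor could watch whether the head of a worker's queue changes over time;
the sketch below is hypothetical:

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical sketch: detect a stalled queue by observing that the same task
// stays at the head longer than a configured threshold.
class QueueProgressMonitor {
    private final Queue<Runnable> queue = new ConcurrentLinkedQueue<>();

    private Runnable lastSeenHead;
    private long headUnchangedSinceMs = System.currentTimeMillis();

    void submit(Runnable task) {
        queue.add(task);
    }

    /** Called periodically by a separate monitor thread. */
    boolean isStalled(long maxHeadAgeMs) {
        Runnable head = queue.peek();
        long now = System.currentTimeMillis();

        if (head == null || head != lastSeenHead) {
            lastSeenHead = head;
            headUnchangedSinceMs = now;

            return false; // The queue is empty or has made progress.
        }

        return now - headUnchangedSinceMs > maxHeadAgeMs; // Same task stuck at the head.
    }
}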

--
Maxim Muzafarov

Re: Critical worker threads liveness checking drawbacks

David Harvey
When I've done this before, I've needed to find the oldest thread, and kill
the node running that. From a language standpoint, Maxim's "without
progress" is better than "heartbeat". For example, what I'm most interested
in on a distributed system is which thread started the work it has not
completed the earliest, and when did that thread last make forward
progress. You don't want to kill a node because a thread is waiting on a
lock held by a thread that went off-node and has not gotten a response.
If you don't understand the dependency relationships, you will make
incorrect recovery decisions.
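
A sketch of the bookkeeping this implies, i.e. finding the thread whose
still-incomplete work started the earliest, might look like this (a
hypothetical helper for illustration only):

import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: record when each thread started its current unit of work,
// so a monitor can find the thread that has gone longest without completing it.
class WorkTracker {
    private final Map<Thread, Long> workStartedAt = new ConcurrentHashMap<>();

    void workStarted() {
        workStartedAt.put(Thread.currentThread(), System.currentTimeMillis());
    }

    void workCompleted() {
        workStartedAt.remove(Thread.currentThread());
    }

    /** The thread whose still-incomplete unit of work started the earliest, if any. */
    Optional<Thread> oldestIncomplete() {
        return workStartedAt.entrySet().stream()
            .min(Map.Entry.comparingByValue())
            .map(Map.Entry::getKey);
    }
}

It deliberately says nothing about cross-node lock dependencies, which is the
harder part pointed out above.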


Re: Critical worker threads liveness checking drawbacks

vgrigorev
In reply to this post by yzhdanov
Reliability of Ignite is very important to me, so please consider the following
idea:

- Important threads such as the WAL writer (as an example of any critical
thread) must not do any blocking action themselves. Instead:
   - The WAL thread must be a management thread for all WAL operations.
   - Child worker threads of the WAL writer do the separate operations which
implement the concrete WAL writes.
   - Operations are separate units of work, countable by their own heartbeats,
with their own characteristics and ids.
   - Operations are written into a queue and have state.
   - If a hang occurs in a concrete operation, this operation may be cancelled
(all child operations in the cluster too), while all other operations continue
to work; the failed operation goes to a recovery state or is reported to the
user as failed.
   - If a WAL child thread does an infinite blocking operation, that worker
thread can be killed and a new one started with the same queue of WAL
operations.

So, we become able to:
- always know which concrete operation is hung (not just that the whole main WAL
thread is hung), so we can better decide what to do;
- keep WAL operations responsive: at minimum the thread reports that it has been
executing some operation for a long time, and it can still accept the next
operation into the queue or report a failure;
- report the size of the queue and other detailed information about what is
happening, and decide precisely: fail concrete user operations, clean up
resources, spawn a new worker thread, or else, and continue to work without a
painful node or cluster restart;
- keep the required cleanup minimal (just some operations);
- balance operations with queues, also implementing backpressure, so that the
optimal performance load is kept and the cluster will not degrade from some
local oversaturation;
- never see a node hang; instead it just degrades and stays in a fully
controlled state.

- The WAL thread's operation-management functions can be encapsulated in a
special class with that functionality and called from other main threads, as
now.
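
A minimal sketch of this queued-operation model could be the following (all
names are hypothetical, and the timeout handling is simplified):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Hypothetical sketch: a management thread dispatches discrete operations to a
// worker; each operation can be cancelled or timed out individually, and the
// queue survives a worker restart.
class ManagedWalWriter {
    private final BlockingQueue<Callable<Void>> queue = new LinkedBlockingQueue<>();
    private ExecutorService worker = Executors.newSingleThreadExecutor();

    void submit(Callable<Void> op) {
        queue.add(op);
    }

    /** Management loop step: run one operation with a timeout; replace a hung worker. */
    void runOnce(long timeoutMs) throws InterruptedException {
        Callable<Void> op = queue.take();
        Future<Void> fut = worker.submit(op);

        try {
            fut.get(timeoutMs, TimeUnit.MILLISECONDS);
        }
        catch (TimeoutException e) {
            fut.cancel(true);          // Cancel only the hung operation.
            worker.shutdownNow();      // Kill the stuck worker thread.
            worker = Executors.newSingleThreadExecutor(); // Continue with the same queue.
        }
        catch (ExecutionException e) {
            // Report the failed operation to the user / recovery logic.
        }
    }
}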

Sorry for any inconvenience, I'm new to writing here



--
Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/

Re: Critical worker threads liveness checking drawbacks

Andrey Kuznetsov
In reply to this post by David Harvey
David, Maxim!

Thanks a lot for your ideas. Unfortunately, I can't adopt all of them right
now: the scope is much broader than the scope of the change I am implementing.
I have had a talk with a group of Ignite committers, and we agreed to complete
the change as follows.
- Blocking instructions in system-critical threads which may reasonably last
long should be explicitly excluded from the monitoring.
- Failure handlers should have a setting to suppress some failures on a
per-failure-type basis.
According to this, I have updated the implementation: [1]

[1] https://github.com/apache/ignite/pull/4089
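
For the second point, the resulting usage might look roughly like the snippet
below. It assumes the per-failure-type suppression setting described above, so
the exact enum constant and setter names may differ in the final API:

import java.util.EnumSet;

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.failure.FailureType;
import org.apache.ignite.failure.StopNodeOrHaltFailureHandler;

public class FailureHandlerConfigExample {
    public static void main(String[] args) {
        StopNodeOrHaltFailureHandler hnd = new StopNodeOrHaltFailureHandler();

        // Blocked system workers are only reported; all other critical
        // failures still stop the node.
        hnd.setIgnoredFailureTypes(EnumSet.of(FailureType.SYSTEM_WORKER_BLOCKED));

        IgniteConfiguration cfg = new IgniteConfiguration().setFailureHandler(hnd);

        Ignition.start(cfg);
    }
}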



--
Best regards,
  Andrey Kuznetsov.

Re: Critical worker threads liveness checking drawbacks

Andrey Gura-2
Andrey,

your change has finally been merged to the master branch. Congratulations and
thank you very much! :)

I think that the next step is a feature that will allow signaling about
blocked threads to monitoring tools via an MXBean.

I hope you will continue development of this feature and provide your
vision in a new JIRA issue.
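
Such a bean could be as small as the following sketch (a hypothetical interface
for illustration, not an existing Ignite MXBean):

import java.util.List;
import javax.management.MXBean;

// Hypothetical sketch of an MXBean exposing blocked critical workers to monitoring tools.
@MXBean
public interface BlockedWorkersMxBean {
    /** Names of system-critical workers currently registered on the node. */
    List<String> getWorkerNames();

    /** Names of workers whose heartbeat is older than the configured threshold. */
    List<String> getBlockedWorkerNames();

    /** Heartbeat timeout in milliseconds after which a worker is reported as blocked. */
    long getLivenessCheckTimeout();
}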



Re: Critical worker threads liveness checking drawbacks

Denis Magda-2
Andrey K. and G.,

Thanks, do we have a documentation ticket created? Prachi (copied) can help
with the documentation.

--
Denis


Re: Critical worker threads liveness checking drawbacks

Andrey Kuznetsov
Denis,

I've created the ticket [1] with a short description of the functionality.

[1] https://issues.apache.org/jira/browse/IGNITE-9679




--
Best regards,
  Andrey Kuznetsov.

Re: Critical worker threads liveness checking drawbacks

daradurvs
Hi Igniters!

Thank you for this important improvement!

I've looked through the implementation and noticed that
GridDhtPartitionsExchangeFuture#init has not been wrapped in a blocked
section. This means it is easy to halt the node in case of long-running
actions during PME, for example when we create a cache with a
StoreFactory which connects to a 3rd party DB.

I'm not sure that this is the right behavior.

I filed the issue [1] and prepared the PR [2] with a reproducer and possible fix.

Andrey, could you please take a look and confirm that it makes sense?

[1] https://issues.apache.org/jira/browse/IGNITE-9710
[2] https://github.com/apache/ignite/pull/4845
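
The reproducer boils down to a cache whose store factory blocks during
creation; a sketch of it (with an artificial sleep instead of a real 3rd party
DB connection) could look like this:

import javax.cache.configuration.Factory;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.store.CacheStore;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.configuration.CacheConfiguration;

public class SlowStoreFactoryReproducer {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Integer, Integer> ccfg = new CacheConfiguration<>("slow-store-cache");

            ccfg.setCacheStoreFactory((Factory<CacheStore<Integer, Integer>>)() -> {
                try {
                    // Simulates a slow connection to a 3rd party DB during cache start.
                    Thread.sleep(60_000);
                }
                catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }

                return new CacheStoreAdapter<Integer, Integer>() {
                    @Override public Integer load(Integer key) { return null; }
                    @Override public void write(javax.cache.Cache.Entry<? extends Integer, ? extends Integer> e) { }
                    @Override public void delete(Object key) { }
                };
            });

            // Cache creation triggers partition map exchange; the exchange worker
            // spends a long time in the factory call above.
            ignite.createCache(ccfg);
        }
    }
}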



--
Best Regards, Vyacheslav D.
Reply | Threaded
Open this post in threaded view
|

Re: Critical worker threads liveness checking drawbacks

Andrey Gura-2
Vyacheslav,

The exchange worker is strongly tied to
GridDhtPartitionsExchangeFuture#init, and that is ok. The exchange worker also
shouldn't be blocked for a long time, but in reality it happens. It also
means that your change doesn't make sense.

What actually makes sense is identifying the places that block
intentionally. Maybe some of these places/actions should be braced by
blocking guards.
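
For illustration, here is a minimal, self-contained sketch of such a guard
around an intentionally blocking call (an fsync). The enclosing class and the
LivenessGuard interface are made up for the example; only the
blockingSectionBegin()/blockingSectionEnd() pair follows the guard methods
discussed in this thread.

// Sketch only: braces a known-blocking action so the liveness checker does
// not treat the missing heartbeats as a critical failure.
final class BlockingGuardExample {
    /** Hypothetical view of the worker registry as seen by a critical worker. */
    interface LivenessGuard {
        void blockingSectionBegin(); // watchdog stops expecting heartbeats
        void blockingSectionEnd();   // watchdog resumes heartbeat checks
    }

    static void fsyncWithGuard(LivenessGuard guard, java.nio.channels.FileChannel ch)
        throws java.io.IOException {
        guard.blockingSectionBegin();

        try {
            ch.force(true); // intentionally long-running I/O (fsync)
        }
        finally {
            guard.blockingSectionEnd();
        }
    }
}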

If you have failing tests, please make sure that your failureHandler is
NoOpFailureHandler or any other handler with ignoreFailureTypes =
[CRITICAL_WORKER_BLOCKED].
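
For completeness, a sketch (not a definitive recipe) of a test configuration
that ignores blocked-worker failures. The failure type is referred to as
CRITICAL_WORKER_BLOCKED in this thread; in released Ignite versions the enum
value appears as FailureType.SYSTEM_WORKER_BLOCKED, so treat the exact
constant and setter names as assumptions.

import java.util.Collections;

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.failure.FailureType;
import org.apache.ignite.failure.NoOpFailureHandler;

public class IgnoreBlockedWorkerExample {
    public static void main(String[] args) {
        // No-op handler that additionally ignores "worker blocked" failures.
        NoOpFailureHandler hnd = new NoOpFailureHandler();
        hnd.setIgnoredFailureTypes(Collections.singleton(FailureType.SYSTEM_WORKER_BLOCKED));

        IgniteConfiguration cfg = new IgniteConfiguration().setFailureHandler(hnd);

        Ignition.start(cfg);
    }
}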


On Wed, Sep 26, 2018 at 9:43 PM Vyacheslav Daradur <[hidden email]> wrote:

>
> Hi Igniters!
>
> Thank you for this important improvement!
>
> I've looked through implementation and noticed that
> GridDhtPartitionsExchangeFuture#init has not been wrapped in blocked
> section. This means it easy to halt the node in case of longrunning
> actions during PME, for example when we create a cache with
> StoreFactrory which connect to 3rd party DB.
>
> I'm not sure that it is the right behavior.
>
> I filled the issue [1] and prepared the PR [2] with reproducer and possible fix.
>
> Andrey, could you please look at and confirm that it makes sense?
>
> [1] https://issues.apache.org/jira/browse/IGNITE-9710
> [2] https://github.com/apache/ignite/pull/4845
> On Mon, Sep 24, 2018 at 9:46 PM Andrey Kuznetsov <[hidden email]> wrote:
> >
> > Denis,
> >
> > I've created the ticket [1] with short description of the functionality.
> >
> > [1] https://issues.apache.org/jira/browse/IGNITE-9679
> >
> >
> > пн, 24 сент. 2018 г. в 17:46, Denis Magda <[hidden email]>:
> >
> > > Andrey K. and G.,
> > >
> > > Thanks, do we have a documentation ticket created? Prachi (copied) can help
> > > with the documentation.
> > >
> > > --
> > > Denis
> > >
> > > On Mon, Sep 24, 2018 at 5:51 AM Andrey Gura <[hidden email]> wrote:
> > >
> > > > Andrey,
> > > >
> > > > finally your change is merged to master branch. Congratulations and
> > > > thank you very much! :)
> > > >
> > > > I think that the next step is feature that will allow signal about
> > > > blocked threads to the monitoring tools via MXBean.
> > > >
> > > > I hope you will continue development of this feature and provide your
> > > > vision in new JIRA issue.
> > > >
> > > >
> > > > On Tue, Sep 11, 2018 at 6:54 PM Andrey Kuznetsov <[hidden email]>
> > > > wrote:
> > > > >
> > > > > David, Maxim!
> > > > >
> > > > > Thanks a lot for you ideas. Unfortunately, I can't adopt all of them
> > > > right
> > > > > now: the scope is much broader than the scope of the change I
> > > implement.
> > > > I
> > > > > have had a talk to a group of Ignite commiters, and we agreed to
> > > complete
> > > > > the change as follows.
> > > > > - Blocking instructions in system-critical which may resonably last
> > > long
> > > > > should be explicitly excluded from the monitoring.
> > > > > - Failure handlers should have a setting to suppress some failures on
> > > > > per-failure-type basis.
> > > > > According to this I have updated the implementation: [1]
> > > > >
> > > > > [1] https://github.com/apache/ignite/pull/4089
> > > > >
> > > > > пн, 10 сент. 2018 г. в 22:35, David Harvey <[hidden email]>:
> > > > >
> > > > > > When I've done this before,I've needed to find the oldest  thread,
> > > and
> > > > kill
> > > > > > the node running that.   From a language standpoint, Maxim's "without
> > > > > > progress" better than "heartbeat".   For example, what I'm most
> > > > interested
> > > > > > in on a distributed system is which thread started the work it has
> > > not
> > > > > > completed the earliest, and when did that thread last make forward
> > > > > > process.     You don't want to kill a node because a thread is
> > > waiting
> > > > on a
> > > > > > lock held by a thread that went off-node and has not gotten a
> > > response.
> > > > > > If you don't understand the dependency relationships, you will make
> > > > > > incorrect recovery decisions.
> > > > > >
> > > > > > On Mon, Sep 10, 2018 at 4:08 AM Maxim Muzafarov <[hidden email]>
> > > > > > wrote:
> > > > > >
> > > > > > > I think we should find exact answers to these questions:
> > > > > > >  1. What `critical` issue exactly is?
> > > > > > >  2. How can we find critical issues?
> > > > > > >  3. How can we handle critical issues?
> > > > > > >
> > > > > > > First,
> > > > > > >  - Ignore uninterruptable actions (e.g. worker\service shutdown)
> > > > > > >  - Long I/O operations (should be a configurable timeout for each
> > > > type of
> > > > > > > usage)
> > > > > > >  - Infinite loops
> > > > > > >  - Stalled\deadlocked threads (and\or too many parked threads,
> > > > exclude
> > > > > > I/O)
> > > > > > >
> > > > > > > Second,
> > > > > > >  - The working queue is without progress (e.g. disco, exchange
> > > > queues)
> > > > > > >  - Work hasn't been completed since the last heartbeat (checking
> > > > > > > milestones)
> > > > > > >  - Too many system resources used by a thread for the long period
> > > of
> > > > time
> > > > > > > (allocated memory, CPU)
> > > > > > >  - Timing fields associated with each thread status exceeded a
> > > > maximum
> > > > > > time
> > > > > > > limit.
> > > > > > >
> > > > > > > Third (not too many options here),
> > > > > > >  - `log everything` should be the default behaviour in all these
> > > > cases,
> > > > > > > since it may be difficult to find the cause after the restart.
> > > > > > >  - Wait some interval of time and kill the hanging node (cluster
> > > > should
> > > > > > be
> > > > > > > configured stable enough)
> > > > > > >
> > > > > > > Questions,
> > > > > > >  - Not sure, but can workers miss their heartbeat deadlines if CPU
> > > > loads
> > > > > > up
> > > > > > > to 80%-90%? Bursts of momentary overloads can be
> > > > > > >     expected behaviour as a normal part of system operations.
> > > > > > >  - Why do we decide that critical thread should monitor each other?
> > > > For
> > > > > > > instance, if all the tasks were blocked and unable to run,
> > > > > > >     node reset would never occur. As for me, a better solution is
> > > to
> > > > use
> > > > > > a
> > > > > > > separate monitor thread or pool (maybe both with software
> > > > > > >     and hardware checks) that not only checks heartbeats but
> > > > monitors the
> > > > > > > other system as well.
> > > > > > >
> > > > > > > On Mon, 10 Sep 2018 at 00:07 David Harvey <[hidden email]>
> > > > wrote:
> > > > > > >
> > > > > > > > It would be safer to restart the entire cluster than to remove
> > > the
> > > > last
> > > > > > > > node for a cache that should be redundant.
> > > > > > > >
> > > > > > > > On Sun, Sep 9, 2018, 4:00 PM Andrey Gura <[hidden email]>
> > > wrote:
> > > > > > > >
> > > > > > > > > Hi,
> > > > > > > > >
> > > > > > > > > I agree with Yakov that we can provide some option that manage
> > > > worker
> > > > > > > > > liveness checker behavior in case of observing that some worker
> > > > is
> > > > > > > > > blocked too long.
> > > > > > > > > At least it will  some workaround for cases when node fails is
> > > > too
> > > > > > > > > annoying.
> > > > > > > > >
> > > > > > > > > Backups count threshold sounds good but I don't understand how
> > > it
> > > > > > will
> > > > > > > > > help in case of cluster hanging.
> > > > > > > > >
> > > > > > > > > The simplest solution here is alert in cases of blocking of
> > > some
> > > > > > > > > critical worker (we can improve WorkersRegistry for this
> > > purpose
> > > > and
> > > > > > > > > expose list of blocked workers) and optionally call system
> > > > configured
> > > > > > > > > failure processor. BTW, failure processor can be extended in
> > > > order to
> > > > > > > > > perform any checks (e.g. backup count) and decide whether it
> > > > should
> > > > > > > > > stop node or not.
> > > > > > > > > On Sat, Sep 8, 2018 at 3:42 PM Andrey Kuznetsov <
> > > > [hidden email]>
> > > > > > > > wrote:
> > > > > > > > > >
> > > > > > > > > > David, Yakov, I understand your fears. But liveness checks
> > > deal
> > > > > > with
> > > > > > > > > > _critical_ conditions, i.e. when such a condition is met we
> > > > > > conclude
> > > > > > > > the
> > > > > > > > > > node as totally broken, and there is no sense to keep it
> > > alive
> > > > > > > > regardless
> > > > > > > > > > the data it contains. If we want to give it a chance, then
> > > the
> > > > > > > > condition
> > > > > > > > > > (long fsync etc.) should not considered as critical at all.
> > > > > > > > > >
> > > > > > > > > > сб, 8 сент. 2018 г. в 15:18, Yakov Zhdanov <
> > > > [hidden email]>:
> > > > > > > > > >
> > > > > > > > > > > Agree with David. We need to have an opporunity set backups
> > > > count
> > > > > > > > > threshold
> > > > > > > > > > > (at runtime also!) that will not allow any automatic stop
> > > if
> > > > > > there
> > > > > > > > > will be
> > > > > > > > > > > a data loss. Andrey, what do you think?
> > > > > > > > > > >
> > > > > > > > > > > --Yakov
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > --
> > > > > > > > > > Best regards,
> > > > > > > > > >   Andrey Kuznetsov.
> > > > > > > > >
> > > > > > > >
> > > > > > > --
> > > > > > > --
> > > > > > > Maxim Muzafarov
> > > > > > >
> > > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Best regards,
> > > > >   Andrey Kuznetsov.
> > > >
> > >
> >
> >
> > --
> > Best regards,
> >   Andrey Kuznetsov.
>
>
>
> --
> Best Regards, Vyacheslav D.
Reply | Threaded
Open this post in threaded view
|

Re: Critical worker threads liveness checking drawbacks

daradurvs
Andrey Gura, thank you for the answer!

I agree that wrapping the whole 'init' method reduces the benefit of the
watchdog service in the case of the PME worker, but otherwise we should wrap
all possible long-running sections in GridDhtPartitionsExchangeFuture. For
example, the 'onCacheChangeRequest' method, or the
'cctx.affinity().onCacheChangeRequest' call inside it, because it may take
significant time (reproducer attached).

I only want to point out a possible issue which may allow an end user to
halt the Ignite cluster accidentally.

I'm sure that PME experts know how to fix this issue properly.
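
As a rough illustration of the scenario described above (not the attached
reproducer), here is a sketch of a cache whose store factory simulates a slow
3rd-party DB connection, which stalls the exchange worker during PME. Class
names and the sleep duration are invented for the example.

// Sketch only: a store factory that blocks on construction makes cache start
// (and therefore partition map exchange) take a long time.
import javax.cache.configuration.FactoryBuilder;

import org.apache.ignite.Ignite;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.configuration.CacheConfiguration;

public class SlowStorePmeExample {
    /** Store whose construction simulates connecting to a slow external DB. */
    public static class SlowStore extends CacheStoreAdapter<Integer, Integer> {
        public SlowStore() {
            try {
                Thread.sleep(60_000); // pretend we are connecting to a 3rd-party DB
            }
            catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }

        @Override public Integer load(Integer key) { return null; }

        @Override public void write(javax.cache.Cache.Entry<? extends Integer, ? extends Integer> e) { }

        @Override public void delete(Object key) { }
    }

    public static void startCache(Ignite ignite) {
        CacheConfiguration<Integer, Integer> ccfg = new CacheConfiguration<>("slow-store-cache");

        ccfg.setCacheStoreFactory(FactoryBuilder.factoryOf(SlowStore.class));

        // Cache start triggers PME; the store is instantiated during exchange,
        // so the exchange worker may look "blocked" to the liveness checker.
        ignite.getOrCreateCache(ccfg);
    }
}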
On Wed, Sep 26, 2018 at 10:28 PM Andrey Gura <[hidden email]> wrote:

>
> Vyacheslav,
>
> Exchange worker is strongly tied with
> GridDhtPartitionExchangeFuture#init and it is ok. Exchange worker also
> shouldn't be blocked for long time but in reality it happens.It also
> means that your change doesn't make sense.
>
> What actually make sense it is identification of places which
> intentionally blocking. May be some places/actions should be braced by
> blocking guards.
>
> If you have failing tests please make sure that your failureHandler is
> NoOpFailureHandler or any other handler with ignoreFailureTypes =
> [CRITICAL_WORKER_BLOCKED].
>
>
> On Wed, Sep 26, 2018 at 9:43 PM Vyacheslav Daradur <[hidden email]> wrote:
> >
> > Hi Igniters!
> >
> > Thank you for this important improvement!
> >
> > I've looked through implementation and noticed that
> > GridDhtPartitionsExchangeFuture#init has not been wrapped in blocked
> > section. This means it easy to halt the node in case of longrunning
> > actions during PME, for example when we create a cache with
> > StoreFactrory which connect to 3rd party DB.
> >
> > I'm not sure that it is the right behavior.
> >
> > I filled the issue [1] and prepared the PR [2] with reproducer and possible fix.
> >
> > Andrey, could you please look at and confirm that it makes sense?
> >
> > [1] https://issues.apache.org/jira/browse/IGNITE-9710
> > [2] https://github.com/apache/ignite/pull/4845
> > On Mon, Sep 24, 2018 at 9:46 PM Andrey Kuznetsov <[hidden email]> wrote:
> > >
> > > Denis,
> > >
> > > I've created the ticket [1] with short description of the functionality.
> > >
> > > [1] https://issues.apache.org/jira/browse/IGNITE-9679
> > >
> > >
> > > пн, 24 сент. 2018 г. в 17:46, Denis Magda <[hidden email]>:
> > >
> > > > Andrey K. and G.,
> > > >
> > > > Thanks, do we have a documentation ticket created? Prachi (copied) can help
> > > > with the documentation.
> > > >
> > > > --
> > > > Denis
> > > >
> > > > On Mon, Sep 24, 2018 at 5:51 AM Andrey Gura <[hidden email]> wrote:
> > > >
> > > > > Andrey,
> > > > >
> > > > > finally your change is merged to master branch. Congratulations and
> > > > > thank you very much! :)
> > > > >
> > > > > I think that the next step is feature that will allow signal about
> > > > > blocked threads to the monitoring tools via MXBean.
> > > > >
> > > > > I hope you will continue development of this feature and provide your
> > > > > vision in new JIRA issue.
> > > > >
> > > > >
> > > > > On Tue, Sep 11, 2018 at 6:54 PM Andrey Kuznetsov <[hidden email]>
> > > > > wrote:
> > > > > >
> > > > > > David, Maxim!
> > > > > >
> > > > > > Thanks a lot for you ideas. Unfortunately, I can't adopt all of them
> > > > > right
> > > > > > now: the scope is much broader than the scope of the change I
> > > > implement.
> > > > > I
> > > > > > have had a talk to a group of Ignite commiters, and we agreed to
> > > > complete
> > > > > > the change as follows.
> > > > > > - Blocking instructions in system-critical which may resonably last
> > > > long
> > > > > > should be explicitly excluded from the monitoring.
> > > > > > - Failure handlers should have a setting to suppress some failures on
> > > > > > per-failure-type basis.
> > > > > > According to this I have updated the implementation: [1]
> > > > > >
> > > > > > [1] https://github.com/apache/ignite/pull/4089
> > > > > >
> > > > > > пн, 10 сент. 2018 г. в 22:35, David Harvey <[hidden email]>:
> > > > > >
> > > > > > > When I've done this before,I've needed to find the oldest  thread,
> > > > and
> > > > > kill
> > > > > > > the node running that.   From a language standpoint, Maxim's "without
> > > > > > > progress" better than "heartbeat".   For example, what I'm most
> > > > > interested
> > > > > > > in on a distributed system is which thread started the work it has
> > > > not
> > > > > > > completed the earliest, and when did that thread last make forward
> > > > > > > process.     You don't want to kill a node because a thread is
> > > > waiting
> > > > > on a
> > > > > > > lock held by a thread that went off-node and has not gotten a
> > > > response.
> > > > > > > If you don't understand the dependency relationships, you will make
> > > > > > > incorrect recovery decisions.
> > > > > > >
> > > > > > > On Mon, Sep 10, 2018 at 4:08 AM Maxim Muzafarov <[hidden email]>
> > > > > > > wrote:
> > > > > > >
> > > > > > > > I think we should find exact answers to these questions:
> > > > > > > >  1. What `critical` issue exactly is?
> > > > > > > >  2. How can we find critical issues?
> > > > > > > >  3. How can we handle critical issues?
> > > > > > > >
> > > > > > > > First,
> > > > > > > >  - Ignore uninterruptable actions (e.g. worker\service shutdown)
> > > > > > > >  - Long I/O operations (should be a configurable timeout for each
> > > > > type of
> > > > > > > > usage)
> > > > > > > >  - Infinite loops
> > > > > > > >  - Stalled\deadlocked threads (and\or too many parked threads,
> > > > > exclude
> > > > > > > I/O)
> > > > > > > >
> > > > > > > > Second,
> > > > > > > >  - The working queue is without progress (e.g. disco, exchange
> > > > > queues)
> > > > > > > >  - Work hasn't been completed since the last heartbeat (checking
> > > > > > > > milestones)
> > > > > > > >  - Too many system resources used by a thread for the long period
> > > > of
> > > > > time
> > > > > > > > (allocated memory, CPU)
> > > > > > > >  - Timing fields associated with each thread status exceeded a
> > > > > maximum
> > > > > > > time
> > > > > > > > limit.
> > > > > > > >
> > > > > > > > Third (not too many options here),
> > > > > > > >  - `log everything` should be the default behaviour in all these
> > > > > cases,
> > > > > > > > since it may be difficult to find the cause after the restart.
> > > > > > > >  - Wait some interval of time and kill the hanging node (cluster
> > > > > should
> > > > > > > be
> > > > > > > > configured stable enough)
> > > > > > > >
> > > > > > > > Questions,
> > > > > > > >  - Not sure, but can workers miss their heartbeat deadlines if CPU
> > > > > loads
> > > > > > > up
> > > > > > > > to 80%-90%? Bursts of momentary overloads can be
> > > > > > > >     expected behaviour as a normal part of system operations.
> > > > > > > >  - Why do we decide that critical thread should monitor each other?
> > > > > For
> > > > > > > > instance, if all the tasks were blocked and unable to run,
> > > > > > > >     node reset would never occur. As for me, a better solution is
> > > > to
> > > > > use
> > > > > > > a
> > > > > > > > separate monitor thread or pool (maybe both with software
> > > > > > > >     and hardware checks) that not only checks heartbeats but
> > > > > monitors the
> > > > > > > > other system as well.
> > > > > > > >
> > > > > > > > On Mon, 10 Sep 2018 at 00:07 David Harvey <[hidden email]>
> > > > > wrote:
> > > > > > > >
> > > > > > > > > It would be safer to restart the entire cluster than to remove
> > > > the
> > > > > last
> > > > > > > > > node for a cache that should be redundant.
> > > > > > > > >
> > > > > > > > > On Sun, Sep 9, 2018, 4:00 PM Andrey Gura <[hidden email]>
> > > > wrote:
> > > > > > > > >
> > > > > > > > > > Hi,
> > > > > > > > > >
> > > > > > > > > > I agree with Yakov that we can provide some option that manage
> > > > > worker
> > > > > > > > > > liveness checker behavior in case of observing that some worker
> > > > > is
> > > > > > > > > > blocked too long.
> > > > > > > > > > At least it will  some workaround for cases when node fails is
> > > > > too
> > > > > > > > > > annoying.
> > > > > > > > > >
> > > > > > > > > > Backups count threshold sounds good but I don't understand how
> > > > it
> > > > > > > will
> > > > > > > > > > help in case of cluster hanging.
> > > > > > > > > >
> > > > > > > > > > The simplest solution here is alert in cases of blocking of
> > > > some
> > > > > > > > > > critical worker (we can improve WorkersRegistry for this
> > > > purpose
> > > > > and
> > > > > > > > > > expose list of blocked workers) and optionally call system
> > > > > configured
> > > > > > > > > > failure processor. BTW, failure processor can be extended in
> > > > > order to
> > > > > > > > > > perform any checks (e.g. backup count) and decide whether it
> > > > > should
> > > > > > > > > > stop node or not.
> > > > > > > > > > On Sat, Sep 8, 2018 at 3:42 PM Andrey Kuznetsov <
> > > > > [hidden email]>
> > > > > > > > > wrote:
> > > > > > > > > > >
> > > > > > > > > > > David, Yakov, I understand your fears. But liveness checks
> > > > deal
> > > > > > > with
> > > > > > > > > > > _critical_ conditions, i.e. when such a condition is met we
> > > > > > > conclude
> > > > > > > > > the
> > > > > > > > > > > node as totally broken, and there is no sense to keep it
> > > > alive
> > > > > > > > > regardless
> > > > > > > > > > > the data it contains. If we want to give it a chance, then
> > > > the
> > > > > > > > > condition
> > > > > > > > > > > (long fsync etc.) should not considered as critical at all.
> > > > > > > > > > >
> > > > > > > > > > > сб, 8 сент. 2018 г. в 15:18, Yakov Zhdanov <
> > > > > [hidden email]>:
> > > > > > > > > > >
> > > > > > > > > > > > Agree with David. We need to have an opporunity set backups
> > > > > count
> > > > > > > > > > threshold
> > > > > > > > > > > > (at runtime also!) that will not allow any automatic stop
> > > > if
> > > > > > > there
> > > > > > > > > > will be
> > > > > > > > > > > > a data loss. Andrey, what do you think?
> > > > > > > > > > > >
> > > > > > > > > > > > --Yakov
> > > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > --
> > > > > > > > > > > Best regards,
> > > > > > > > > > >   Andrey Kuznetsov.
> > > > > > > > > >
> > > > > > > > >
> > > > > > > > --
> > > > > > > > --
> > > > > > > > Maxim Muzafarov
> > > > > > > >
> > > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > Best regards,
> > > > > >   Andrey Kuznetsov.
> > > > >
> > > >
> > >
> > >
> > > --
> > > Best regards,
> > >   Andrey Kuznetsov.
> >
> >
> >
> > --
> > Best Regards, Vyacheslav D.



--
Best Regards, Vyacheslav D.
Reply | Threaded
Open this post in threaded view
|

Re: Critical worker threads liveness checking drawbacks

Mmuzaf
Folks,

I've found in `GridCachePartitionExchangeManager:2684` [1] (master branch) a
wait on an exchange future wrapped
with two `blockingSectionEnd` calls. Is that correct? I just want to
understand this change and
how I should use this API in the future.

Should I file a new issue to fix this? I think the `blockingSectionBegin`
method should be used here.

-------------
blockingSectionEnd();

try {
    resVer = exchFut.get(exchTimeout, TimeUnit.MILLISECONDS);
} finally {
    blockingSectionEnd();
}
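
Presumably the intended form is the following (a sketch of the suggested fix,
not the committed change): open the known-blocking wait with
`blockingSectionBegin` and close it in the finally block.

-------------
blockingSectionBegin();

try {
    resVer = exchFut.get(exchTimeout, TimeUnit.MILLISECONDS);
} finally {
    blockingSectionEnd();
}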


[1]
https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCachePartitionExchangeManager.java#L2684

On Wed, 26 Sep 2018 at 22:47 Vyacheslav Daradur <[hidden email]> wrote:

> Andrey Gura, thank you for the answer!
>
> I agree that wrapping of 'init' method reduces the profit of watchdog
> service in case of PME worker, but in other cases, we should wrap all
> possible long sections on GridDhtPartitionExchangeFuture. For example
> 'onCacheChangeRequest' method or
> 'cctx.affinity().onCacheChangeRequest' inside because it may take
> significant time (reproducer attached).
>
> I only want to point out a possible issue which may allow to end-user
> halt the Ignite cluster accidentally.
>
> I'm sure that PME experts know how to fix this issue properly.
> On Wed, Sep 26, 2018 at 10:28 PM Andrey Gura <[hidden email]> wrote:
> >
> > Vyacheslav,
> >
> > Exchange worker is strongly tied with
> > GridDhtPartitionExchangeFuture#init and it is ok. Exchange worker also
> > shouldn't be blocked for long time but in reality it happens.It also
> > means that your change doesn't make sense.
> >
> > What actually make sense it is identification of places which
> > intentionally blocking. May be some places/actions should be braced by
> > blocking guards.
> >
> > If you have failing tests please make sure that your failureHandler is
> > NoOpFailureHandler or any other handler with ignoreFailureTypes =
> > [CRITICAL_WORKER_BLOCKED].
> >
> >
> > On Wed, Sep 26, 2018 at 9:43 PM Vyacheslav Daradur <[hidden email]>
> wrote:
> > >
> > > Hi Igniters!
> > >
> > > Thank you for this important improvement!
> > >
> > > I've looked through implementation and noticed that
> > > GridDhtPartitionsExchangeFuture#init has not been wrapped in blocked
> > > section. This means it easy to halt the node in case of longrunning
> > > actions during PME, for example when we create a cache with
> > > StoreFactrory which connect to 3rd party DB.
> > >
> > > I'm not sure that it is the right behavior.
> > >
> > > I filled the issue [1] and prepared the PR [2] with reproducer and
> possible fix.
> > >
> > > Andrey, could you please look at and confirm that it makes sense?
> > >
> > > [1] https://issues.apache.org/jira/browse/IGNITE-9710
> > > [2] https://github.com/apache/ignite/pull/4845
> > > On Mon, Sep 24, 2018 at 9:46 PM Andrey Kuznetsov <[hidden email]>
> wrote:
> > > >
> > > > Denis,
> > > >
> > > > I've created the ticket [1] with short description of the
> functionality.
> > > >
> > > > [1] https://issues.apache.org/jira/browse/IGNITE-9679
> > > >
> > > >
> > > > пн, 24 сент. 2018 г. в 17:46, Denis Magda <[hidden email]>:
> > > >
> > > > > Andrey K. and G.,
> > > > >
> > > > > Thanks, do we have a documentation ticket created? Prachi (copied)
> can help
> > > > > with the documentation.
> > > > >
> > > > > --
> > > > > Denis
> > > > >
> > > > > On Mon, Sep 24, 2018 at 5:51 AM Andrey Gura <[hidden email]>
> wrote:
> > > > >
> > > > > > Andrey,
> > > > > >
> > > > > > finally your change is merged to master branch. Congratulations
> and
> > > > > > thank you very much! :)
> > > > > >
> > > > > > I think that the next step is feature that will allow signal
> about
> > > > > > blocked threads to the monitoring tools via MXBean.
> > > > > >
> > > > > > I hope you will continue development of this feature and provide
> your
> > > > > > vision in new JIRA issue.
> > > > > >
> > > > > >
> > > > > > On Tue, Sep 11, 2018 at 6:54 PM Andrey Kuznetsov <
> [hidden email]>
> > > > > > wrote:
> > > > > > >
> > > > > > > David, Maxim!
> > > > > > >
> > > > > > > Thanks a lot for you ideas. Unfortunately, I can't adopt all
> of them
> > > > > > right
> > > > > > > now: the scope is much broader than the scope of the change I
> > > > > implement.
> > > > > > I
> > > > > > > have had a talk to a group of Ignite commiters, and we agreed
> to
> > > > > complete
> > > > > > > the change as follows.
> > > > > > > - Blocking instructions in system-critical which may resonably
> last
> > > > > long
> > > > > > > should be explicitly excluded from the monitoring.
> > > > > > > - Failure handlers should have a setting to suppress some
> failures on
> > > > > > > per-failure-type basis.
> > > > > > > According to this I have updated the implementation: [1]
> > > > > > >
> > > > > > > [1] https://github.com/apache/ignite/pull/4089
> > > > > > >
> > > > > > > пн, 10 сент. 2018 г. в 22:35, David Harvey <
> [hidden email]>:
> > > > > > >
> > > > > > > > When I've done this before,I've needed to find the oldest
> thread,
> > > > > and
> > > > > > kill
> > > > > > > > the node running that.   From a language standpoint, Maxim's
> "without
> > > > > > > > progress" better than "heartbeat".   For example, what I'm
> most
> > > > > > interested
> > > > > > > > in on a distributed system is which thread started the work
> it has
> > > > > not
> > > > > > > > completed the earliest, and when did that thread last make
> forward
> > > > > > > > process.     You don't want to kill a node because a thread
> is
> > > > > waiting
> > > > > > on a
> > > > > > > > lock held by a thread that went off-node and has not gotten a
> > > > > response.
> > > > > > > > If you don't understand the dependency relationships, you
> will make
> > > > > > > > incorrect recovery decisions.
> > > > > > > >
> > > > > > > > On Mon, Sep 10, 2018 at 4:08 AM Maxim Muzafarov <
> [hidden email]>
> > > > > > > > wrote:
> > > > > > > >
> > > > > > > > > I think we should find exact answers to these questions:
> > > > > > > > >  1. What `critical` issue exactly is?
> > > > > > > > >  2. How can we find critical issues?
> > > > > > > > >  3. How can we handle critical issues?
> > > > > > > > >
> > > > > > > > > First,
> > > > > > > > >  - Ignore uninterruptable actions (e.g. worker\service
> shutdown)
> > > > > > > > >  - Long I/O operations (should be a configurable timeout
> for each
> > > > > > type of
> > > > > > > > > usage)
> > > > > > > > >  - Infinite loops
> > > > > > > > >  - Stalled\deadlocked threads (and\or too many parked
> threads,
> > > > > > exclude
> > > > > > > > I/O)
> > > > > > > > >
> > > > > > > > > Second,
> > > > > > > > >  - The working queue is without progress (e.g. disco,
> exchange
> > > > > > queues)
> > > > > > > > >  - Work hasn't been completed since the last heartbeat
> (checking
> > > > > > > > > milestones)
> > > > > > > > >  - Too many system resources used by a thread for the long
> period
> > > > > of
> > > > > > time
> > > > > > > > > (allocated memory, CPU)
> > > > > > > > >  - Timing fields associated with each thread status
> exceeded a
> > > > > > maximum
> > > > > > > > time
> > > > > > > > > limit.
> > > > > > > > >
> > > > > > > > > Third (not too many options here),
> > > > > > > > >  - `log everything` should be the default behaviour in all
> these
> > > > > > cases,
> > > > > > > > > since it may be difficult to find the cause after the
> restart.
> > > > > > > > >  - Wait some interval of time and kill the hanging node
> (cluster
> > > > > > should
> > > > > > > > be
> > > > > > > > > configured stable enough)
> > > > > > > > >
> > > > > > > > > Questions,
> > > > > > > > >  - Not sure, but can workers miss their heartbeat
> deadlines if CPU
> > > > > > loads
> > > > > > > > up
> > > > > > > > > to 80%-90%? Bursts of momentary overloads can be
> > > > > > > > >     expected behaviour as a normal part of system
> operations.
> > > > > > > > >  - Why do we decide that critical thread should monitor
> each other?
> > > > > > For
> > > > > > > > > instance, if all the tasks were blocked and unable to run,
> > > > > > > > >     node reset would never occur. As for me, a better
> solution is
> > > > > to
> > > > > > use
> > > > > > > > a
> > > > > > > > > separate monitor thread or pool (maybe both with software
> > > > > > > > >     and hardware checks) that not only checks heartbeats
> but
> > > > > > monitors the
> > > > > > > > > other system as well.
> > > > > > > > >
> > > > > > > > > On Mon, 10 Sep 2018 at 00:07 David Harvey <
> [hidden email]>
> > > > > > wrote:
> > > > > > > > >
> > > > > > > > > > It would be safer to restart the entire cluster than to
> remove
> > > > > the
> > > > > > last
> > > > > > > > > > node for a cache that should be redundant.
> > > > > > > > > >
> > > > > > > > > > On Sun, Sep 9, 2018, 4:00 PM Andrey Gura <
> [hidden email]>
> > > > > wrote:
> > > > > > > > > >
> > > > > > > > > > > Hi,
> > > > > > > > > > >
> > > > > > > > > > > I agree with Yakov that we can provide some option
> that manage
> > > > > > worker
> > > > > > > > > > > liveness checker behavior in case of observing that
> some worker
> > > > > > is
> > > > > > > > > > > blocked too long.
> > > > > > > > > > > At least it will  some workaround for cases when node
> fails is
> > > > > > too
> > > > > > > > > > > annoying.
> > > > > > > > > > >
> > > > > > > > > > > Backups count threshold sounds good but I don't
> understand how
> > > > > it
> > > > > > > > will
> > > > > > > > > > > help in case of cluster hanging.
> > > > > > > > > > >
> > > > > > > > > > > The simplest solution here is alert in cases of
> blocking of
> > > > > some
> > > > > > > > > > > critical worker (we can improve WorkersRegistry for
> this
> > > > > purpose
> > > > > > and
> > > > > > > > > > > expose list of blocked workers) and optionally call
> system
> > > > > > configured
> > > > > > > > > > > failure processor. BTW, failure processor can be
> extended in
> > > > > > order to
> > > > > > > > > > > perform any checks (e.g. backup count) and decide
> whether it
> > > > > > should
> > > > > > > > > > > stop node or not.
> > > > > > > > > > > On Sat, Sep 8, 2018 at 3:42 PM Andrey Kuznetsov <
> > > > > > [hidden email]>
> > > > > > > > > > wrote:
> > > > > > > > > > > >
> > > > > > > > > > > > David, Yakov, I understand your fears. But liveness
> checks
> > > > > deal
> > > > > > > > with
> > > > > > > > > > > > _critical_ conditions, i.e. when such a condition is
> met we
> > > > > > > > conclude
> > > > > > > > > > the
> > > > > > > > > > > > node as totally broken, and there is no sense to
> keep it
> > > > > alive
> > > > > > > > > > regardless
> > > > > > > > > > > > the data it contains. If we want to give it a
> chance, then
> > > > > the
> > > > > > > > > > condition
> > > > > > > > > > > > (long fsync etc.) should not considered as critical
> at all.
> > > > > > > > > > > >
> > > > > > > > > > > > сб, 8 сент. 2018 г. в 15:18, Yakov Zhdanov <
> > > > > > [hidden email]>:
> > > > > > > > > > > >
> > > > > > > > > > > > > Agree with David. We need to have an opporunity
> set backups
> > > > > > count
> > > > > > > > > > > threshold
> > > > > > > > > > > > > (at runtime also!) that will not allow any
> automatic stop
> > > > > if
> > > > > > > > there
> > > > > > > > > > > will be
> > > > > > > > > > > > > a data loss. Andrey, what do you think?
> > > > > > > > > > > > >
> > > > > > > > > > > > > --Yakov
> > > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > --
> > > > > > > > > > > > Best regards,
> > > > > > > > > > > >   Andrey Kuznetsov.
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > --
> > > > > > > > > --
> > > > > > > > > Maxim Muzafarov
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > --
> > > > > > > Best regards,
> > > > > > >   Andrey Kuznetsov.
> > > > > >
> > > > >
> > > >
> > > >
> > > > --
> > > > Best regards,
> > > >   Andrey Kuznetsov.
> > >
> > >
> > >
> > > --
> > > Best Regards, Vyacheslav D.
>
>
>
> --
> Best Regards, Vyacheslav D.
>
--
--
Maxim Muzafarov