Does anyone have an idea what could cause this behavior?

The issue: my customer has an R80.10 cluster (appliance model 5800, in HA mode), where the synchronization interface between the members is connected with a direct cable. Every day the sync interface flaps and member 2 (in Standby) tries to take over the Active state of the cluster, and most of the time some VPNs drop in the same minute.

In /var/log/messages I always get the same log structure:

Sep 27 13:37:10 2018 fw02 kernel: fwha_report_id_problem_status: Try to update state to DOWN due to pnote Interface Active Check (desc eth8 interface is down, 8 interfaces required, only 7 up)
Sep 27 13:37:10 2018 fw02 kernel: FW-1: fwha_set_new_local_state: Setting state of fwha_local_id(1) to DOWN
Sep 27 13:37:10 2018 fw02 kernel: FW-1: fwha_update_local_state: Local machine state changed to DOWN
Sep 27 13:37:10 2018 fw02 kernel: fwha_state_change_implied: Try to update state to ACTIVE because member is down (the change may not be allowed).
Sep 27 13:37:11 2018 fw02 kernel: check_other_machine_activity: Update state of member id 0 to DEAD, didn't hear from it since 2021025.4 and now 2021028.4
Sep 27 13:37:11 2018 fw02 kernel: fwha_set_backup_mode: Try to update local state to ACTIVE because of ID 0 is not ACTIVE or READY.
Sep 27 13:37:11 2018 fw02 kernel: FW-1: fwha_set_new_local_state: Setting state of fwha_local_id(1) to READY (This attempt may be blocked by other machines)
Sep 27 13:37:11 2018 fw02 kernel: FW-1: fwha_update_local_state: Local machine state changed to READY
Sep 27 13:37:11 2018 fw02 kernel: FW-1: fwha_state_change_implied: Try to update local state from READY to ACTIVE because all other machines confirmed my READY state
Sep 27 13:37:11 2018 fw02 kernel: FW-1: fwha_set_new_local_state: Setting state of fwha_local_id(1) to ACTIVE
Sep 27 13:37:11 2018 fw02 kernel: FW-1: fwha_update_local_state: Local machine state changed to ACTIVE
Sep 27 13:37:12 2018 fw02 kernel: fwha_report_id_problem_status: Try to update state to ACTIVE due to pnote Interface Active Check (desc )
Sep 27 13:37:12 2018 fw02 kernel: FW-1: fwha_process_state_msg: Update state of member id 0 to ACTIVE due to the member report message
Sep 27 13:37:12 2018 fw02 kernel: fwha_set_backup_mode: Try to update local state to STANDBY because of ID 0 is ACTIVE or READY and with higher priority
Sep 27 13:37:12 2018 fw02 kernel: FW-1: fwha_set_new_local_state: Setting state of fwha_local_id(1) to STANDBY
Sep 27 13:37:12 2018 fw02 kernel: FW-1: fwha_update_local_state: Local machine state changed to STANDBY

Note: so far I have tried the following changes:
- Switched the cable that connects the cluster members
- Moved the synchronization interface from SYNC to ETH8
- Changed the CCP mode from multicast to broadcast

After we installed the take_154 jumbo hotfix, the problems with the VPNs were resolved, but the synchronization interface is still flapping and the cluster members keep trying to change state.

Flapping of the sync interface + cluster members trying to change state:

Nov 27 14:04:12 2018 fw02 kernel: fwha_report_id_problem_status: Try to update state to DOWN due to pnote Interface Active Check (desc eth8 interface is down, 8 interfaces required, only 7 up)
Nov 27 14:04:12 2018 fw02 kernel: FW-1: fwha_set_new_local_state: Setting state of fwha_local_id(1) to DOWN
Nov 27 14:04:12 2018 fw02 kernel: FW-1: fwha_update_local_state: Local machine state changed to DOWN
Nov 27 14:04:12 2018 fw02 kernel: fwha_state_change_implied: Try to update state to ACTIVE because member is down (the change may not be allowed).
Nov 27 14:04:13 2018 fw02 kernel: check_other_machine_activity: Update state of member id 0 to DEAD, didn't hear from it since 403715.3 and now 403718.3

Currently I have a ticket open with TAC for this case, but so far nothing, so I decided to hear other opinions for a while. hahah
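For anyone hitting something similar: before (or while) escalating to TAC, it is worth capturing the cluster and NIC state on both members at the moment of a flap. These are standard ClusterXL/GAiA CLI commands; `eth8` is the sync interface name taken from the logs above, so substitute your own interface. This is a diagnostic checklist, not a fix:

```shell
# Run on BOTH cluster members, ideally right after a flap event.
cphaprob stat        # overall cluster state as seen by this member
cphaprob -a if       # monitored interfaces and the CCP mode per interface
cphaprob list        # pnote devices, incl. the "Interface Active Check" seen in the logs
ethtool eth8         # link state / speed / duplex of the sync NIC
ethtool -S eth8      # NIC counters: look for CRC, carrier and dropped-frame errors
```

If `ethtool -S` shows CRC or carrier errors climbing on only one side, the cable/NIC/port is the first suspect; if the counters are clean on both sides, the problem is more likely in CCP delivery or member load.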
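It can also help to quantify the flapping before guessing at causes. A minimal sketch (the `count_flaps` helper name is mine, not a Check Point tool) that counts, per day, how often the "Interface Active Check" pnote reported the interface down, assuming the exact log format shown above:

```shell
# count_flaps LOGFILE
# Counts per-day occurrences of the "Interface Active Check ... interface is down"
# pnote in a /var/log/messages-style file ("Mon DD HH:MM:SS YYYY host kernel: ...").
count_flaps() {
  grep 'pnote Interface Active Check' "$1" \
    | grep 'interface is down' \
    | awk '{print $1, $2, $4}' \
    | sort | uniq -c | sort -rn
}
```

Usage: `count_flaps /var/log/messages`. If the events cluster around the same time every day, correlate with scheduled jobs (backups, policy installs, monitoring polls) rather than hardware.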