Thanks Denis,
We do the configuration via Java, so I hope that the configuration dump from the log contains all the info required:
(The IgniteConfiguration dump pasted here was largely stripped by the forum editor. The fragments that survived: node address 3.94.163.90, igniteHome=/opt/ignite, work directory /opt/ignite/work, TcpCommunicationSpi with FirstConnectionPolicy, NoopEventStorageSpi, NoopIndexingSpi, NoopEncryptionSpi, connector port=11211, backups=1, AsyncFileIOFactory for the persistence file I/O, client connector port=10800.)
I attached two log files - one from the first instance, the other from the new instance.
To verify, we went back from GridGain 8.7.10 to Ignite 2.7.6 - but we see exactly the same behavior.
The CacheConfigurations are set via (method and variable names re-typed here, since the editor stripped them above):

import javax.annotation.Nonnull;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.CacheWriteSynchronizationMode;
import org.apache.ignite.configuration.CacheConfiguration;

// NB: a few setter names below are reconstructed - the forum editor cut them off.
private CacheConfiguration<?, ?> configure(@Nonnull CacheConfiguration<?, ?> cc) {
    cc.setCacheMode(CacheMode.REPLICATED);
    cc.setBackups(1);
    cc.setAtomicityMode(CacheAtomicityMode.ATOMIC);
    cc.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
    cc.setRebalanceDelay(0);
    cc.setStatisticsEnabled(true);
    return cc;
}
Looking at the GridGain Web Console, I noticed that the new instance is not "in baseline" (see attachment)
and that only the first instance has persistent storage.
The docs mention that for replication/rebalancing the nodes need to be in the same baseline topology, without mentioning how to do that
(I had hoped that this happens automatically). On IgniteCluster there are some topology methods - do I need to update the baseline from the application?
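Would something along these lines be the right way to do it from the application once the new node has joined? (Just a sketch based on the public IgniteCluster API - I haven't tried it yet, and the method name and trigger point are placeholders.)

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCluster;

// Sketch: include all currently running server nodes in the baseline topology.
static void extendBaseline(Ignite ignite) {
    IgniteCluster cluster = ignite.cluster();

    // With persistence enabled the cluster must be active before the baseline can change.
    if (!cluster.active())
        cluster.active(true);

    // Set the baseline to the current topology version, i.e. to all server
    // nodes that are part of the cluster right now.
    cluster.setBaselineTopology(cluster.topologyVersion());
}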
Thanks again!
------------------------------
Jorg
------------------------------
Original Message:
Sent: 01-07-2020 04:12 PM
From: Denis Magda
Subject: How can I determine if/that a node is replicated??
Hi Jorg,
Ignite behaves similarly in the embedded mode. The only implication is that you'll be restarting a server node each time you need to redeploy your application (Tomcat instance), triggering rebalancing. You are just coupling the storage (Ignite) and application maintenance lifecycles, but this might be perfectly fine for your use case.
As for the replicated caches, those are still partitioned caches in the sense that one node stores the primary copy of a partition while all the others keep a backup copy. By default, all requests will still be routed to the node that holds the primary copy, and for key-value APIs you can use the readFromBackup
parameter to allow reading from the backup copies on other Tomcat instances. However, this parameter has no effect for SQL queries, which are executed only against the primary copies of the partitions. This page explains partitioned caches in more detail.
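For example, on the key-value side the parameter lives on the cache configuration (illustrative snippet only; the cache name is a placeholder):

import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

// Allow key-value reads to be served from local backup copies.
CacheConfiguration<String, Object> cfg = new CacheConfiguration<>("myCache");
cfg.setCacheMode(CacheMode.REPLICATED);
cfg.setReadFromBackup(true); // no effect on SQL, which always uses the primary copies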
Please attach all the log files you have and Ignite configuration so that either I or my colleagues can check what's going on with your cluster.
------------------------------
Denis
Original Message:
Sent: 01-07-2020 03:07 PM
From: Jorg Janke
Subject: How can I determine if/that a node is replicated??
Thanks Denis,
It seems that the rebalancing process is not started. For both the old and the new server,
RebalancingStartTime = -1 (assuming that the rebalancing process also does the replication).
Both servers have Ignition.isClientMode = false and .isDaemon = false, with Ignition.state(nodeName) as STARTED.
There is also no thread with something like "rebalance" on either machine.
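(For reference, these checks were done roughly like this - a sketch, with the grid name as a placeholder:)

import org.apache.ignite.IgniteState;
import org.apache.ignite.Ignition;

boolean client = Ignition.isClientMode();        // false on both servers
boolean daemon = Ignition.isDaemon();            // false on both servers
IgniteState state = Ignition.state("nodeName");  // STARTED on both servers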
In the log of the new instance, I found:
[2020-01-07T19:36:45,997][INFO ][exchange-worker-#39%blue-54.158.100.161%][GridCachePartitionExchangeManager] Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion [topVer=2, minorTopVer=0], force=false, evt=NODE_JOINED, node=57bc10fe-1505-4e8e-9987-52c9c903c6ef]
What do I need to do to schedule the rebalancing?
It seems you suggest using dedicated servers - although the embedded configuration seems to support exactly this use case.
The general use case is to use Ignite as a Tomcat cache: the cache should be maintained regardless of how many nodes there are, even when the initial instances are dropped.
Is it an invalid assumption that embedded Ignite instances maintain/transfer state between instances?
------------------------------
Jorg
Original Message:
Sent: 01-06-2020 01:11 PM
From: Denis Magda
Subject: How can I determine if/that a node is replicated??
Jorg, thanks for the details. I would recommend reviewing the "Client-Server" deployment option as well. That mode can simplify the maintenance of the Ignite/GridGain cluster once you are in production.
------------------------------
Denis
Original Message:
Sent: 01-06-2020 01:01 PM
From: Jorg Janke
Subject: How can I determine if/that a node is replicated??
Thanks Denis - will try to get the metrics.
It is an embedded instance (AWS Elastic Beanstalk - Tomcat) with 1-4 nodes based on demand.
The caches are replicated because AWS may kill any instance - usually the oldest one - so to keep the data it needs to be on the remaining instances as well. The data volume might be 500 MB.
------------------------------
Jorg
Original Message:
Sent: 01-06-2020 12:16 PM
From: Denis Magda
Subject: How can I determine if/that a node is replicated??
Most likely the data rebalancing didn't finish by the time you shut down the node. Use these metrics to monitor the rebalancing process.
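From the API side this boils down to polling the cache metrics on the new node, roughly like this (a sketch; it assumes cache statistics are enabled, and the cache and method names are placeholders):

import org.apache.ignite.Ignite;
import org.apache.ignite.cache.CacheMetrics;

// Sketch: check rebalancing progress for one cache on the local node.
static void printRebalanceState(Ignite ignite) {
    CacheMetrics m = ignite.cache("myCache").metrics();

    // -1 means rebalancing has not started for this cache on this node.
    System.out.println("rebalancing start time: " + m.getRebalancingStartTime());
    System.out.println("keys left to rebalance: " + m.getKeysToRebalanceLeft());
}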
Also, could you tell a bit more about your configuration - how many nodes you are targeting to have, how much data to store and why the caches are replicated and not partitioned?
------------------------------
Denis
Original Message:
Sent: 01-05-2020 03:09 AM
From: Jorg Janke
Subject: How can I determine if/that a node is replicated??
... sorry, it fails with
.sql: Failed to find data nodes for cache:
... very odd - the editor removed the exception text -
Original Message:
Sent: 01-05-2020 02:04 AM
From: Jorg Janke
Subject: How can I determine if/that a node is replicated??
- All caches, instances and SQL tables are set up with REPLICATED.
- On instance replacement in AWS, a new instance is started while the old one is still running.
- Both instances are available for about 5 minutes. The system works fine.
- After the old instance is terminated, the SQL queries fail with:
Failed to map keys for cache (all partition nodes left the grid).
- Even after a few hours of having both instances up, nothing seems to be replicated to the second instance.
The caches do get started on the new node, e.g.
[2020-01-04T23:03:01,461][INFO ][exchange-worker-#39%blue-3.208.31.170%][GridCacheProcessor] Started cache [id=-2066699933, backups=2147483647, ...]
(1) How can I monitor that the cache instances and tables are replicated to the new node?
(2) Is there a way to say that replication is the #1 priority for the new node?
(3) Per SQL table, the CREATE statement has to include a WITH "..." clause (the editor removed the actual string; an illustrative example is below).
Is there a way to set this and other "WITH" parts for tables as a default?
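To be clear, by the WITH part I mean something along these lines (illustrative only, since the editor removed the actual string above - the table, columns and options are placeholders):

import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

// Illustrative: CREATE TABLE with a WITH clause, executed through the SQL API.
static void createTable(IgniteCache<?, ?> cache) {
    cache.query(new SqlFieldsQuery(
        "CREATE TABLE city (id BIGINT PRIMARY KEY, name VARCHAR) " +
        "WITH \"TEMPLATE=REPLICATED,ATOMICITY=ATOMIC\"")).getAll();
}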
------------------------------
Thanks - Jorg
------------------------------