Session-persistence cluster configuration with Tomcat's built-in clustering
All machines in this lab run CentOS 7.2.
Four servers:
tomcat1 : 192.168.153.112
tomcat2 : 192.168.153.113
nginx : 192.168.153.111
client
nginx configuration
1. Install nginx
yum install nginx -y
2. Configure nginx
vim /etc/nginx/nginx.conf
vim /etc/nginx/conf.d/default.conf
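The original does not show the file contents. The following is a minimal sketch, assuming both tomcat instances serve HTTP on their default port 8080; the upstream name tcsrvs is arbitrary.

In the http { } block of /etc/nginx/nginx.conf:
upstream tcsrvs {
    server 192.168.153.112:8080;
    server 192.168.153.113:8080;
}

In the server { } block of /etc/nginx/conf.d/default.conf:
location / {
    proxy_pass http://tcsrvs;
}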
3. Check the configuration file and start the service
nginx -t
systemctl start nginx
ss -ntl
tomcat configuration
1. Install tomcat
yum install tomcat -y
2. Configure server.xml
Add the following code on the line right below the <Engine ...> tag in server.xml:
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
         channelSendOptions="8">
  <Manager className="org.apache.catalina.ha.session.DeltaManager"
           expireSessionsOnShutdown="false"
           notifyListenersOnReplication="true"/>
  <Channel className="org.apache.catalina.tribes.group.GroupChannel">
    <!-- frequency="500": a membership heartbeat is broadcast every 500 ms -->
    <Membership className="org.apache.catalina.tribes.membership.McastService"
                address="228.0.0.4"
                port="45564"
                frequency="500"
                dropTime="3000"/>
    <!-- address: this node's own IP -->
    <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
              address="192.168.153.112"
              port="4000"
              autoBind="100"
              selectorTimeout="5000"
              maxThreads="6"/>
    <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
      <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
    </Sender>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
  </Channel>
  <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
         filter=""/>
  <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
  <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
            tempDir="/tmp/war-temp/"
            deployDir="/tmp/war-deploy/"
            watchDir="/tmp/war-listen/"
            watchEnabled="false"/>
  <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
  <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
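The same Cluster block goes into server.xml on tomcat2; the only per-node change is the Receiver address, which must be that node's own IP. A sketch for tomcat2 (192.168.153.113):
    <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
              address="192.168.153.113"
              port="4000"
              autoBind="100"
              selectorTimeout="5000"
              maxThreads="6"/>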
3. Create the test page
mkdir /usr/share/tomcat/webapps/ROOT/WEB-INF -pv
cp /etc/tomcat/web.xml /usr/share/tomcat/webapps/ROOT/WEB-INF/
Configure web.xml
vim /usr/share/tomcat/webapps/ROOT/WEB-INF/web.xml
<distributable/>
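A sketch of where the element goes; everything else in the copied web.xml stays as it was:
<web-app ...>            <!-- existing root element of the copied web.xml -->
    <distributable/>     <!-- added: marks the application as distributable -->
    ...
</web-app>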
vim /usr/share/tomcat/webapps/ROOT/test.jsp
<%@ page language="java" %>
<html>
this is tomcatA:
<% session.setAttribute("nineven.com","nineven.com"); %>
<%= session.getId() %>
</html>
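A matching page can be created at the same path on tomcat2, with only the label changed so the responding node is visible:
<%@ page language="java" %>
<html>
this is tomcatB:
<% session.setAttribute("nineven.com","nineven.com"); %>
<%= session.getId() %>
</html>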
4. Start the service
systemctl start tomcat
5. Test
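One possible check, assuming the nginx sketch above is in place: request the test page through the proxy several times while keeping the cookie, and confirm that the session ID stays the same even though the node label alternates between tomcatA and tomcatB.
curl -c /tmp/cookie.jar -b /tmp/cookie.jar http://192.168.153.111/test.jsp
curl -c /tmp/cookie.jar -b /tmp/cookie.jar http://192.168.153.111/test.jsp
If session replication is working, session.getId() reported by both back-ends is identical.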
Appendix:
Simply add
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
to your <Engine> or your <Host> element to enable clustering.
Using the above configuration will enable all-to-all session replication using the DeltaManager to replicate session deltas. By all-to-all we mean that the session gets replicated to all the other nodes in the cluster. This works great for smaller clusters, but we don't recommend it for larger clusters (a lot of Tomcat nodes). Also, when using the delta manager it will replicate to all nodes, even nodes that don't have the application deployed.
To get around this problem, you'll want to use the BackupManager. This manager only replicates the session data to one backup node, and only to nodes that have the application deployed. Downside of the BackupManager: not quite as battle tested as the delta manager.
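For reference (not part of the lab above), switching to the BackupManager only means swapping the Manager element inside the Cluster block, roughly:
    <Manager className="org.apache.catalina.ha.session.BackupManager"/>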
Here are some of the important default values:
Multicast address is 228.0.0.4
Multicast port is 45564 (the port and the address together determine cluster membership).
The IP broadcasted is java.net.InetAddress.getLocalHost().getHostAddress() (make sure you don't broadcast 127.0.0.1, this is a common error)
The TCP port listening for replication messages is the first available server socket in range 4000-4100
Listener configured: ClusterSessionListener
Two interceptors configured: TcpFailureDetector and MessageDispatch15Interceptor
The following is the default cluster configuration:
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
         channelSendOptions="8">
  <Manager className="org.apache.catalina.ha.session.DeltaManager"
           expireSessionsOnShutdown="false"
           notifyListenersOnReplication="true"/>
  <Channel className="org.apache.catalina.tribes.group.GroupChannel">
    <!-- frequency="500": a membership heartbeat is broadcast every 500 ms -->
    <Membership className="org.apache.catalina.tribes.membership.McastService"
                address="228.0.0.4"
                port="45564"
                frequency="500"
                dropTime="3000"/>
    <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
              address="auto"
              port="4000"
              autoBind="100"
              selectorTimeout="5000"
              maxThreads="6"/>
    <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
      <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
    </Sender>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
  </Channel>
  <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
         filter=""/>
  <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
  <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
            tempDir="/tmp/war-temp/"
            deployDir="/tmp/war-deploy/"
            watchDir="/tmp/war-listen/"
            watchEnabled="false"/>
  <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
  <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
We will cover this section in more detail later in this document.
Cluster Basics
To run session replication in your Tomcat 7.0 container, the following steps should be completed:
All your session attributes must implement java.io.Serializable (a minimal sketch follows this list)
Uncomment the Cluster element in server.xml
If you have defined custom cluster valves, make sure you have the ReplicationValve defined as well under the Cluster element in server.xml
If your Tomcat instances are running on the same machine, make sure the Receiver.port attribute is unique for each instance; in most cases Tomcat is smart enough to resolve this on its own by autodetecting available ports in the range 4000-4100
Make sure your web.xml has the <distributable/> element
If you are using mod_jk, make sure that jvmRoute attribute is set at your Engine <Engine name="Catalina" jvmRoute="node01" > and that the jvmRoute attribute value matches your worker name in workers.properties
Make sure that all nodes have the same time and are synced with an NTP service!
Make sure that your loadbalancer is configured for sticky session mode.
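Regarding the Serializable requirement above, here is a minimal sketch of a session attribute class; UserProfile is a hypothetical name used only for illustration.
import java.io.Serializable;

public class UserProfile implements Serializable {
    // fixed version id so replicated copies deserialize cleanly on every node
    private static final long serialVersionUID = 1L;

    private String username;

    public UserProfile(String username) {
        this.username = username;
    }

    public String getUsername() {
        return username;
    }
}
Such an object could then be stored with session.setAttribute("profile", new UserProfile("tom")) and would be eligible for replication across the cluster.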