Apache Active/Passive Cluster

Scenario:

  • node01 - 192.0.2.12/24
  • node02 - 192.0.2.13/24
  • virtual IP - 192.0.2.14/24

node01 (/etc/hosts)

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.0.2.12      node01.exemplo.org      node01
192.0.2.13      node02.exemplo.org      node02
192.0.2.14      site.exemplo.org        site

node02 (/etc/hosts)

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.0.2.12      node01.exemplo.org      node01
192.0.2.13      node02.exemplo.org      node02
192.0.2.14      site.exemplo.org        site
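
Before continuing, it is worth confirming that both nodes resolve these names locally; each name should map to the address listed above. A quick optional check, not part of the original steps:

root@node01:~# getent hosts node01 node02 site
root@node02:~# getent hosts node01 node02 site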

Installing Apache

root@node01:~# yum install httpd -y
root@node02:~# yum install httpd -y

Apache status page

The ocf:heartbeat:apache resource agent configured later monitors the service through this page (its statusurl parameter), so it must answer on 127.0.0.1.

root@node01:~# cat /etc/httpd/conf.d/status.conf
Listen 127.0.0.1:80
<Location /server-status>
SetHandler server-status
Order Deny,Allow
Deny from All
Allow from 127.0.0.1
</Location>
root@node02:~# cat /etc/httpd/conf.d/status.conf
Listen 127.0.0.1:80
<Location /server-status>
SetHandler server-status
Order Deny,Allow
Deny from All
Allow from 127.0.0.1
</Location>

Commenting out the Listen directive in the Apache configuration

After this, the only listener left is the Listen 127.0.0.1:80 from status.conf; the virtual IP will be appended later.

root@node01:~# sed -i 's/Listen/#Listen/' /etc/httpd/conf/httpd.conf
root@node02:~# sed -i 's/Listen/#Listen/' /etc/httpd/conf/httpd.conf
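
The substitution also prefixes comment lines that mention Listen, which is harmless. To confirm that no active Listen directive remains in the main configuration (an optional check that should print nothing):

root@node01:~# grep '^Listen' /etc/httpd/conf/httpd.conf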

Starting Apache and testing

root@node01:~# systemctl start httpd.service
root@node01:~# wget http://127.0.0.1/server-status
--2016-02-19 12:24:13--  http://127.0.0.1/server-status
Connecting to 127.0.0.1:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2741 (2.7K) [text/html]
Saving to: ‘server-status’
 
100%[=============================================================>] 2,741       --.-K/s   in 0s      
 
2016-02-19 12:24:13 (170 MB/s) - ‘server-status’ saved [2741/2741]
root@node02:~# systemctl start httpd.service
root@node02:~# wget http://127.0.0.1/server-status
--2016-02-19 12:38:42--  http://127.0.0.1/server-status
Connecting to 127.0.0.1:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2751 (2.7K) [text/html]
Saving to: ‘server-status’
 
100%[=============================================================>] 2,751       --.-K/s   in 0s      
 
2016-02-19 12:38:42 (290 MB/s) - ‘server-status’ saved [2751/2751]

Test page

root@node01:~# cat /var/www/html/index.html
<!DOCTYPE html>
<html lang="pt-br">
<head>
	<meta charset="UTF-8"/>
	<title>Apache HA</title>
</head>
<body>
	<h1>node01</h1>
</body>
</html>
root@node02:~# cat /var/www/html/index.html
<!DOCTYPE html>
<html lang="pt-br">
<head>
	<meta charset="UTF-8"/>
	<title>Apache HA</title>
</head>
<body>
	<h1>node02</h1>
</body>
</html>

Configuring Apache's Listen directive to listen on the virtual IP

root@node01:~# systemctl stop httpd
root@node02:~# systemctl stop httpd
root@node01:~# echo "Listen 192.0.2.14:80" | tee --append /etc/httpd/conf/httpd.conf
root@node02:~# echo "Listen 192.0.2.14:80" | tee --append /etc/httpd/conf/httpd.conf
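
The appended line can be verified with tail. From this point on, Apache is not started by hand: it can only bind to 192.0.2.14 on the node that currently holds the virtual IP, which is why Pacemaker will start the IP resource before the web server further below.

root@node01:~# tail -n1 /etc/httpd/conf/httpd.conf
Listen 192.0.2.14:80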

Installing Pacemaker and Corosync

(On CentOS/RHEL 7 the pcs package pulls in pacemaker and corosync as dependencies, which is why only pcs is installed explicitly.)

root@node01:~# yum install pcs
root@node02:~# yum install pcs

Setting a password for the hacluster user

root@node01:~# getent passwd hacluster
hacluster:x:189:189:cluster user:/home/hacluster:/sbin/nologin
root@node02:~# getent passwd hacluster
hacluster:x:189:189:cluster user:/home/hacluster:/sbin/nologin

FIXME This user is used to manage the cluster nodes. Use the same password on both nodes, since pcs cluster auth prompts for it only once.

root@node01:~# passwd hacluster
root@node02:~# passwd hacluster
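
On CentOS/RHEL the password can also be set non-interactively, which helps when scripting the setup (a sketch; the password below is a placeholder):

root@node01:~# echo 'SenhaForte123' | passwd --stdin hacluster
root@node02:~# echo 'SenhaForte123' | passwd --stdin hacluster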

Configuring Pacemaker

Opening the required ports in the firewall

root@node01:~# firewall-cmd --permanent --add-service=high-availability
success
root@node01:~# firewall-cmd --reload
success
root@node02:~# firewall-cmd --permanent --add-service=high-availability
success
root@node02:~# firewall-cmd --reload
success
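
To confirm that the service is now allowed (optional; the output should include high-availability):

root@node01:~# firewall-cmd --list-services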

Authenticating the nodes to each other

root@node01:~# systemctl start pcsd.service
root@node02:~# systemctl start pcsd.service
root@node01:~# pcs cluster auth node01 node02
Username: hacluster
Password: 
node02: Authorized
node01: Authorized

FIXME Run this only on the node that will manage the cluster.

Creating the cluster and adding the nodes

root@node01:~# pcs cluster setup --name cluster_web node01 node02
Shutting down pacemaker/corosync services...
Redirecting to /bin/systemctl stop  pacemaker.service
Redirecting to /bin/systemctl stop  corosync.service
Killing any remaining services...
Removing all cluster configuration files...
node01: Succeeded
node02: Succeeded
Synchronizing pcsd certificates on nodes node01, node02...
node02: Success
node01: Success
 
Restaring pcsd on the nodes in order to reload the certificates...
node02: Success
node01: Success
root@node01:~# cat /etc/corosync/corosync.conf
totem {
    version: 2
    secauth: off
    cluster_name: cluster_web
    transport: udpu
}
 
nodelist {
    node {
        ring0_addr: node01
        nodeid: 1
    }
 
    node {
        ring0_addr: node02
        nodeid: 2
    }
}
 
quorum {
    provider: corosync_votequorum
    two_node: 1
}
 
logging {
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: yes
}
root@node02:~# cat /etc/corosync/corosync.conf
totem {
    version: 2
    secauth: off
    cluster_name: cluster_web
    transport: udpu
}
 
nodelist {
    node {
        ring0_addr: node01
        nodeid: 1
    }
 
    node {
        ring0_addr: node02
        nodeid: 2
    }
}
 
quorum {
    provider: corosync_votequorum
    two_node: 1
}
 
logging {
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: yes
}

Starting the cluster

root@node01:~# pcs cluster start --all
node01: Starting Cluster...
node02: Starting Cluster...

Cluster status

root@node01:~# pcs status cluster
Cluster Status:
 Last updated: Fri Feb 19 13:25:39 2016		Last change: Fri Feb 19 13:23:30 2016 by hacluster via crmd on node02
 Stack: corosync
 Current DC: node02 (version 1.1.13-10.el7-44eb2dd) - partition with quorum
 2 nodes and 0 resources configured
 Online: [ node01 node02 ]
 
PCSD Status:
  node01: Online
  node02: Online
root@node01:~# pcs status nodes
Pacemaker Nodes:
 Online: node01 node02 
 Standby: 
 Offline: 
Pacemaker Remote Nodes:
 Online: 
 Standby: 
 Offline: 
root@node01:~# corosync-cmapctl | grep members
runtime.totem.pg.mrp.srp.members.1.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.1.ip (str) = r(0) ip(192.0.2.12) 
runtime.totem.pg.mrp.srp.members.1.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.1.status (str) = joined
runtime.totem.pg.mrp.srp.members.2.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.2.ip (str) = r(0) ip(192.0.2.13) 
runtime.totem.pg.mrp.srp.members.2.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.2.status (str) = joined
root@node01:~# pcs status corosync
 
Membership information
----------------------
    Nodeid      Votes Name
         1          1 node01 (local)
         2          1 node02

Configuring the cluster

Checking for errors

root@node01:~# crm_verify -L -V
   error: unpack_resources:	Resource start-up disabled since no STONITH resources have been defined
   error: unpack_resources:	Either configure some or disable STONITH with the stonith-enabled option
   error: unpack_resources:	NOTE: Clusters with shared data need STONITH to ensure data integrity
Errors found during check: config not valid

The message above reports a STONITH error. Since we are using a cluster with only two nodes, we will disable STONITH.

root@node01:~# pcs property set stonith-enabled=false

We also tell Pacemaker to ignore loss of quorum: in a two-node cluster, quorum is gone as soon as one node fails, and the default policy would then stop all resources.

root@node01:~# pcs property set no-quorum-policy=ignore
root@node01:~# pcs property
Cluster Properties:
 cluster-infrastructure: corosync
 cluster-name: cluster_web
 dc-version: 1.1.13-10.el7-44eb2dd
 have-watchdog: false
 no-quorum-policy: ignore
 stonith-enabled: false

Adding resources to the cluster

Virtual IP

root@node01:~# pcs resource create virtual_ip ocf:heartbeat:IPaddr2 ip=192.0.2.14 cidr_netmask=32 op monitor interval=30s

root@node01:~# pcs status resources
 virtual_ip	(ocf::heartbeat:IPaddr2):	Started node01

root@node01:~# ping -c2 192.0.2.14
PING 192.0.2.14 (192.0.2.14) 56(84) bytes of data.
64 bytes from 192.0.2.14: icmp_seq=1 ttl=64 time=0.043 ms
64 bytes from 192.0.2.14: icmp_seq=2 ttl=64 time=0.048 ms
 
--- 192.0.2.14 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.043/0.045/0.048/0.007 ms
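
IPaddr2 adds the address as a secondary IP on the active node's interface, which can be inspected with the ip command (the interface name varies by system). Note that the resource was created with cidr_netmask=32, so the address shows up with a /32 prefix; some setups prefer matching the subnet with cidr_netmask=24.

root@node01:~# ip -4 addr show | grep 192.0.2.14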

Adding the web server resource

root@node01:~# pcs resource create webserver ocf:heartbeat:apache configfile=/etc/httpd/conf/httpd.conf statusurl="http://localhost/server-status" op monitor interval=1min
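
The parameters that were just set can be reviewed with pcs (0.9 syntax, matching the version used throughout this guide):

root@node01:~# pcs resource show webserver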

Ensuring the virtual IP and the web server start on the same node

root@node01:~# pcs constraint colocation add webserver virtual_ip INFINITY

Making the virtual IP start before the web server

root@node01:~# pcs constraint order virtual_ip then webserver
Adding virtual_ip webserver (kind: Mandatory) (Options: first-action=start then-action=start)

Giving preference to one node, for example when it has more resources.

root@node01:~# pcs constraint location webserver prefers node01=50
root@node01:~# pcs constraint
Location Constraints:
  Resource: webserver
    Enabled on: node01 (score:50)
Ordering Constraints:
  start virtual_ip then start webserver (kind:Mandatory)
Colocation Constraints:
  webserver with virtual_ip (score:INFINITY)

Restarting the cluster and checking the status

root@node01:~# pcs cluster stop --all
node02: Stopping Cluster (pacemaker)...
node01: Stopping Cluster (pacemaker)...
node01: Stopping Cluster (corosync)...
node02: Stopping Cluster (corosync)...
 
root@node01:~# pcs cluster start --all
node01: Starting Cluster...
node02: Starting Cluster...
root@node01:~# pcs status
Cluster name: cluster_web
Last updated: Fri Feb 19 16:10:49 2016		Last change: Fri Feb 19 16:07:00 2016 by root via cibadmin on node01
Stack: corosync
Current DC: node01 (version 1.1.13-10.el7-44eb2dd) - partition with quorum
2 nodes and 2 resources configured
 
Online: [ node01 node02 ]
 
Full list of resources:
 
 virtual_ip	(ocf::heartbeat:IPaddr2):	Started node01
 webserver	(ocf::heartbeat:apache):	Started node01
 
PCSD Status:
  node01: Online
  node02: Online
 
Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/disabled

Testing high availability

root@node01:~# firewall-cmd --add-service=http
root@gateway:~# lynx 192.0.2.14
                                                                                              Apache HA
                                                node01
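
Note that firewall-cmd --add-service=http without --permanent only changes the runtime configuration, so the rule is lost on reload or reboot. To keep HTTP open permanently, run on both nodes:

root@node01:~# firewall-cmd --permanent --add-service=http
root@node01:~# firewall-cmd --reload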

Stopping node01

root@node01:~# pcs cluster stop node01
node01: Stopping Cluster (pacemaker)...
node01: Stopping Cluster (corosync)...
root@node02:~# pcs status
Cluster name: cluster_web
Last updated: Fri Feb 19 16:22:03 2016		Last change: Fri Feb 19 16:07:00 2016 by root via cibadmin on node01
Stack: corosync
Current DC: node02 (version 1.1.13-10.el7-44eb2dd) - partition with quorum
2 nodes and 2 resources configured
 
Online: [ node02 ]
OFFLINE: [ node01 ]
 
Full list of resources:
 
 virtual_ip	(ocf::heartbeat:IPaddr2):	Started node02
 webserver	(ocf::heartbeat:apache):	Started node02
 
PCSD Status:
  node01: Online
  node02: Online
 
Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/disabled
root@node02:~# firewall-cmd --add-service=http
root@gateway:~# lynx 192.0.2.14
                                                                                             Apache HA
                                                node02
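
Since webserver prefers node01 with a score of 50 and no resource-stickiness was configured, the resources should move back once node01 rejoins the cluster. This can be verified by starting it again (illustrative):

root@node02:~# pcs cluster start node01
root@node02:~# pcs status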

Enabling the services at system startup

root@node01:~# systemctl enable pcsd
Created symlink from /etc/systemd/system/multi-user.target.wants/pcsd.service to /usr/lib/systemd/system/pcsd.service.
root@node01:~# systemctl enable corosync
Created symlink from /etc/systemd/system/multi-user.target.wants/corosync.service to /usr/lib/systemd/system/corosync.service.
root@node01:~# systemctl enable pacemaker
Created symlink from /etc/systemd/system/multi-user.target.wants/pacemaker.service to /usr/lib/systemd/system/pacemaker.service.
root@node02:~# systemctl enable pcsd
Created symlink from /etc/systemd/system/multi-user.target.wants/pcsd.service to /usr/lib/systemd/system/pcsd.service.
root@node02:~# systemctl enable corosync
Created symlink from /etc/systemd/system/multi-user.target.wants/corosync.service to /usr/lib/systemd/system/corosync.service.
root@node02:~# systemctl enable pacemaker
Created symlink from /etc/systemd/system/multi-user.target.wants/pacemaker.service to /usr/lib/systemd/system/pacemaker.service.
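
Alternatively, pcs can enable corosync and pacemaker on all nodes in a single step (pcsd still needs to be enabled with systemctl):

root@node01:~# pcs cluster enable --all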
