docker_cluster

  DEBARCH=$(dpkg --print-architecture)
  echo "deb [signed-by=/usr/share/keyrings/glusterfs-archive-keyring.gpg] https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/${DEBID}/${DEBARCH}/apt ${DEBVER} main" | sudo tee /etc/apt/sources.list.d/gluster.list
  apt-get update && apt-get install glusterfs-server -y
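For reference, this is how the repository line above is assembled from shell variables. The values used here are example assumptions for Debian 12 on amd64; on a real host DEBID and DEBVER are set earlier from the system, and DEBARCH comes from dpkg as shown above:

```shell
# Sketch: how the gluster.list entry is built.
# DEBID=12, DEBVER=bookworm, DEBARCH=amd64 are assumed example values,
# not taken from a live system.
DEBID=12
DEBVER=bookworm
DEBARCH=amd64
LINE="deb [signed-by=/usr/share/keyrings/glusterfs-archive-keyring.gpg] https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/${DEBID}/${DEBARCH}/apt ${DEBVER} main"
echo "$LINE"
```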
  
**3**. Edit /etc/glusterfs/glusterd.vol and add \\
This will prevent glusterfs from getting exposed to the dangerous interwebs.
  option transport.socket.bind-address 10.0.X.1

rpcbind listens on the network and we don't need it, so let's get rid of it.
  apt-get remove rpcbind -y
  
**4**. Enable GlusterFS
  
  systemctl start glusterd && systemctl enable glusterd
  
**5**. Peer with your GlusterFS nodes
  gluster peer status
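On a healthy 3-node cluster, each node should report its two peers as connected. A small sketch of checking that, using an assumed/typical ''gluster peer status'' output as sample data (on a real node you would pipe the live command instead):

```shell
# Count peers reported as "Peer in Cluster (Connected)".
# `sample` mimics typical `gluster peer status` output for a 3-node
# cluster; it is an illustration, not captured output.
sample='Number of Peers: 2

Hostname: 10.0.2.1
State: Peer in Cluster (Connected)

Hostname: 10.0.3.1
State: Peer in Cluster (Connected)'
connected=$(printf '%s\n' "$sample" | grep -c 'Peer in Cluster (Connected)')
echo "connected peers: $connected"
```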
  
**7**. Create the folders for the brick and the mount
  mkdir -p /mnt/bricks/docker /mnt/data/docker

**8**. Create your first volume for Docker
  gluster volume create docker replica 3 10.0.1.1:/mnt/bricks/docker 10.0.2.1:/mnt/bricks/docker 10.0.3.1:/mnt/bricks/docker force
  gluster volume start docker
  
**9**. Mount your first volume
  mount.glusterfs 10.0.X.1:/docker /mnt/data/docker
      
**10**. Make the mount boot ready
  
  [Unit]
Copy this to /etc/systemd/system/mounts.service
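A minimal sketch of what such a mount unit can look like (the ExecStart matches the mount command from the previous step; the description, ordering targets, and the mount.glusterfs path are assumptions to adapt to your setup):

  [Unit]
  Description=GlusterFS data mounts
  After=glusterd.service network-online.target
  Wants=network-online.target

  [Service]
  Type=oneshot
  RemainAfterExit=yes
  ExecStart=/usr/sbin/mount.glusterfs 10.0.X.1:/docker /mnt/data/docker

  [Install]
  WantedBy=multi-user.target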
  
**11**. Enable the mount service
  systemctl enable mounts
      
**12**. You may have to edit the GlusterFS systemd file to prevent a race condition with your VPN. \\
GlusterFS will fail to start if your VPN isn't running already.
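One way to express that dependency is a systemd drop-in that orders glusterd after the VPN unit (sketch; ''wg-quick@wg0.service'' is an assumed WireGuard unit name, substitute whatever your VPN uses):

  # /etc/systemd/system/glusterd.service.d/override.conf
  [Unit]
  After=wg-quick@wg0.service
  Requires=wg-quick@wg0.service

Run systemctl daemon-reload afterwards so the drop-in is picked up.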
  
Profit! Next reboot GlusterFS should start up fine.
  
**13**. Install Docker
  
  # Add Docker's official GPG key:
  apt-get update && apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
  
**14**. Init the Swarm on the first Node
  docker swarm init --advertise-addr 10.0.1.1 --listen-addr=10.0.1.1
advertise-addr will only advertise the swarm inside our VPN network
  
**15**. Join other Nodes
  docker swarm join --token whateverthattokenis 10.0.1.1:2377 --listen-addr=10.0.2.1
  docker swarm join --token whateverthattokenis 10.0.1.1:2377 --listen-addr=10.0.3.1
listen-addr will force swarm to bind to your local VPN
  
**16**. Check the Cluster
  docker node ls

**17**. Promote the other Nodes to achieve 100% True HA
  docker node promote node2
  docker node promote node3
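Promoting both nodes gives you three managers. Swarm managers keep cluster state via Raft, which needs a majority of managers online, so n managers tolerate (n-1)/2 failures; the arithmetic for this cluster:

```shell
# Raft quorum: a swarm with n managers stays writable only while a
# majority of managers is up, i.e. it tolerates (n-1)/2 failures.
n=3
tolerated=$(( (n - 1) / 2 ))
echo "managers: $n, tolerated manager failures: $tolerated"
```

So with three managers, any one node can go down and the cluster keeps working.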
      
**18**. Deploy your first service \\
In my case it was a ZNC bouncer. \\
I had to run the Docker container normally first to generate the config files. \\
Let's deploy the service.
  docker service create --mount type=bind,src=/mnt/data/docker/znc/,dst=/znc-data --publish published=1025,target=1025 --name bouncer znc
The service will be exposed on port 1025 on every node, via Swarm's ingress routing mesh.
      
**19**. If you run this on any node:
  docker node ps $(docker node ls -q)

You should be able to check your container status.
  
**20**. When you reboot the node running your container, the service should be restored within about 60 seconds.
docker_cluster.1729467033.txt.gz · Last modified: 2024/10/20 23:30 by neoon
