(fleet/rook-ceph-conf) add rsphome export for manke #581

Open — wants to merge 1 commit into base: master

New file (85 added lines):
```yaml
---
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: rsphome
  namespace: rook-ceph
spec:
  metadataPool:
    failureDomain: host
    replicated:
      size: 3
    quotas:
      maxSize: 10Gi
  dataPools:
    - name: default
      failureDomain: host
      replicated:
        size: 3
      quotas:
        maxSize: 250Gi
    - name: ec
      failureDomain: host
      erasureCoded:
        dataChunks: 6
        codingChunks: 3
```
Review comment (Member):
This seems large for a 10 node cluster with fault tolerance of 2. 6+2 is probably safer. I wonder if 5+3 is a good idea?
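The trade-off the comment weighs can be made concrete: a k+m erasure-coded pool with `failureDomain: host` yields k/(k+m) usable capacity, survives m simultaneous host failures, and needs k+m distinct hosts per placement. A quick comparison of the profiles mentioned (plain arithmetic, not a Ceph API):

```python
def ec_profile(k: int, m: int):
    """For a k+m erasure-coded pool with failureDomain: host, return
    (usable capacity fraction, host failures tolerated, hosts required)."""
    return k / (k + m), m, k + m

# Profiles discussed in the review: configured 6+3 vs. suggested 6+2 and 5+3.
for name, (k, m) in {"6+3": (6, 3), "6+2": (6, 2), "5+3": (5, 3)}.items():
    usable, tolerated, hosts = ec_profile(k, m)
    print(f"{name}: {usable:.1%} usable, survives {tolerated} host failures, "
          f"needs {hosts} hosts")
```

On a 10-node cluster, 6+3 places chunks on 9 of the 10 hosts, so only one spare host is available to rebuild onto after a failure; 6+2 (and 5+3) need only 8 hosts, leaving two spares, which is the reviewer's point about safety margin.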

The manifest continues (the `quotas` block completes the `ec` data pool):

```yaml
      quotas:
        maxSize: 250Gi
  metadataServer:
    activeCount: 3
    activeStandby: true
    resources:
      limits:
        cpu: "4"
        memory: 4Gi
      requests:
        cpu: "4"
        memory: 4Gi
  preserveFilesystemOnDelete: false
---
apiVersion: ceph.rook.io/v1
kind: CephNFS
metadata:
  name: rsphome
  namespace: rook-ceph
spec:
  rados:
    pool: rsphome-data0
    # RADOS namespace where NFS client recovery data is stored in the pool.
    namespace: nfs-ns
  server:
    active: 1
    resources:
      limits:
        cpu: "3"
        memory: 8Gi
      requests:
        cpu: "3"
        memory: 8Gi
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: rook-ceph-nfs
    ceph_daemon_type: nfs
    ceph_nfs: rsphome
    instance: a
    rook_cluster: rook-ceph
  name: rook-ceph-nfs-rsphome
  namespace: rook-ceph
  annotations:
    metallb.universe.tf/loadBalancerIPs: 139.229.151.163
spec:
  ports:
    - name: nfs
      port: 2049
      protocol: TCP
      targetPort: 2049
  selector:
    app: rook-ceph-nfs
    ceph_daemon_type: nfs
    ceph_nfs: rsphome
    instance: a
    rook_cluster: rook-ceph
  type: LoadBalancer
```
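The Service repeats the same five labels as its selector; for traffic to reach the NFS server, that selector must be a subset of the labels on the pod Rook creates (an assumption here — the pod labels below are copied from the Service manifest, not read from a live cluster). Kubernetes equality-based selector matching is a simple subset test:

```python
# Labels copied from the Service manifest above; assumed (not verified
# against a cluster) to match what Rook puts on the NFS server pod.
pod_labels = {
    "app": "rook-ceph-nfs",
    "ceph_daemon_type": "nfs",
    "ceph_nfs": "rsphome",
    "instance": "a",
    "rook_cluster": "rook-ceph",
}
service_selector = dict(pod_labels)  # the manifest repeats them verbatim

def selector_matches(selector: dict, labels: dict) -> bool:
    """A Service selects a pod when every selector key/value pair
    is present in the pod's labels (subset semantics)."""
    return all(labels.get(k) == v for k, v in selector.items())
```

If any one label drifted between the Service and the Rook-managed pod, the endpoint list would be empty and NFS mounts against 139.229.151.163 would hang.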
Second changed file (the hunk context shows the lines sit under a `data:` key):

```diff
@@ -38,6 +38,10 @@ data:
 ceph nfs export rm comcam /comcam
 ceph nfs export create cephfs comcam /comcam comcam
 
+waitfornfs rsphome
+ceph nfs export rm rsphome /rsphome
+ceph nfs export create cephfs rsphome /rsphome rsphome
+
 ceph mgr module enable rook
 ceph orch set backend rook
 ceph device monitoring on
```