add curvefs client config #47
Conversation
@@ -0,0 +1,57 @@
# Brief description
Change all of these to level-2 headings, otherwise rendering fails.
The level-1 heading is the article title, i.e. the title of the article link in the sidebar table of contents on the documentation site; please add one as well.
fix
| Name | Default | Description |
| --- | --- | --- |
| kind | curvefs | Whether this belongs to an fs or a bs cluster |
Which config file does this `kind` come from? Is it curveadm's topology yaml file?
I see now, this is the client configuration for curveadm. Then please state it clearly: which config file this is and exactly what uses it.
fix
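For context, a minimal sketch of what such a curveadm client config could look like, assuming a flat key/value YAML layout; only `kind` and `s3.bucket_name` appear in the excerpts in this PR, and every other key and value below is an assumption or placeholder:

```yaml
# Hypothetical sketch; only `kind` and `s3.bucket_name` come from the table above.
kind: curvefs                        # fs or bs cluster, as described in the table
s3.ak: example-access-key            # assumed key name, placeholder value
s3.sk: example-secret-key            # assumed key name, placeholder value
s3.endpoint: http://127.0.0.1:9000   # assumed key name, placeholder value
s3.bucket_name: example-bucket       # the "S3 information" row above, placeholder value
```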
| s3.bucket_name | | S3 information |
# Cache disk configuration
It would be good to also document the memory cache.
# Cache disk configuration

If there is no local cache disk, the client's reads and writes go directly to s3, so it is generally recommended to use a cache disk to improve read/write performance (typically an SSD or a high-performance cloud disk; HDD is not recommended).
If there is no local cache disk, the client's reads and writes go directly to s3, whose access latency and throughput are both limited, so it is generally recommended to use a cache disk to improve read/write performance. SSD/NVMe disks or high-performance cloud disks are recommended as cache disks; HDD disks are not. In addition, since the cache disk takes a lot of space and throughput, sharing it with the system disk is also not recommended. The larger the cache disk, the more data it can cache and the better the performance, so size the cache disk according to the amount of data that needs to be cached; at least 500G is recommended.
- Please also add a paragraph describing how to prepare the cache disk. 马杰 already added such a description elsewhere, e.g. a dedicated disk is needed, it must be formatted, and a directory mounted (a rough sketch follows below).
- Also explain how to configure things when multiple clients on one node share a cache disk.
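For illustration, a rough shell sketch of preparing a dedicated cache disk; `/dev/sdb` and `/mnt/cache` are example names and should be replaced with the actual device and mount point:

```bash
# Example only: /dev/sdb and /mnt/cache are placeholders.
mkfs.ext4 /dev/sdb            # format the dedicated cache disk (destroys existing data)
mkdir -p /mnt/cache
mount /dev/sdb /mnt/cache     # mount it; add an /etc/fstab entry to survive reboots
```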
fix
- Write cache

During a client write, if the cache is available, s3 objects are first written to the local cache disk and then asynchronously uploaded to s3. (Note: in scenarios where data is important, using a single bare disk as the write cache is not recommended because of the risk of data loss; use local RAID1 or a cloud disk as the local cache disk.)
https://juicefs.com/docs/zh/community/guide/cache/#client-write-cache
You can refer to JuiceFS's write-up for this part; their documentation is very detailed and the descriptions are well standardized.
They are missing a description of the fact that after the client process restarts, or the node restarts and the client process is brought back up, the write-cache data continues to be uploaded; please add that.
Also point out that if the client is moved to a different node, the write-cache data on the local cache disk is lost.
fix
6d55ff7 to b1b1e37
## Cache disk configuration

If there is no local cache disk, the client's reads and writes go directly to s3, whose access latency and throughput are both limited, so it is generally recommended to use a cache disk to improve read/write performance. SSD/NVMe disks or high-performance cloud disks are recommended as cache disks; HDD disks are not. In addition, since the cache disk takes a lot of space and throughput, sharing it with the system disk is also not recommended (although QoS can be configured for the local cache disk). The larger the cache disk, the more data it can cache and the better the performance; size it according to the amount of data that needs to be cached, with at least 100G recommended. Using a cache disk is simple: format the corresponding disk, mount it, and set the mount path as the local cache path in the config option `diskCache.cacheDir`. If multiple clients share one local cache disk, configure them with different path names, e.g. client1 uses `diskCache.cacheDir=/mnt/cache/client1` and client2 uses `diskCache.cacheDir=/mnt/cache/client2`.
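To make the shared-cache-disk case concrete, the per-client settings described in the paragraph above would look roughly like this (the mount point and subdirectory paths are just examples):

```
# One cache disk mounted at /mnt/cache, shared by two clients,
# each client pointing at its own subdirectory.

# client1's config file:
diskCache.cacheDir=/mnt/cache/client1

# client2's config file:
diskCache.cacheDir=/mnt/cache/client2
```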
"缓存盘用简单" is a typo, right? (It should be "缓存盘使用简单", i.e. "simple to use".)
"不过本地缓存盘可设置QOS" ("although QoS can be configured for the local cache disk") is too colloquial; please have it reworded, maybe with the help of an AI.
Could you also explain how to configure the QoS, e.g. how to limit bps and iops?
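The doc does not say which mechanism the QoS refers to. As one possible answer to the bps/iops question, here is a hedged sketch that throttles the cache disk for the client process with the standard Linux cgroup v2 `io.max` controller; the device numbers, limits, cgroup name, and the use of cgroups at all are assumptions, not something stated in this PR:

```bash
# Assumption: cgroup v2 with the io controller enabled.
# 8:16 is an example major:minor of the cache disk (check with `lsblk`).
mkdir -p /sys/fs/cgroup/curve-client
echo "8:16 rbps=104857600 wbps=104857600 riops=2000 wiops=2000" \
    > /sys/fs/cgroup/curve-client/io.max      # cap at 100 MB/s and 2000 IOPS each way
echo "$CLIENT_PID" > /sys/fs/cgroup/curve-client/cgroup.procs   # move the client process into the group
```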
Signed-off-by: Wangpan <[email protected]>
No description provided.