update scheduling-overview.md , tiflash-intro.md and tispark-intro.md #804
Conversation
Sorry, a conflict has come up. Could you resolve the conflict first?
I have resolved the conflict.
- Mind the spacing conventions between Chinese and English text.
- For the scheduling section, decide whether to modify it based on need; this is only a suggestion.
- I suggest splitting the changes to these three unrelated files into three PRs, one problem per PR. This also helps each PR get merged faster.
* How to ensure that multiple Replicas of the same Region are distributed across different nodes? Going further, what problems arise if multiple TiKV instances are started on one machine?
* When a TiKV cluster is deployed across data centers for disaster recovery, how to ensure that if one data center goes offline, multiple Replicas of a Raft Group are not lost?
* After a new node joins the TiKV cluster, how is data on the other nodes in the cluster moved over to it?
* When a node goes offline, what problems occur, and what does the whole cluster need to do? If the node is only briefly offline (a service restart), how should this be handled? If the node is offline for a long time (disk failure, all data lost), how should this be handled? Suppose the cluster requires N replicas for each Raft Group; then for a single Raft Group, the Replica count may be too low (e.g. a node goes offline and a replica is lost) or too high (e.g. an offline node recovers and automatically rejoins the cluster). How is the Replica count adjusted?
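The first two questions above (replicas on distinct nodes, surviving a data-center outage) boil down to a placement constraint. As a purely illustrative sketch (this is not PD's actual algorithm; the store list, field names, and greedy strategy are all assumptions for the example), a scheduler might pick stores for a Region's replicas by first spreading across zones and never reusing a node:

```python
# Hypothetical sketch, NOT PD's real scheduler: place n_replicas of a Region
# so that no two replicas share a node, preferring distinct zones (data centers).

def place_replicas(stores, n_replicas):
    """stores: list of dicts like {"id": 1, "node": "h1", "zone": "az1"}."""
    chosen, used_nodes, used_zones = [], set(), set()
    # First pass: at most one replica per zone (cross-DC disaster recovery).
    for s in stores:
        if len(chosen) == n_replicas:
            break
        if s["node"] not in used_nodes and s["zone"] not in used_zones:
            chosen.append(s)
            used_nodes.add(s["node"])
            used_zones.add(s["zone"])
    # Second pass: relax the zone constraint, still one replica per node.
    for s in stores:
        if len(chosen) == n_replicas:
            break
        if s["node"] not in used_nodes:
            chosen.append(s)
            used_nodes.add(s["node"])
    return chosen

stores = [
    {"id": 1, "node": "h1", "zone": "az1"},
    {"id": 2, "node": "h1", "zone": "az1"},  # a second TiKV instance on h1
    {"id": 3, "node": "h2", "zone": "az1"},
    {"id": 4, "node": "h3", "zone": "az2"},
]
print([s["id"] for s in place_replicas(stores, 3)])  # → [1, 4, 3]
```

Note how store 2 is skipped: two TiKV instances on the same machine count as one failure domain, which is exactly why the first question warns against naively running multiple instances per host.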
This part is revised very carefully, which is great. But as a reader, going through the whole thing, every question sounds reasonable, yet it still reads like a stream-of-consciousness list of whatever came to mind. Honestly, isn't the original text a bit wordy and repetitive?
When laying this out, could we make the logic clearer, the way we would when writing code? For example, split it into a few clear points, one question per sentence. The later subsections would then be easy to write: each one can state which of these problems it solves.
For example:
- Disaster recovery
  - ...
- Load balancing
  - ...
I suggest asking for the opinion of the person who originally wrote this section.
No need to worry too much; let's merge this PR first. Reviewing, like refactoring, is mainly about making the logic clearer and easier to read. In later reviews you can edit from the reader's perspective, whatever reads best. We will also consider inviting the original author to join the review later.
Co-Authored-By: Shirly <[email protected]>
LGTM
Related issue:
Fixed some typos and awkward sentences in "Scheduling Overview": #667
Added the chapter introduction for "9 TiFlash Introduction and HTAP in Practice": #682
Added the chapter introduction for "11 TiSpark Introduction and Practice": #690