Commit 81f7cb4 by Q10Viking, Mar 25, 2024 (parent 5ccf8a9): 20 changed files, 891 additions, 0 deletions.
---
sidebarDepth: 3
sidebar: auto
prev:
  text: Back To Contents
  link: /MySQL/
typora-root-url: ..\.vuepress\public
---

**How MySQL master-slave replication works:**

First, the master writes each change to its binlog.

After a slave connects to the master, an IO thread on the slave copies the **master's binlog** to the slave and writes it into a local **relay log**.

A SQL thread on the slave then reads the binlog events from the relay log and executes the SQL they contain, replaying the changes on the slave.

That is the replication mechanism. So **what causes replication lag?**

1. The master has too many slaves: shipping the binlog to every slave delays replication.
2. Slow queries among the SQL replayed on the slave delay the whole replication stream.
3. Heavy read/write load on the master slows binlog processing, which in turn increases replication lag.

**To reduce replication lag, we can take the following measures:**

1. Reduce the number of slaves, lowering the master's load and the replication delay.
2. Optimize slow queries to cut the slave's SQL replay time.
3. Tune the master to reduce read/write pressure and speed up binlog writes.

These measures help lower replication lag and improve replication efficiency and consistency.
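To observe lag in practice, one common check is the slave status report (a sketch; this is the MySQL 5.7-era syntax, and newer releases use `SHOW REPLICA STATUS` instead):

```sql
-- Check replication lag on a slave.
SHOW SLAVE STATUS\G
-- Seconds_Behind_Master reports the current lag in seconds;
-- NULL means the slave SQL thread is not running.
```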
---
sidebarDepth: 3
sidebar: auto
prev:
  text: Back To Contents
  link: /MySQL/
typora-root-url: ..\.vuepress\public
---
## **How to optimize deep pagination (LIMIT 1000000)**

Deep pagination is a common MySQL performance problem: fetching pages far into a large result set gets dramatically slower, because MySQL must first scan up to the requested offset before it can return any rows.

For example, the following query can be very slow:

```sql
SELECT * FROM table ORDER BY id LIMIT 1000000, 10;
```

MySQL has to scan past 1000000 rows before it can return the next 10.

Common ways to address deep pagination:

1. **Use a covering index:** a covering index lets MySQL read everything it needs from the index alone, with no lookups back to the table, which can speed up the query considerably.

```sql
SELECT id FROM table ORDER BY id LIMIT 1000000, 10;
```

2. **Remember the last position:** if the application can remember the last ID of the previous page, a WHERE clause avoids scanning the skipped rows entirely.

```sql
SELECT * FROM table WHERE id > last_id ORDER BY id LIMIT 10;
```

3. **Use a pagination plugin:** some database frameworks provide pagination plugins that optimize paged queries automatically.

4. **Avoid deep pagination:** design the application so users rarely need it; for example, provide search so users can jump straight to the data they want instead of paging through it.
## Hands-on

### Data preparation

```sql
-- 1. Create the table:
DROP TABLE IF EXISTS user_login_log;

CREATE TABLE user_login_log (
    id INT PRIMARY KEY AUTO_INCREMENT,
    user_id VARCHAR(64) NOT NULL,
    ip VARCHAR(20) NOT NULL,
    attr1 VARCHAR(255),
    attr2 VARCHAR(255),
    attr3 VARCHAR(255),
    attr4 VARCHAR(255),
    attr5 VARCHAR(255),
    attr6 VARCHAR(255),
    attr7 VARCHAR(255),
    attr8 VARCHAR(255),
    attr9 VARCHAR(255),
    attr10 VARCHAR(255)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

-- 2. Mock the data: create a stored procedure.
DELIMITER $$
CREATE PROCEDURE insert_mock_data(IN n INT)
BEGIN
    DECLARE i INT DEFAULT 0;
    SET autocommit = 0;
    WHILE i < n DO
        INSERT INTO user_login_log(user_id, ip, attr1, attr2, attr3, attr4, attr5, attr6, attr7, attr8, attr9, attr10)
        VALUES (
            CONCAT('user_', FLOOR(RAND() * 10000)),
            CONCAT(FLOOR(RAND() * 256), '.', FLOOR(RAND() * 256), '.', FLOOR(RAND() * 256), '.', FLOOR(RAND() * 256)),
            CONCAT('attr1_', 'ZPdUqUBYmoJJakYmoLNJTyMnfOBpXTBbDKOSUWYfCxFJFakYoyCqXNZJkhfeizXsSmZPdUqUBYmoJJakYmoLNJTyMnfOBpXTBbDKOSUWYfCxFJFakYoyCqXNZJkhfeizXsSm'),
            CONCAT('attr2_', 'ZPdUqUBYmoJJakYmoLNJTyMnfOBpXTBbDKOSUWYfCxFJFakYoyCqXNZJkhfeizXsSmZPdUqUBYmoJJakYmoLNJTyMnfOBpXTBbDKOSUWYfCxFJFakYoyCqXNZJkhfeizXsSm'),
            CONCAT('attr3_', 'ZPdUqUBYmoJJakYmoLNJTyMnfOBpXTBbDKOSUWYfCxFJFakYoyCqXNZJkhfeizXsSmZPdUqUBYmoJJakYmoLNJTyMnfOBpXTBbDKOSUWYfCxFJFakYoyCqXNZJkhfeizXsSm'),
            CONCAT('attr4_', 'ZPdUqUBYmoJJakYmoLNJTyMnfOBpXTBbDKOSUWYfCxFJFakYoyCqXNZJkhfeizXsSmZPdUqUBYmoJJakYmoLNJTyMnfOBpXTBbDKOSUWYfCxFJFakYoyCqXNZJkhfeizXsSm'),
            CONCAT('attr5_', 'ZPdUqUBYmoJJakYmoLNJTyMnfOBpXTBbDKOSUWYfCxFJFakYoyCqXNZJkhfeizXsSmZPdUqUBYmoJJakYmoLNJTyMnfOBpXTBbDKOSUWYfCxFJFakYoyCqXNZJkhfeizXsSm'),
            CONCAT('attr6_', 'ZPdUqUBYmoJJakYmoLNJTyMnfOBpXTBbDKOSUWYfCxFJFakYoyCqXNZJkhfeizXsSmZPdUqUBYmoJJakYmoLNJTyMnfOBpXTBbDKOSUWYfCxFJFakYoyCqXNZJkhfeizXsSm'),
            CONCAT('attr7_', 'ZPdUqUBYmoJJakYmoLNJTyMnfOBpXTBbDKOSUWYfCxFJFakYoyCqXNZJkhfeizXsSmZPdUqUBYmoJJakYmoLNJTyMnfOBpXTBbDKOSUWYfCxFJFakYoyCqXNZJkhfeizXsSm'),
            CONCAT('attr8_', 'ZPdUqUBYmoJJakYmoLNJTyMnfOBpXTBbDKOSUWYfCxFJFakYoyCqXNZJkhfeizXsSmZPdUqUBYmoJJakYmoLNJTyMnfOBpXTBbDKOSUWYfCxFJFakYoyCqXNZJkhfeizXsSm'),
            CONCAT('attr9_', 'ZPdUqUBYmoJJakYmoLNJTyMnfOBpXTBbDKOSUWYfCxFJFakYoyCqXNZJkhfeizXsSmZPdUqUBYmoJJakYmoLNJTyMnfOBpXTBbDKOSUWYfCxFJFakYoyCqXNZJkhfeizXsSm'),
            CONCAT('attr10_', 'ZPdUqUBYmoJJakYmoLNJTyMnfOBpXTBbDKOSUWYfCxFJFakYoyCqXNZJkhfeizXsSmZPdUqUBYmoJJakYmoLNJTyMnfOBpXTBbDKOSUWYfCxFJFakYoyCqXNZJkhfeizXsSm')
        );
        IF i % 1000 = 0 THEN
            COMMIT;
        END IF;
        SET i = i + 1;
    END WHILE;
    COMMIT; -- flush the final partial batch
END$$
DELIMITER ;

-- Random string generator.
-- To speed up mock generation, the attr columns above use a fixed string.
-- To randomize them, change the second argument to rand_string(66),
-- i.e. CONCAT('attr*_', rand_string(66)).
DELIMITER $$
CREATE FUNCTION rand_string(n INT)
RETURNS VARCHAR(255) DETERMINISTIC NO SQL
BEGIN
    DECLARE chars_str VARCHAR(100) DEFAULT 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ';
    DECLARE return_str VARCHAR(255) DEFAULT '';
    DECLARE i INT DEFAULT 0;
    WHILE i < n DO
        SET return_str = CONCAT(return_str, SUBSTRING(chars_str, FLOOR(1 + RAND() * 52), 1));
        SET i = i + 1;
    END WHILE;
    RETURN return_str;
END$$
DELIMITER ;

-- Call the procedure to insert 10 million rows.
CALL insert_mock_data(10000000);
```
### Basic paged queries

MySQL implements pagination with the LIMIT keyword:

```sql
SELECT column_name(s) FROM table_name LIMIT offset, row_count;
```

In `SELECT * FROM table LIMIT m, n`, `m` is the zero-based index of the first record to return and `n` is the number of rows to take starting from record m+1. For example, `SELECT * FROM tablename LIMIT 2, 4` returns rows 3 through 6, i.e. 4 records.
### Same offset, varying row counts

```sql
SELECT * FROM user_login_log LIMIT 10000, 100;
SELECT * FROM user_login_log LIMIT 10000, 1000;
SELECT * FROM user_login_log LIMIT 10000, 10000;
SELECT * FROM user_login_log LIMIT 10000, 100000;
SELECT * FROM user_login_log LIMIT 10000, 1000000;
```

> Conclusion from the results: the more rows fetched, the longer the query takes.

![image-20240325113524726](/images/MySQL/image-20240325113524726.png)
#### Optimization

List the columns you need explicitly; avoid SELECT *, which adds work for the MySQL optimizer.

```sql
-- Avoid SELECT *
SELECT user_id, ip, attr1, attr2, attr3, attr4, attr5, attr6, attr7, attr8, attr9, attr10 FROM user_login_log LIMIT 10000, 100000;
```

Fetch only the columns you need, reducing network IO.

```sql
-- Fetch only the needed columns
SELECT id FROM user_login_log LIMIT 10000, 100000;
SELECT user_id FROM user_login_log LIMIT 10000, 100000;
```

Keep the queried columns covered by an index, so a secondary index can serve the query.

```sql
-- Covering index
ALTER TABLE user_login_log ADD INDEX idx_user_id (user_id);
SELECT user_id FROM user_login_log LIMIT 10000, 100000;

ALTER TABLE user_login_log DROP INDEX idx_user_id;
```

For large row counts, we can optimize as follows:

- Fetch only the columns you need, reducing network IO

- Avoid SELECT *, reducing work for the MySQL optimizer

- Keep the queried columns covered by an index wherever possible

- Cache data in a NoSQL store to take pressure off MySQL
### Same row count, varying offsets

```sql
SELECT * FROM user_login_log LIMIT 100, 100;
SELECT * FROM user_login_log LIMIT 1000, 100;
SELECT * FROM user_login_log LIMIT 10000, 100;
SELECT * FROM user_login_log LIMIT 100000, 100;
SELECT * FROM user_login_log LIMIT 1000000, 100;
```

> Conclusion from the results: the larger the offset, the longer the query takes.

![image-20240325113640086](/images/MySQL/image-20240325113640086.png)

#### Optimization

**The large-row-count optimizations above apply to large offsets too. In addition, the offset can be replaced with an ID-bounded condition to speed up the query.**

```sql
-- Add an indexed WHERE condition to shrink the scanned range
SELECT * FROM user_login_log WHERE id > 1000000 LIMIT 100;
```

For large offsets, we can optimize as follows:

- Add a WHERE condition on an indexed column to cut the number of scanned rows, then let LIMIT select from the remainder
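A related pattern, not shown above, is the deferred join: page through the primary key in a covering subquery, then join back to fetch the full rows. A minimal sketch against the same table:

```sql
-- Deferred join: locate the page by id first (index-only scan),
-- then fetch the full rows for just those 100 ids.
SELECT t.*
FROM user_login_log t
JOIN (
    SELECT id
    FROM user_login_log
    ORDER BY id
    LIMIT 1000000, 100
) page ON t.id = page.id;
```

The expensive offset scan now touches only the primary-key index instead of full rows, which is why this usually beats the plain `LIMIT 1000000, 100` form.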
---
sidebarDepth: 3
sidebar: auto
prev:
  text: Back To Contents
  link: /MySQL/
typora-root-url: ..\.vuepress\public
---
> See the Locks chapter for details.

MySQL can deadlock under concurrency. A deadlock occurs when two or more transactions each wait for the other to release a resource, so none of them can proceed.

Common ways to address deadlocks:

1. **Adjust the transaction isolation level:** lowering the isolation level to read uncommitted (or read committed) can reduce the probability of deadlocks. Note that lower isolation can introduce dirty reads, non-repeatable reads, and other consistency problems, so the trade-offs must be weighed.
2. **Optimize queries and transaction logic:** analyze what causes the deadlocks, then restructure the queries and transactions so locks are held as briefly as possible. For example, acquire locks in a consistent order and avoid circular dependencies across transactions.
3. **Use row-level locks:** row-level locks narrow the scope of locking and thus reduce the chance of deadlocks. Moving from table-level to row-level lock granularity reduces conflicts between transactions.
4. **Set sensible timeouts and retry logic:** bound how long a transaction waits for a lock. If the wait exceeds the timeout, handle it as a deadlock, e.g. abort or roll back one of the transactions and retry.

Analyze the specific workload, apply the appropriate fix, and test and verify it to confirm the deadlocks are resolved and concurrency improves.
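For diagnosing an actual deadlock, MySQL exposes the most recent one directly; a minimal sketch (these are stock MySQL/InnoDB commands):

```sql
-- Inspect the most recent deadlock: see the
-- "LATEST DETECTED DEADLOCK" section of the output.
SHOW ENGINE INNODB STATUS;

-- Bound how long a transaction waits for a row lock before giving up.
SET GLOBAL innodb_lock_wait_timeout = 10; -- seconds
```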
---
sidebarDepth: 3
sidebar: auto
prev:
  text: Back To Contents
  link: /MySQL/
typora-root-url: ..\.vuepress\public
---
1. **Good balance:** a B+ tree is self-balancing; insertions, deletions, and lookups all keep it close to balanced, so target data can be located quickly and query efficiency stays high.
2. **Sequential access:** all leaf nodes are ordered by index key, which makes range queries and sequential scans very efficient; adjacent keys are usually adjacent on disk as well, so disk read-ahead improves IO efficiency.
3. **Storage efficiency:** B+ tree nodes are typically larger than those of other tree structures, which reduces the number of disk IO operations. Internal nodes store only index-column values, not row data, which further shrinks the index.
4. **High concurrency:** the structure supports concurrent reads and writes well. With suitable locking or transaction isolation levels, many concurrent queries and updates can proceed without serious blocking or conflicts.
5. **Easy to extend and maintain:** the structure is relatively simple. Inserting or deleting data only adjusts a few nodes along one path rather than rebuilding the whole tree, which keeps maintenance cheap and the index performant.
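As an illustration of point 2, a range query over an indexed column exploits the leaf-level ordering. A hedged sketch against a hypothetical orders table (table and column names are illustrative, not from the text above):

```sql
-- Hypothetical table; the index is a B+ tree ordered by created_at.
CREATE INDEX idx_created_at ON orders (created_at);

-- EXPLAIN should report a range scan on idx_created_at:
-- the leaf entries for January sit contiguously, so the scan is sequential.
EXPLAIN SELECT * FROM orders
WHERE created_at BETWEEN '2024-01-01' AND '2024-01-31';
```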
---
sidebarDepth: 3
sidebar: auto
prev:
  text: Back To Contents
  link: /MySQL/
typora-root-url: ..\.vuepress\public
---
MySQL's binlog has three formats: **Statement**, **Row**, and **Mixed**.

### **Statement format:**

- Records the SQL statements themselves in the binlog.
- The binlog stores the SQL executed on the master; slaves parse and execute the same SQL to replicate.
- Simple, readable, and space-efficient.
- However, in some cases, e.g. due to execution plans or non-deterministic functions, the same SQL can produce different results on master and slave, causing replication errors.

### **Row format:**

- Records the change made to every modified row.
- Instead of SQL statements, it stores each row's change, i.e. the values before and after an insert, delete, or update.
- Replication is exact, unaffected by differences in SQL execution results, and works in every case.
- However, it uses more storage than Statement format.

### Mixed format:

- A combination of Statement and Row: MySQL automatically picks the appropriate format.
- Statement format is used most of the time, but for statements that cannot be replicated safely, such as those using non-deterministic functions or triggers, MySQL switches to Row format.
- Combines the strengths of both: less storage than pure Row, with Row's replication accuracy where it matters.
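To see which format a server uses, and to switch it, MySQL exposes the `binlog_format` system variable (a sketch; changing it on a live server should be done with care):

```sql
-- Inspect the current binlog format.
SHOW VARIABLES LIKE 'binlog_format';

-- Switch the global format; valid values are STATEMENT, ROW, MIXED.
SET GLOBAL binlog_format = 'ROW';
```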
---
sidebarDepth: 3
sidebar: auto
prev:
  text: Back To Contents
  link: /MySQL/
typora-root-url: ..\.vuepress\public
---
## Table splitting

- **Horizontal splitting:** also known as sharding, this splits the records of one table across multiple tables with identical structure. Once a table keeps growing, sharding becomes the inevitable choice: it spreads the data across different cluster nodes and relieves the pressure on any single database.
- **Vertical splitting:** this splits a table by columns into several tables, usually grouping closely related columns together; it can also separate frequently used columns from rarely used ones.

## Challenges of sharding

1. **Data consistency:** because data is spread across databases and tables, sharding involves cross-node transactions, and consistency must be guaranteed. Two-phase commit (2PC), eventual-consistency schemes, or distributed-transaction tooling can manage these transactions and keep data consistent.
2. **Cross-shard queries:** queries spanning multiple shards can run into performance problems and complex query logic. Distributed query engines, data aggregation, caching, and distributed compute frameworks can serve cross-shard queries efficiently and simplify the logic.
3. **Global uniqueness constraints:** global uniqueness is hard to enforce across shards. A distributed unique-ID generator (such as the Snowflake algorithm) can produce globally unique IDs and avoid collisions.
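As a sketch of horizontal routing (the 4-shard layout and table naming are hypothetical, not taken from the text above):

```sql
-- Compute which of 4 shard tables a given user's rows live in.
SELECT CRC32('user_12345') % 4 AS shard_no;
-- The application then reads/writes the table t_<shard_no>,
-- e.g. t_0 .. t_3, all sharing one schema.
```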
---
sidebarDepth: 3
sidebar: auto
prev:
  text: Back To Contents
  link: /MySQL/
typora-root-url: ..\.vuepress\public
---
1. **Optimistic locking:** add a version number (or timestamp) column to the table and check it on every update. When several concurrent requests modify the same row, only one update succeeds; the others must re-check whether the data changed. If it did not, they can retry the update; if it did, they trigger retry or other conflict-handling logic.
2. **Pessimistic locking:** before reading, lock the rows to be modified using the database's lock mechanism, e.g. SELECT ... FOR UPDATE. Other concurrent requests reading the same rows block until the lock is released. This guarantees only one request modifies the data at a time, at the cost of concurrency.
3. **Distributed locks:** use a distributed lock service, such as Redis SETNX or a ZooKeeper ephemeral node, for mutually exclusive access to a row. Acquire the lock before modifying the data and release it afterwards; requests that fail to acquire it wait or run their conflict-handling logic.
4. **Transactions:** wrap the modifications to a row in a database transaction. The database resolves concurrent-update conflicts by locking the affected rows, serializing the conflicting requests so that each one reads and modifies the data correctly and consistently.
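A minimal sketch of option 1, optimistic locking (the `account` table and its columns are hypothetical, introduced only for illustration):

```sql
-- The update succeeds only if the version we previously read (7) is still current.
UPDATE account
SET balance = balance - 100,
    version = version + 1
WHERE id = 1 AND version = 7;

-- If ROW_COUNT() = 0, another transaction won the race:
-- re-read the row and retry with the new version.
SELECT ROW_COUNT();
```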