diff --git a/v2.2.x/site/en/getstarted/prerequisite-docker.md b/v2.2.x/site/en/getstarted/prerequisite-docker.md
index c7cf86ce2..343397470 100644
--- a/v2.2.x/site/en/getstarted/prerequisite-docker.md
+++ b/v2.2.x/site/en/getstarted/prerequisite-docker.md
@@ -34,7 +34,7 @@ Before you install Milvus, check your hardware and software to see if they meet
### Additional disk requirements
-Disk performance is critical to etcd. It is highly recommended that you use local NVMe SSDs. Slower disk reponse may cause frequent cluster elections that will eventually degrade the etcd service.
+Disk performance is critical to etcd. It is highly recommended that you use local NVMe SSDs. Slower disk response may cause frequent cluster elections that will eventually degrade the etcd service.
To test if your disk is qualified, use [fio](https://github.com/axboe/fio).
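As a quick benchmark, a sequential-write test in the spirit of etcd's guidance looks like the sketch below (the directory, size, and job name are illustrative; etcd generally wants the 99th percentile of fdatasync latency under 10 ms):

```bash
# Measure fdatasync latency on the disk that will back etcd:
# write 2200 MB in 2300-byte blocks, syncing after every write.
mkdir test-data
fio --rw=write --ioengine=sync --fdatasync=1 --directory=test-data --size=2200m --bs=2300 --name=etcd-disk-test
```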
diff --git a/v2.2.x/site/en/reference/schema.md b/v2.2.x/site/en/reference/schema.md
index 95bb16cfe..1ce8b25b7 100644
--- a/v2.2.x/site/en/reference/schema.md
+++ b/v2.2.x/site/en/reference/schema.md
@@ -129,7 +129,7 @@ A collection schema is the logical definition of a collection. Usually you need
enable_dynamic_field |
Whether to enable dynamic schema or not |
- Data type: Boolean (true or false ). Optional, defaults to False For details on dynamic schema, refer to Dynamic Schema and the user guides for managing collections. |
+ Data type: Boolean (true or false). Optional, defaults to False. For details on dynamic schema, refer to Dynamic Schema and the user guides for managing collections. |
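For reference, a minimal PyMilvus sketch of a schema with this flag set (field names and dimension are illustrative):

```python
from pymilvus import CollectionSchema, FieldSchema, DataType

# Illustrative fields; enable_dynamic_field lets inserts carry undeclared keys.
schema = CollectionSchema(
    fields=[
        FieldSchema(name="book_id", dtype=DataType.INT64, is_primary=True),
        FieldSchema(name="book_intro", dtype=DataType.FLOAT_VECTOR, dim=2),
    ],
    description="Test book search",
    enable_dynamic_field=True,
)
```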
diff --git a/v2.2.x/site/en/userGuide/bulk_insert.md b/v2.2.x/site/en/userGuide/bulk_insert.md
index 5d428d017..cb57a2600 100644
--- a/v2.2.x/site/en/userGuide/bulk_insert.md
+++ b/v2.2.x/site/en/userGuide/bulk_insert.md
@@ -77,7 +77,7 @@ arr = numpy.array([json.dumps({"year": 2015, "price": 23.43}),
json.dumps({"year": 2018, "price": 15.05}),
json.dumps({"year": 2020, "price": 36.68}),
json.dumps({"year": 2019, "price": 20.14}),
- json.dumps({"year": 2021, "price": 9.36}))
+ json.dumps({"year": 2021, "price": 9.36})])
numpy.save('book_props.npy', arr)
```
@@ -85,10 +85,14 @@ numpy.save('book_props.npy', arr)
You can also add dynamic fields using NumPy files as follows. For details on dynamic fields, refer to [Dynamic Schema](dynamic_schema.md).
-```
+
+
+```python
numpy.save('$meta.npy', numpy.array([json.dumps({"x": 2}), json.dumps({"y": 8, "z": 2})]))
```
+
+
- Use the field name of each column to name the NumPy file. Do not add files named after a field that does not exist in the target collection. There should be one NumPy file for each field.
@@ -209,6 +213,7 @@ In the flavor of PyMilvus, you can use [`get_bulk_insert_state()`](https://milvu
```python
+from pymilvus import utility, BulkInsertState
task = utility.get_bulk_insert_state(task_id=task_id)
print("Task state:", task.state_name)
print("Imported files:", task.files)
@@ -329,7 +334,7 @@ The following examples demonstrate how to create NumPy files for columns of data
- ```
+ ```python
import numpy as np
data = [True, False, True, False]
dt = np.dtype('bool', (len(data)))
@@ -343,7 +348,7 @@ The following examples demonstrate how to create NumPy files for columns of data
- ```
+ ```python
import numpy as np
data = [1, 2, 3, 4]
dt = np.dtype('int8', (len(data)))
@@ -357,7 +362,7 @@ The following examples demonstrate how to create NumPy files for columns of data
- ```
+ ```python
import numpy as np
data = [1, 2, 3, 4]
dt = np.dtype('int16', (len(data)))
@@ -371,7 +376,7 @@ The following examples demonstrate how to create NumPy files for columns of data
- ```
+ ```python
import numpy as np
data = [1, 2, 3, 4]
dt = np.dtype('int32', (len(data)))
@@ -385,7 +390,7 @@ The following examples demonstrate how to create NumPy files for columns of data
- ```
+ ```python
import numpy as np
data = [1, 2, 3, 4]
dt = np.dtype('int64', (len(data)))
@@ -399,7 +404,7 @@ The following examples demonstrate how to create NumPy files for columns of data
- ```
+ ```python
import numpy as np
data = [0.1, 0.2, 0.3, 0.4]
dt = np.dtype('float32', (len(data)))
@@ -413,7 +418,7 @@ The following examples demonstrate how to create NumPy files for columns of data
- ```
+ ```python
import numpy as np
data = [0.1, 0.2, 0.3, 0.4]
dt = np.dtype('float64', (len(data)))
@@ -426,7 +431,8 @@ The following examples demonstrate how to create NumPy files for columns of data
- Create a NumPy file from a VARCHAR array
- ```
+ ```python
+ import numpy as np
data = ["a", "b", "c", "d"]
arr = np.array(data)
np.save(file_path, arr)
@@ -440,7 +446,8 @@ The following examples demonstrate how to create NumPy files for columns of data
- ```
+ ```python
+ import numpy as np
data = [
[43, 35, 124, 90],
[65, 212, 12, 57],
@@ -462,7 +469,8 @@ The following examples demonstrate how to create NumPy files for columns of data
- ```
+ ```python
+ import numpy as np
data = [
[1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8],
[2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8],
@@ -499,7 +507,7 @@ You can create multiple data-import tasks as follows
-```
+```python
task_1 = utility.do_bulk_insert(
collection_name="book",
files=["task_1/book_id.npy", "task_1/word_count.npy", "task_1/book_intro.npy", "task_1/book_props.npy"]
@@ -522,7 +530,7 @@ PyMilvus provides a utility method to wait for the index-building process to com
-```
+```python
utility.wait_for_index_building_complete(collection_name)
```
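If you prefer polling to blocking, a hedged alternative is `utility.index_building_progress()`, which reports how many rows are indexed:

```python
from pymilvus import utility

# Returns a dict along the lines of {'total_rows': ..., 'indexed_rows': ...}.
print(utility.index_building_progress(collection_name))
```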
diff --git a/v2.2.x/site/en/userGuide/create_collection.md b/v2.2.x/site/en/userGuide/create_collection.md
index da64abf01..881c7a595 100644
--- a/v2.2.x/site/en/userGuide/create_collection.md
+++ b/v2.2.x/site/en/userGuide/create_collection.md
@@ -196,7 +196,7 @@ Output:
- FieldSchema |
+ FieldSchema |
Schema of the fields within the collection to create. Refer to Schema for more information. |
N/A |
@@ -258,7 +258,7 @@ Output:
N/A |
- CollectionSchema |
+ CollectionSchema |
Schema of the collection to create. Refer to Schema for more information. |
N/A |
@@ -272,6 +272,11 @@ Output:
Description of the collection to create. |
N/A |
+
+ enable_dynamic_field |
+ Whether to enable dynamic schema or not |
+ Data type: Boolean (true or false). Optional, defaults to False. For details on dynamic schema, refer to Dynamic Schema and the user guides for managing collections. |
+
collection_name |
Name of the collection to create. |
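Putting these parameters together, a minimal sketch of creating a collection with dynamic schema enabled (collection and field names are illustrative):

```python
from pymilvus import Collection, CollectionSchema, FieldSchema, DataType

schema = CollectionSchema(
    fields=[
        FieldSchema(name="book_id", dtype=DataType.INT64, is_primary=True),
        FieldSchema(name="book_intro", dtype=DataType.FLOAT_VECTOR, dim=2),
    ],
    enable_dynamic_field=True,
)
collection = Collection(name="book", schema=schema)
```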
diff --git a/v2.2.x/site/en/userGuide/drop_index.md b/v2.2.x/site/en/userGuide/drop_index.md
index d76d2dd46..c2e5412e2 100644
--- a/v2.2.x/site/en/userGuide/drop_index.md
+++ b/v2.2.x/site/en/userGuide/drop_index.md
@@ -6,7 +6,7 @@ summary: Learn how to drop an index in Milvus.
# Drop an Index
-This topic describes how to drop an index in Milvus.
+This topic describes how to drop an index in Milvus. Before dropping an index, make sure to release it.
Dropping an index irreversibly removes all corresponding index files.
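In PyMilvus terms, a minimal sketch (the collection name is illustrative):

```python
from pymilvus import Collection

collection = Collection("book")
collection.release()     # release before dropping, as noted above
collection.drop_index()
```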
diff --git a/v2.2.x/site/en/userGuide/drop_partition.md b/v2.2.x/site/en/userGuide/drop_partition.md
index 285a30d06..0fb6e5fe8 100644
--- a/v2.2.x/site/en/userGuide/drop_partition.md
+++ b/v2.2.x/site/en/userGuide/drop_partition.md
@@ -8,13 +8,15 @@ summary: Learn how to drop a partition in Milvus.
This topic describes how to drop a partition in a specified collection.
-
-- You have to release the partition before you drop it.
-- Dropping a partition irreversibly deletes all data within it.
+
+ - You have to release the partition before you drop it.
+ - Dropping a partition irreversibly deletes all data within it.
+
+
Python
Java
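On the Python side, a hedged sketch (collection and partition names are illustrative):

```python
from pymilvus import Collection, Partition

collection = Collection("book")
Partition(collection, "novel").release()   # release first, per the caution above
collection.drop_partition("novel")
```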
diff --git a/v2.2.x/site/en/userGuide/load_partition.md b/v2.2.x/site/en/userGuide/load_partition.md
index a3e049d88..37f6d6c78 100644
--- a/v2.2.x/site/en/userGuide/load_partition.md
+++ b/v2.2.x/site/en/userGuide/load_partition.md
@@ -8,7 +8,7 @@ summary: Learn how to load a partition into memory for search or query in Milvus
This topic describes how to load a partition into memory. Loading partitions instead of the whole collection can significantly reduce memory usage. All search and query operations within Milvus are executed in memory.
-Milvus 2.1 allows users to load a partition as multiple replicas to utilize the CPU and memory resources of extra query nodes. This feature boost the overall QPS and throughput with extra hardware. It is supported on PyMilvus in current release.
+Milvus 2.1 or later allows users to load a partition as multiple replicas to utilize the CPU and memory resources of extra query nodes. This feature boosts overall QPS and throughput with extra hardware. It is supported on PyMilvus in the current release.
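With PyMilvus, the replica count is a parameter on `load()`; a hedged sketch with illustrative names and replica number:

```python
from pymilvus import Collection

collection = Collection("book")
collection.load(partition_names=["novel"], replica_number=2)
```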