#+OPTIONS: ^:{}
* Slony-I 2.2 Release Notes
** Significant Changes
- Shared libraries are now named so as to allow multiple versions of
  Slony to be installed simultaneously on a database cluster
- Bug 287 :: Fix UPDATE FUNCTIONS so that it works when the old
  shared library is no longer around
- The slony1_funcs.so and .sql files that get installed are also
  versioned, which allows multiple versions of Slony to be installed
  in the same PostgreSQL lib directory
- Allow linking with pgport on 9.3+ by including pgcommon
- Let the DDL for EXECUTE SCRIPT be specified inline - bug 151 (see
  the slonik sketch following this list)
- Numerous revisions to documentation
- altperl tool improvements, including:
- better support for running multiple slons on the same server
- let start_slon invoke slon with a config file
- fix node_is_subscribing
- $LOGDIR directory behaviours
- use PGPASSWORDFILE when querying state
- Add correct 'ps' arguments to the altperl tools for Darwin.
- SYNC GROUP sizes now dynamically grow from 1 to a maximum size set
  in the config file; the old logic, based on the time it took to
  complete the last SYNC, has been removed - bug #235
- Added a RESUBSCRIBE NODE command to reshape clusters. It must be
  used instead of SUBSCRIBE SET to reshape subscriptions when an
  origin has more than one set (see the slonik sketch following this
  list).
- Bug #178 :: The FAILOVER process has been reworked to be more
reliable. Some nodes can no longer be used as
failover targets.
- SET ID is no longer an option to EXECUTE SCRIPT.
- Major "protocol" change; rather than constructing cursors to
query logged updates, version 2.2 now uses the COPY protocol to
stream data into log tables on subscribers. This is discussed in
greater detail in the following section.
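As referenced above, here is a hedged slonik sketch of EXECUTE SCRIPT
as revised in 2.2. The cluster name, conninfo, node id, and file path
are illustrative placeholders; only the FILENAME form is shown, not
the inline-DDL form added by bug 151. Note the absence of the
now-removed SET ID option.
#+BEGIN_SRC
# Hedged sketch; cluster name, conninfo, node id, and path are
# illustrative, not taken from any real configuration.
cluster name = mycluster;
node 1 admin conninfo = 'dbname=mydb host=origin user=slony';

# In 2.2 the script is tied to an event node, not to a set,
# so there is no SET ID option.
EXECUTE SCRIPT (
    FILENAME = '/tmp/add_column.sql',
    EVENT NODE = 1
);
#+END_SRC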
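Similarly, a hedged sketch of the new RESUBSCRIBE NODE command; node
numbers are again illustrative. Because it reshapes the receiver's
subscriptions for every set from the given origin at once, it avoids
the per-set ambiguity that makes SUBSCRIBE SET unsuitable for
reshaping multi-set origins.
#+BEGIN_SRC
# Hedged sketch; node ids are illustrative.
cluster name = mycluster;
node 1 admin conninfo = 'dbname=mydb host=origin user=slony';

# Have node 3 receive all sets originating on node 1
# from provider node 2.
RESUBSCRIBE NODE (
    ORIGIN = 1,
    PROVIDER = 2,
    RECEIVER = 3
);
#+END_SRC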
** Major 2.2 change: COPY protocol
In versions before 2.2, the Slony-I log triggers captured logged
data into tables ~sl_log_1~ and ~sl_log_2~ in the form of
nearly-"cooked" query fragments, omitting just the leading
INSERT/UPDATE/DELETE keyword and the table name; the slon processing
the data would transform each entry into a complete
INSERT/UPDATE/DELETE statement to be individually parsed and
executed by the subscriber node.
In version 2.2, this changes substantially.
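As a schematic illustration of the capture change detailed in the
list below (the column names ~log_cmddata~ and ~log_cmdargs~ match
the behaviour described here, but the sketch is not the exact
log-table layout):
#+BEGIN_SRC sql
-- Schematic sketch only; not the exact sl_log_1 column layout.
-- Given this statement on the origin:
UPDATE some_table SET c2 = 'x' WHERE c1 = 'b';

-- Pre-2.2 capture: a nearly-cooked text fragment, roughly
--   log_cmddata = 'c2=''x'' where c1=''b'''
-- which slon expanded into a full UPDATE for the subscriber.

-- 2.2 capture: an array of text values holding the data and the
-- criteria, roughly
--   log_cmdargs = '{c2,x,c1,b}'
-- which a trigger on the subscriber's log table turns back into DML.
#+END_SRC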
- Log triggers
- Log triggers used to capture the text of the SQL statement to
perform the INSERT/UPDATE/DELETE, missing only the literal
INSERT (or UPDATE or DELETE) and the name of the table.
- As of version 2.2, the log triggers now capture an array of
text values indicating the criteria and the data to be loaded.
- Query against data provider
- The old query process involved declaring a cursor called LOG,
  which would pull in all the tuples where the data attribute was
  not "too terribly large." Tuples would be pulled in a few hundred
  at a time, with unusually large ones handled via a special side
  process in order to avoid excessive memory consumption.
- In version 2.2, the query instead runs a COPY to standard output,
  leaving a pipe open to capture that output for transmission to the
  subscriber (see the COPY sketch following this list).
- Loading data into subscriber
- The old query process involved explicitly running each INSERT,
UPDATE, DELETE, and TRUNCATE against the subscriber. This
required that the backend parse, plan, and process each query
individually.
- In version 2.2, the stream of data captured from the COPY
request is, in turn, passed to a COPY statement that loads the
data into sl_log_1 or sl_log_2 on the subscriber node.
A trigger on sl_log_1 and sl_log_2 reads the incoming data
and, inside the backend, transforms the array into
INSERT/UPDATE/DELETE statements.
This is still an oversimplification; there is a statement cache, so
that if a request to ~INSERT INTO some_table (c1,c2) values
('b','c');~ is followed (perhaps, but not necessarily immediately)
by ~INSERT INTO some_table (c1,c2) values ('d','e');~, the second
request re-uses the plan from the first and may instead execute
~EXECUTE some_table_insert('d','e');~ (see the prepared-statement
sketch following this list).
- DDL handling
- The old behaviour was that DDL was treated as a special Slony
  event, and it was never clear whether it should be executed before
  or after other replication activity that might have taken place
  during that Slony event. This was fine in Slony 1.2 and earlier,
  when processing DDL forcibly required that Slony take out locks on
  ALL replicated tables on the origin; those locks prevented any
  other activity from going on during the processing of DDL. But
  when much of that locking disappeared, in version 2.0, it became
  possible for there to be replication activity on tables not being
  modified by the DDL, and timing anomalies could occur.
- In version 2.2, a third log table, sl_log_script, is added to
  capture script queries, which notably include DDL. This table uses
  the same sequence value that controls the order of application for
  replicated data, so DDL will be applied at exactly the appropriate
  point in the transaction stream on subscribers (see the ordering
  sketch following this list).
- Note that this rectifies bug #137, *execute script does not
get applied in the correct order*
- Sequence handling changes somewhat (see Bug 304).
  Previously, when DDL was processed as "an event," the sequence
  values visible to it were naturally those captured at the previous
  SYNC, with fresh values captured at the end of the DDL event, and
  the "DDL Sync" could make use of those values.
  Now, with DDL being processed along with ordinary replicated DML
  within a SYNC, sequence usage by DML needs to be captured and
  mixed into the updates: each time a DDL statement is applied,
  sequences need to be updated, which can now take place in the
  middle of a SYNC.
- Expected effects
- Lower processing overhead in slon process :: By using COPY,
all that need be done is for slon to "stream" data from the
provider to the subscriber. There is no longer a need to
generate or process individual queries.
- Lower memory consumption in slon process :: Again owing to COPY,
  memory consumption is based solely on how much data is buffered in
  memory as part of the streaming. An entry larger than the buffer
  is simply handled by letting the data stream through the buffer;
  the large entry does not need to be instantiated in memory in
  full, as large queries were in older versions of Slony.
- Less query processing work on subscriber database :: The use
of prepared statements should dramatically reduce the
amount of computational effort required for query parsing
and planning.
- Correct ordering of DDL within processing :: Bug #137
described a problem persisting from version 2.0 to 2.1
where DDL could be performed in the wrong order with
respect to logged data changes. The shift of "script" data
from ~sl_event~ to ~sl_log_script~ resolves this issue.
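The COPY streaming described above can be sketched as the following
pair of statements. This is a conceptual sketch, not the exact query
slon issues: the cluster schema name ~_mycluster~ and the filter on
~log_origin~ are illustrative placeholders.
#+BEGIN_SRC sql
-- On the data provider: stream the relevant log rows out.
-- (Schema name and filter are illustrative.)
COPY (
    SELECT *
      FROM _mycluster.sl_log_1
     WHERE log_origin = 1   -- rows that originated on node 1
) TO STDOUT;

-- On the subscriber: the captured stream is fed straight back into
-- the corresponding log table, where the apply trigger fires.
COPY _mycluster.sl_log_1 FROM STDIN;
#+END_SRC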
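The statement cache can likewise be illustrated at the SQL level,
reusing the notes' own ~some_table_insert~ example. Inside the
backend this actually happens through the server's prepared-plan
machinery rather than literal SQL-level PREPARE; the sketch just
shows the effect.
#+BEGIN_SRC sql
-- First row seen for some_table: parse and plan once.
PREPARE some_table_insert (text, text) AS
    INSERT INTO some_table (c1, c2) VALUES ($1, $2);
EXECUTE some_table_insert ('b', 'c');

-- A later row for the same table skips parsing and planning and
-- simply re-executes the cached plan.
EXECUTE some_table_insert ('d', 'e');
#+END_SRC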
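Finally, a hedged sketch of the ordering guarantee that
sl_log_script provides: because script rows and data rows draw their
ordering from the same sequence, a subscriber can apply them in one
interleaved order. The column name ~log_actionseq~ is taken from the
data log tables; its use in ~sl_log_script~ is assumed here for
illustration.
#+BEGIN_SRC sql
-- Conceptual only: one total order across data and script rows.
SELECT 'dml'    AS kind, log_actionseq FROM _mycluster.sl_log_1
UNION ALL
SELECT 'script' AS kind, log_actionseq FROM _mycluster.sl_log_script
ORDER BY log_actionseq;
#+END_SRC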
** Bugs fixed in the course of the release
These represent bugs that were present in earlier releases, not
problems that were introduced and subsequently fixed within the 2.2
branch.
- No bug :: Make log_truncate() SECURITY DEFINER
- Bug 250 :: Log shipper did not report an application name - add
  setting of the application_name GUC
- Bug 252 :: cloneNodePrepare() store a valid conninfo in sl_path
- Bug 273 :: Slon could try to pull data from a provider that was
  behind. The fix is to not force the event provider to be among the
  data providers unless no other provider can be found.
- Bug 286 :: Support compiling against PG 9.3
- Bug 297 :: Make test_slony_state-dbi.pl work with PG 9.2
- Bug 299 :: Fix a bug in MOVE SET where slon might pull data from the old origin using
SYNC values for the new origin