Additional "post-false" reads needed with non-default CommandBehavior #2472

Open
mgravell opened this issue Apr 26, 2024 · 16 comments

@mgravell
Member

mgravell commented Apr 26, 2024

context: investigating DapperLib/Dapper#2077, I put together this test, which uses raw ADO.NET (no Dapper) with SingleResult, SingleRow and "both". I am familiar with the problem of trailing faults in the TDS flow, hence Dapper always reads to the end, but this test shows that extra row reads after reader.Read() has reported false are necessary to observe the exceptions, with the number of extra Read() calls being query-dependent. The coded example needs at least 2 extra reads, but the OP has reported higher numbers after changing the SQL.

Having to perform extra reads after seeing reader.Read() report false feels like a bug.

This behaviour is currently consistent between System.Data.SqlClient and Microsoft.Data.SqlClient

to be explicit, to see the exception in the example shown, we need to:

  1. we call Read() to test for the expected row, get true
  2. we call Read() to test for unexpected rows, get false
  3. we call Read() again "just because", get false
  4. we call Read() once more, boom: SqlException

it seems like this SqlException should have happened before Read() returned false at step 2, or should not have occurred at all; but needing to call Read() an arbitrary and unpredictable number of extra times is... not good.

risk: faults go unobserved, transactions get incorrectly committed, human sacrifice, dogs and cats living together, MASS HYSTERIA

simplified repro (note: using reader.NextResult() does not help):

[Fact]
public void ManualADONETSimplified()
{
    // note test shows *undesirable* outcome as "pass"
    using var conn = GetOpenConnection();
    using var cmd = conn.CreateCommand();
    cmd.CommandText = SQL;
    using var reader = cmd.ExecuteReader(CommandBehavior.SingleResult);

    Assert.True(reader.Read()); // 1. expected data
    Assert.False(reader.Read()); // 2. check for unexpected data
    Assert.False(reader.Read()); // 3. just because
    Assert.ThrowsAny<DbException>(() => reader.Read()); // 4. boom
}

// from https://github.com/DapperLib/Dapper/issues/2077
const string SQL = """
-- MOVE the 'SELECT 7;'-Statement here, to increase the required while(read()) calls by one.

BEGIN TRANSACTION
BEGIN TRY
    SELECT 7; --SELECT any value INSIDE TRANSACTION

    --FORCE Error
    DECLARE @intvar INT
    SET @intvar = 'A'
COMMIT TRANSACTION
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION
    ;THROW 50000, 'ERROR in Transaction', 1;
END CATCH
""";
JRahnama added the 🆕 Triage Needed label (for new issues, not triaged yet) on Apr 26, 2024
kf-gonzalez2 added the ✔️ Repro Available label (issues that are reproducible with repro provided) and removed the 🆕 Triage Needed label on Apr 30, 2024
@David-Engel
Contributor

I'm curious how other providers behave in this scenario, particularly on subsequent Read() calls after Read() has returned false.

In SqlClient, if you specify CommandBehavior.SingleResult, any data after the first result set is drained and the results/errors are discarded/ignored. The draining happens on NextResult() or Close() of the reader. So the Read() behavior after false is a bit strange. That behavior doesn't seem well defined in the ADO.NET API, and I suspect you may experience different results in different data providers. We should probably make the experience consistently return false every time; I don't think it should be advancing to subsequent results.

If you want to only return the first result of a statement but still return errors from subsequent result sets, don't use CommandBehavior.SingleResult. Read through all results and result sets and decide what to return or discard, including how to surface multiple potential errors from subsequent result sets.
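
As a rough sketch of that "read everything" approach (reusing the GetOpenConnection() helper and SQL constant from the repro in the top post; this just illustrates the pattern described above, it isn't an official recommendation):

using var conn = GetOpenConnection();
using var cmd = conn.CreateCommand();
cmd.CommandText = SQL;

// No CommandBehavior.SingleResult: consume every row of every result set, so
// trailing errors can surface as the reader advances.
using var reader = cmd.ExecuteReader();
do
{
    while (reader.Read())
    {
        // keep or discard rows as appropriate
    }
}
while (reader.NextResult());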

@mgravell
Member Author

mgravell commented May 2, 2024

There's only one result in the query shown. I think what's particularly weird in this example is that the exception manifests if we call read some additional dummy times. If it didn't manifest at all, it would be vexing but at least comprehensible. Always manifesting the exception (i.e. taking note of faults while scanning over the trailing TDS) would IMO be preferable, of course, but "manifests if you call read an extra few times for no reason" is just... odd.

@David-Engel
Contributor

There's only one result in the query shown.

That's not entirely true. If there were zero subsequent errors, sure, the entire batch would only produce a single result set. But SELECT 7; is a result set in itself. There are no errors within that result set. The error thrown by SET @intvar = 'A' is separate, and the server sends it as part of a subsequent result set. SqlClient will throw the error on the call to NextResult().

I do agree that the subsequent Read() behavior is strange. But applications should not continue calling Read() after a Read() returns false. So that behavior isn't high priority to fix.
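
For example, a minimal sketch of that sequencing against the repro SQL from the top post, with no CommandBehavior flags (the call on which the error surfaces follows the description above; treat it as illustrative rather than verified across driver versions):

using var conn = GetOpenConnection();
using var cmd = conn.CreateCommand();
cmd.CommandText = SQL;
using var reader = cmd.ExecuteReader(); // default behavior, no SingleResult

Assert.True(reader.Read());   // result set 1: the row from SELECT 7
Assert.False(reader.Read());  // end of result set 1
Assert.ThrowsAny<DbException>(() => reader.NextResult()); // the error from the failed SET / THROW surfaces here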

@roji
Member

roji commented May 3, 2024

I'm of course not super familiar with TDS, but here are some thoughts from the Npgsql perspective... We had a somewhat related discussion in npgsql/npgsql#4377. The basic principle we arrived at was that Npgsql shouldn't swallow exceptions - these should be made visible to the user. One motivation is that there may be SQL statements that don't produce a resultset (e.g. INSERT/UPDATE, or SET as above); the user absolutely wants to be notified of failure in those cases, and DbDataReader is the API through which those exceptions can be made visible. So we decided that the least bad option here was to throw exceptions from Dispose, given that all DbDataReaders must eventually be disposed by the user (so it's a point where errors are bound to become observable).

(Yes, throwing from Dispose indeed isn't ideal, but we still thought this was the least bad option (compared to e.g. silently swallowing), plus there's ample prior art for throwing from Dispose/DisposeAsync, especially where that can cause I/O (e.g. FileStream))

From that perspective, the fact that an additional Read throws - after a prior Read returned false - doesn't necessarily seem like a huge problem to me; the main question for me is whether, in that scenario, calling Dispose (and/or NextResult) instead of Read after Read returns false would also throw the same exception. If so (@David-Engel you seem to indicate that yes, at least for NextResult), I'm not sure I see a big problem.

@vonzshik may have more to contribute here, plus I'd definitely be interested to hear your thoughts @David-Engel @mgravell.

@mgravell
Member Author

mgravell commented May 3, 2024

From that perspective, the fact that an additional Read throws - after a prior Read returned false - doesn't necessarily seem like a huge problem to me

But how would you know to do this, and how many extra reads do you need? I agree the key thing here is "don't swallow exceptions", so what I'm trying to get to here is:

what is the expected / normal / correct API usage that leads to getting the data and exceptions that occur?

Calling Read() an unknown / arbitrary number of random times seems an untenable usage pattern, so: what is the expected usage? From there, we can sensibly ask: "does SqlClient work correctly for that usage?"

For me, the expected usage is, roughly:

using var reader = cmd.ExecuteReader(...); // possibly async
do
{
    if (reader.Read())
    {
        // non-empty result set
        // not shown: inspect column metadata once only per grid
        do
        {
            // process the current row in the current grid using the column metadata
        }
        while (reader.Read()); // process each row in the current grid
    }
}
while (reader.NextResult()); // make sure we process each grid

This is generalized; if we don't actively expect additional grids we might do something more like:

using var reader = cmd.ExecuteReader(...); // possibly async
if (reader.Read())
{
    // non-empty result set
    // not shown: inspect column metadata once only per grid
    do
    {
        // process the current row in the current grid using the column metadata
    }
    while (reader.Read()); // process each row in the primary grid
}
// first grid has been fully consumed; blitz over remaining grids
while (reader.NextResult())
{
    while (reader.Read())
    {}
}

And with either of those, SqlClient currently risks silently missing exceptions unless we tack a random number of Read() calls onto the end of the do loop. So, I conclude either:

  1. the above is not a correct statement of the expected use-case, in which case: what is?, or
  2. SqlClient has an incorrect implementation

@roji
Member

roji commented May 3, 2024

But how would you know to do this, and how many extra reads do you need?

When you're done with the reader, you just dispose it (or when you're done with a resultset, you call NextResult), and should be guaranteed to get any exceptions, no? Why would you need to do extra reads at that point?

For the code you pasted above, the moment you get false from Read, you execute NextResult, right? If that's guaranteed to throw any exceptions (I think @David-Engel is saying it is), then isn't that sufficient?

@vonzshik

vonzshik commented May 3, 2024

And with either of those; SqlClient currently risks silently missing exceptions unless we tack a random number of Read() calls after the do loop. So, I conclude either:

  1. the above is not a correct statement of the expected use-case, in which case: what is?, or
  2. SqlClient has an incorrect implementation

IMO it's quite clear that SqlClient has an incorrect implementation, at the very least concerning Read. There is absolutely no reason why Read should ever be able to go outside of the current result set, no matter how many times it's called.

For the code you pasted above, the moment you get false from Read, you execute NextResult right? If that's guaranteed to throw any exceptions (I think @David-Engel is saying it is), then isn't that sufficient?

Nope, NextResult isn't going to throw as long as CommandBehavior.SingleResult is specified, because SqlClient is just going to ignore every single error after the first result set.
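
A sketch of the behaviour being described, reusing the repro from the top post (the asserts encode the claim above about current SqlClient behaviour, not the desired outcome):

using var reader = cmd.ExecuteReader(CommandBehavior.SingleResult);

Assert.True(reader.Read());        // the row from SELECT 7
Assert.False(reader.Read());       // end of the single result set we asked for
Assert.False(reader.NextResult()); // per the claim above: no further result sets, and no exception
reader.Close();                    // trailing data and errors are drained and discarded silently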

@roji
Member

roji commented May 3, 2024

IMO it's quite clear that SqlClient has an incorrect implementation, at the very least concerning Read. There is absolutely no reason why Read should ever be able to go outside of the current result set, no matter how many times it's called.

I agree (and @David-Engel also wrote that he finds this strange) - but at the same time applications shouldn't call Read() after another Read() has returned false. So yes, I do think this is something that should ideally be fixed (by Read() immediately returning false after another false-returning Read()), but it seems like a very low-value "theoretical correctness" fix.

Nope, NextResult isn't going to throw as long as CommandBehavior.SingleResult is specified because SqlClient it just going to ignore every single error after the first result set.

OK, that's what I was missing here, thanks. So yes - I agree that regardless of SingleResult/SingleRow/anything else, both NextResult() and Dispose() should always throw any errors communicated by SQL Server. This seems much more important to me than the first Read() question.

@mgravell
Member Author

mgravell commented May 3, 2024

Personally I'd argue that Dispose() is a tricky one - since you could already be exiting due to a fault, throwing in Dispose() can mask one fault with another (although, looking at it another way: if two independent failures happened, I guess it isn't really correct to say either is automatically better); personally I'd be fine with it just throwing when using NextResult() to exhaust the result grids. It is a subjective one, though, and yes: several things do throw in Dispose() - WCF used to do that, and it was a constant source of pain :p

The Read() question: agree, that is not critical; I'm far more interested in the "getting the exceptions when used naturally" part of the question here. The only reason I even mentioned it was that the OP on the Dapper question observed that one "fix" was to add random Read() calls. I'm not saying this is a good thing, and yes: I agree that in a reasonable world: Read() would never jump into later grids.

@vonzshik

vonzshik commented May 3, 2024

Speaking of throwing from Dispose: the primary reason we added that to Npgsql is that, with a RETURNING clause, postgres might return the value first and an error afterwards. So users were reading the first row and immediately disposing the reader, in which case they had no idea that that particular query had actually failed and there was no insert. Can't say for certain whether it's possible to repro a similar problem in SqlClient, but throwing from Dispose any errors received from SQL Server seems much better than giving users the impression that there is no problem.

@mgravell
Member Author

mgravell commented May 3, 2024

If we look at the example in the question: that's pretty much exactly the scenario (although the insert/update etc isn't shown) - so yes, it is definitely possible

@roji
Member

roji commented May 3, 2024

@mgravell yeah, throwing from Dispose is indeed tricky, and indeed previous versions of Npgsql swallowed exceptions when they occurred during disposal. But as @vonzshik writes, between that behavior (where the user isn't made aware) and throwing from Dispose (which can cause the original exception to be masked), throwing from Dispose still seems like the better alternative. It's true that users can call NextResult() - or even DbDataReader.Close() - to get those exceptions, at which point Dispose is guaranteed not to throw (that's probably what I'd do); but the point is that most users can't be expected to be aware of the nuances, and there's lots of code out there which just reads a row and then disposes, and we don't want exceptions to be lost in those cases.

@David-Engel I get the point about SingleResult not being suitable when there are multiple statements - though I understand this option to be about results; in other words, it could make sense to use this option with multiple statements, as long as only one of them produces a resultset. In that case, it still seems very important to surface errors from statements after the resultset-producing one.

I also wonder how things work regardless of SingleResult. In other words, if I do a regular ExecuteReader (without SingleResult) with multiple statements, and call Dispose() immediately after reading the first row of the first resultset, am I guaranteed to see exceptions for errors from later statements?

@David-Engel
Contributor

I get the point about SingleResult not being suitable when there are multiple statements - though I understand this option to be about results; in other words, it could make sense to use this option with multiple statements, as long as only one of them produces a resultset. In that case, it still seems very important to surface errors from statements after the resultset-producing one.

@roji We've always surfaced errors in the context of result sets. I do understand and agree that it's bad to ignore errors. However, I'm very hesitant to have Dispose() throw. I think that will have a sizable impact. We've fixed issues in the past where Dispose() was throwing, counter to expected behavior/convention. The most recent one was brought to us by the SQL tools team.

I also wonder how things work regardless of SingleResult.. In other words, if I do a regular ExecuteReader (without SingleResult) with multiple statements, and call Dispose() immediately after reading the first row of the first resultset, am I guaranteed to see exceptions for errors from later statements?

No. Even without SingleResult, if you don't move on to subsequent results, SqlClient will not throw any errors. You must consume all results to see results and/or errors associated with them. This behavior is consistent across all the SQL drivers, as far as I know. It is a common question that pops up. We even called it out specifically in the JDBC docs. We get 1-2 JDBC issues a year from people who don't understand the multiple result set concept.

Here's another blog about it: The Curious Case of Undetected SQL Exceptions

Even System.Data.Odbc behaves this way. If you don't call NextResult(), you won't see the error. I do see a small behavior difference between SqlClient and Odbc, though. Using the repro T-SQL, System.Data.Odbc will throw on NextResult(), even when SingleResult is specified, whereas SqlClient won't throw if you call NextResult() when SingleResult is specified.

I think if your application might be running queries that could produce multiple results or errors that aren't directly associated with one result, you should ensure that you are reading everything appropriately. This means calling NextResult() until it returns false. The SingleResult option feels dangerous to me, in general, and I would only use it if I was sure I didn't care about anything after the first result.
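
As a concrete sketch of that defensive "consume everything" pattern (the Drain helper name and shape are illustrative only, not an existing API, and per the discussion above it still won't surface trailing errors when SingleResult is specified):

static void Drain(System.Data.Common.DbDataReader reader)
{
    do
    {
        while (reader.Read())
        {
            // rows are either already handled by the caller or simply discarded here
        }
    }
    while (reader.NextResult()); // keep advancing until NextResult() returns false so trailing errors can surface
}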

@mgravell
Member Author

mgravell commented May 3, 2024

This means calling NextResult() until it returns false.

The longer version of the test (second link in the top post here) does read all the grids. The point is that this doesn't surface the exceptions. That feels unexpected. My expectation of SingleResult / SingleRow is that they optimize away row data; it is surprising that they also silently discard exceptions. If that is the accepted behaviour, then I must conclude that Dapper cannot ever use those hints as an optimization, even when I know from context that I'm expecting a single result grid or a single row. I can make that change, but it feels ... dirty and sad :(

@David-Engel
Contributor

@mgravell I'm not opposed to changing the way SingleResult/SingleRow behaves here. I think it might be useful for NextResult() to throw errors, while silently discarding row data. We should also consider what to do with warning/info messages in the same scenario. I haven't looked into those.

@roji
Member

roji commented May 5, 2024

However, I'm very hesitant to have Dispose() throw. I think that will have a sizable impact. We've fixed issues in the past where Dispose() was throwing, counter to expected behavior/convention. The most recent one was brought to us by the SQL tools team.

I also wonder how things work regardless of SingleResult. In other words, if I do a regular ExecuteReader (without SingleResult) with multiple statements, and call Dispose() immediately after reading the first row of the first resultset, am I guaranteed to see exceptions for errors from later statements?

No. Even without SingleResult, if you don't move on to subsequent results, SqlClient will not throw any errors. You must consume all results to see results and/or errors associated with them. [...]

Understood.

For what it's worth, we had this exact conversation on the Npgsql side (npgsql/npgsql#4377), and ended up deciding the other way. In other words, there definitely are disadvantages to throwing from Dispose (e.g. masking the original exception as @mgravell mentioned). And I understand that some of your consumers (e.g. SQL tools team) may prefer not to have to deal with exceptions from Dispose. But at the end of the day, all that seems to me to be less important than the silent swallowing of exceptions, which is the current behavior for any code not explicitly calling NextResult(). It's a very tricky and undiscoverable requirement to require users to call NextResult() in this way just to get possible exceptions.

(Of course, that's orthogonal to the two other issues discussed above, which are about fixing SingleResult/SingleRow specifically, and ensuring that Read() never goes beyond the current resultset.)
