Truncate Column Rename Step #93
Hello, for sure, the error I get is the following generic one:

that, if I am not mistaken, comes from this code path in the JDBC driver:

and "masks" the root cause 😞
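For what it's worth, one way to surface the masked failure is to walk the exception's cause chain before logging. A minimal sketch, assuming the generic driver error wraps the real one (the `rootCause` helper below is hypothetical, not part of the connector or the driver):

```scala
object RootCauseSketch {
  // Walk getCause until we reach the innermost throwable. The self-reference
  // guard (_ ne t) avoids infinite loops on exceptions that return themselves.
  def rootCause(t: Throwable): Throwable =
    Option(t.getCause).filter(_ ne t).map(rootCause).getOrElse(t)

  def main(args: Array[String]): Unit = {
    // Simulate the masking: a generic wrapper around the real failure.
    val root = new IllegalStateException("query string too long")
    val wrapped = new RuntimeException("generic JDBC error", root)
    println(rootCause(wrapped).getMessage) // query string too long
  }
}
```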
Hello,

If I understand the order of operations in the SQLPushdownRule.scala file correctly, the first step after adding the shared context to all the Relations is to rename all the columns in each Relation to unique names, following the normalizedExprIdMap logic:

singlestore-spark-connector/src/main/scala/com/singlestore/spark/SQLPushdownRule.scala
Lines 41 to 45 in 8e70dff
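The renaming step described above can be sketched roughly like this. This is a simplified, hypothetical model: `RenameSketch`, the `c_<n>` name format, and the exprId-keyed map are assumptions for illustration, not the connector's actual code.

```scala
object RenameSketch {
  // Build a map from each column's expression id to a globally unique name,
  // similar in spirit to the normalizedExprIdMap step: even if two relations
  // expose the same column name, their exprIds differ, so the generated
  // names never collide.
  def normalize(columns: Seq[(Long, String)]): Map[Long, String] =
    columns.zipWithIndex.map { case ((exprId, _), i) => exprId -> s"c_$i" }.toMap

  def main(args: Array[String]): Unit = {
    // Two relations both exposing a column named "id".
    val cols = Seq((101L, "id"), (202L, "id"))
    val renamed = normalize(cols)
    println(renamed(101L)) // c_0
    println(renamed(202L)) // c_1
  }
}
```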
This makes sense, to avoid Duplicate Name issues later on, for example in Joins, since they are wrapped in a selectAll statement instead of a select statement with aliases:

singlestore-spark-connector/src/main/scala/com/singlestore/spark/SQLGen.scala
Line 470 in 8e70dff
singlestore-spark-connector/src/main/scala/com/singlestore/spark/SQLGen.scala
Line 488 in 8e70dff
singlestore-spark-connector/src/main/scala/com/singlestore/spark/SQLGen.scala
Line 502 in 8e70dff
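A minimal sketch of why the upstream renaming matters for selectAll-wrapped joins (the `selectAllJoin` helper and the `c_0`/`c_1` aliases below are hypothetical illustrations, not SQLGen's actual code):

```scala
object SelectAllSketch {
  // selectAll-style wrapping: the join is rendered as SELECT * over the two
  // sub-queries, so column names must already be unique upstream -- if both
  // sides still exposed "id", SELECT * would produce a duplicate-name error.
  def selectAllJoin(left: String, right: String, condition: String): String =
    s"SELECT * FROM ($left) AS l JOIN ($right) AS r ON $condition"

  def main(args: Array[String]): Unit = {
    // With the renamed columns c_0 / c_1 there is no collision.
    println(selectAllJoin(
      "SELECT id AS c_0 FROM t1",
      "SELECT id AS c_1 FROM t2",
      "l.c_0 = r.c_1"))
  }
}
```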
The issue that I am facing with this approach is that the query string becomes too long, leading the schema fetch and the subsequent PrepareStatement code to fail.

Tried:
I was wondering if there is any guidance for scenarios like the one I am facing?

Thanks
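One possible direction for the length problem above, sketched under assumptions: shorten each generated alias to a fixed-width, hash-derived name so the rendered SQL stays small while remaining unique with high probability. The `shortAlias` helper and `a_<hex>` format are hypothetical, not a connector feature.

```scala
import java.security.MessageDigest

object AliasTruncationSketch {
  // Replace a long generated alias with "a_" plus the first hexWidth hex
  // characters of its MD5 digest. Collisions are improbable at 8 hex chars
  // for typical plan sizes, though a real implementation would need to
  // detect and resolve them.
  def shortAlias(longName: String, hexWidth: Int = 8): String = {
    val digest = MessageDigest.getInstance("MD5").digest(longName.getBytes("UTF-8"))
    "a_" + digest.map(b => f"$b%02x").mkString.take(hexWidth)
  }

  def main(args: Array[String]): Unit = {
    val long = "some_very_long_generated_column_alias_from_a_deep_plan#4521"
    println(shortAlias(long).length) // 10, regardless of the input length
  }
}
```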