
Delimiter issue #73

Open

redmosquitoo opened this issue Oct 4, 2021 · 2 comments

Comments

@redmosquitoo

I am reading a table from SF using SOQL:

df = (
    spark.read.format("com.springml.spark.salesforce")
    .option("soql", sql)
    .option("queryAll", "true")
    .option("sfObject", sf_table)
    .option("bulk", bulk)
    .option("pkChunking", pkChunking)
    .option("version", "51.0")
    .option("timeout", "99999999")
    .option("username", login)
    .option("password", password)
    .load()
)

Whenever a string value contains a combination of double quotes and commas, the parse breaks my table schema, like so:

In the source:
Column A | Column B | Column C
000AB | "text with, comma" | 123XX

As read from SF into df:
Column A | Column B | Column C
000AB | ""text with | comma""

Is there any option to avoid these cases where the comma inside the quoted value is treated as a delimiter?
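For context on the symptom: Bulk API query results come back as CSV, and a value that itself contains double quotes and commas arrives as a quoted field with the embedded quotes doubled (RFC 4180 style), so the source row above would look roughly like 000AB,"""text with, comma""",123XX on the wire. The broken columns look like that line being split on the embedded comma because the doubled quotes are not treated as an escape. I don't know whether this connector exposes the underlying parser settings, but as a point of comparison, Spark's own CSV reader only handles such rows once the quote character is also declared as the escape character. A minimal sketch (the file name and contents are made up to mirror the example above):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# bulk_result.csv is a hypothetical local file containing:
#   Column A,Column B,Column C
#   000AB,"""text with, comma""",123XX

# With the default escape character ('\'), the embedded comma can end up
# treated as a delimiter and the doubled quotes leak into the values.
broken = spark.read.option("header", "true").csv("bulk_result.csv")

# Declaring '"' as both the quote and the escape character makes the parser
# read doubled quotes as a literal quote, so the field stays intact.
fixed = (
    spark.read
    .option("header", "true")
    .option("quote", '"')
    .option("escape", '"')
    .csv("bulk_result.csv")
)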

@phitoduck

Hey @redmosquitoo, I'm experiencing this, too. Did you ever come up with a workaround?
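The only thing I can think of so far is to sidestep the connector's CSV handling entirely: pull the rows over the REST API, which returns JSON instead of CSV, and build the DataFrame from the records. A rough sketch using the simple-salesforce package (a separate library, not part of this connector), assuming a flat SOQL select, a result set small enough for the REST query endpoint, and a security token in a token variable:

from pyspark.sql import Row, SparkSession
from simple_salesforce import Salesforce

spark = SparkSession.builder.getOrCreate()

# Authenticate against the REST API; 'token' is a placeholder for the
# org's security token.
sf = Salesforce(username=login, password=password, security_token=token)

# query_all pages through the results and returns them as JSON records,
# so commas and quotes inside values never go through a CSV parser.
result = sf.query_all(sql)

# Drop the Salesforce 'attributes' metadata key from each record and build
# the DataFrame; assumes the selected fields are flat (no relationship fields).
rows = [
    Row(**{k: v for k, v in rec.items() if k != "attributes"})
    for rec in result["records"]
]
df = spark.createDataFrame(rows)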

@redmosquitoo (Author)

redmosquitoo commented Oct 11, 2022 via email
