The support for massive files is already pretty amazing: I just ran a 75M file and it worked perfectly. The only thing I would change is mentioning in the docs that the browser may complain a few times that the page is frozen; not to worry, the file will process if you press "Wait" a few times.
I was having problems getting it to work with large files as well, but I found that the following helped:
Removing "_id":ObjectId("<number>") from the json
In my case I had a massive mongodb dump from a journalist, and I removed them all using the following regex "_id" : \w+\("\w+"\), \n
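If you'd rather do that cleanup with a script than in an editor, here is a minimal Node/TypeScript sketch of the same regex replacement; the file names are placeholders, not part of the tool:

```ts
// strip-objectid.ts: remove `"_id" : ObjectId("...")` entries from a MongoDB dump.
import { readFileSync, writeFileSync } from "node:fs";

const input = readFileSync("dump.json", "utf8");

// Mirrors the regex quoted above: the field, its ObjectId(...) value,
// the trailing comma, and the newline.
const cleaned = input.replace(/"_id"\s*:\s*\w+\("\w+"\),\s*\n/g, "");

writeFileSync("dump.cleaned.json", cleaned);
```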
Make sure the JSON is valid. Online validators can help; I found one that was able to handle the 18 MB JSON dump I pasted from my clipboard.
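If the dump is too large (or too sensitive) to paste into an online validator, a quick local check works just as well; this is only a sketch and the file name is a placeholder:

```ts
// validate-json.ts: check whether a dump parses as valid JSON.
import { readFileSync } from "node:fs";

try {
  JSON.parse(readFileSync("dump.json", "utf8"));
  console.log("Valid JSON");
} catch (err) {
  // JSON.parse error messages usually point at the offending position.
  console.error("Invalid JSON:", (err as Error).message);
}
```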
Convert JSONL to JSON. By adding `[` to the front and `]` to the back I was able to convert the JSON streaming format (aka LDJSON) to valid JSON.
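One caveat: strict JSON also needs commas between the records, so parsing each line and re-serializing the whole array is the safer route. A minimal Node/TypeScript sketch (file names are placeholders):

```ts
// jsonl-to-json.ts: convert line-delimited JSON (JSONL/LDJSON) into a JSON array.
import { readFileSync, writeFileSync } from "node:fs";

const lines = readFileSync("dump.jsonl", "utf8")
  .split("\n")
  .filter((line) => line.trim().length > 0);

// Parsing each record guarantees the output is valid JSON,
// including the commas between records that a plain [ ... ] wrap would miss.
const records = lines.map((line) => JSON.parse(line));

writeFileSync("dump.json", JSON.stringify(records));
```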
My JSON had 13,122 URLs and I was worried this would break Eric's anonymous API request limit, but surprisingly it never hit the limit.
I used Firefox Developer Edition with the script timeout setting changed to its maximum value (2147483647) and was able to download the CSV after leaving it to run for about 15 minutes.
There should be an option to simply "upload" (or point to) the input file and download the output file.
Using textareas means the browser gets stuck trying to render the full contents of large-ish files, making everything laggy.
Solution: Don't show text for large files.
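For illustration, here is a rough browser-side sketch of that idea: read the input with the File API and hand the result back as a download, without ever rendering the contents in a textarea. The `#file-input` element and `convertToCsv` function are hypothetical stand-ins, not the site's actual code:

```ts
// Hypothetical stand-in for the site's existing JSON-to-CSV conversion.
declare function convertToCsv(json: string): string;

const fileInput = document.querySelector<HTMLInputElement>("#file-input")!;

fileInput.addEventListener("change", async () => {
  const file = fileInput.files?.[0];
  if (!file) return;

  const text = await file.text();   // read the upload; never shown in the DOM
  const csv = convertToCsv(text);

  // Offer the result as a download instead of filling a textarea.
  const url = URL.createObjectURL(new Blob([csv], { type: "text/csv" }));
  const link = document.createElement("a");
  link.href = url;
  link.download = file.name.replace(/\.json$/, ".csv");
  link.click();
  URL.revokeObjectURL(url);
});
```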