SdFat recommended usage for long term data logging #467
Sorry I didn't get back sooner. I am in an area of California that had a three-day power outage. For a long-term application like this I would use a design that does not create or extend files, and I would avoid timestamp updates in directories. This avoids any corruption of directories or the file allocation table. I would prepare an SD card with an empty, pre-allocated file for each day of logging, open that day's file at the start of the day, and rewrite it using 512-byte writes. Writing 512-byte blocks on 512-byte boundaries prevents read/update/write operations.
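The block-aligned buffering described above can be sketched in plain C++. This is an illustration of the technique, not SdFat code: samples are staged in RAM and handed to the storage layer only in complete 512-byte blocks, so the card never has to read/update/write a partial sector. The names (`BlockLogger`, `flushBlock`) are hypothetical, and a `std::vector` stands in for the pre-allocated file.

```cpp
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <vector>

// Illustrative sketch of block-aligned logging: records of any size are
// staged in a RAM buffer and written out only as full 512-byte blocks.
// In a real SdFat application, flushBlock() would be a 512-byte
// file.write() into a pre-allocated file at a block-aligned offset.
constexpr size_t kBlockSize = 512;

class BlockLogger {
 public:
  // The vector stands in for the storage medium in this sketch.
  explicit BlockLogger(std::vector<uint8_t>& media) : media_(media) {}

  // Stage arbitrary-sized records; emit only complete blocks.
  void append(const uint8_t* data, size_t len) {
    while (len > 0) {
      size_t n = std::min(len, kBlockSize - fill_);
      std::memcpy(buf_ + fill_, data, n);
      fill_ += n;
      data += n;
      len -= n;
      if (fill_ == kBlockSize) flushBlock();
    }
  }

  size_t blocksWritten() const { return media_.size() / kBlockSize; }

 private:
  void flushBlock() {
    // Always exactly 512 bytes, on a 512-byte boundary.
    media_.insert(media_.end(), buf_, buf_ + kBlockSize);
    fill_ = 0;
  }

  std::vector<uint8_t>& media_;
  uint8_t buf_[kBlockSize];
  size_t fill_ = 0;
};
```

With this pattern a crash can lose at most the partially filled RAM buffer; the blocks already written are complete and aligned.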
I can't comment on a specific MCU. Reliability depends on how the MCU is integrated into the system and the board support package's system software.
I would use one of the popular mid-range cards. I like Samsung and SanDisk. High end cards buy performance, not reliability.
If data rates allow, use shared SPI mode. Shared mode forces the card to program flash from the card's internal RAM buffers each time chip select is raised.
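In SdFat 2.x the shared-versus-dedicated choice is made with `SdSpiConfig`. A minimal sketch, assuming a hypothetical chip-select pin and a conservative clock rate (both are placeholders for your hardware):

```cpp
#include "SdFat.h"

// Shared-SPI setup sketch for SdFat 2.x. SD_CS_PIN and the 16 MHz clock
// are assumptions for illustration; SHARED_SPI raises chip select between
// transfers, prompting the card to commit its RAM buffers to flash.
const uint8_t SD_CS_PIN = 10;  // assumption: your chip-select pin
SdFat sd;

void setupCard() {
  if (!sd.begin(SdSpiConfig(SD_CS_PIN, SHARED_SPI, SD_SCK_MHZ(16)))) {
    // Initialization failed: power-cycle the card via its regulator
    // EN pin and retry, as described later in this thread.
  }
}
```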
I don't understand this question.
I use sockets with 10k pullups.
See above.
Either is OK. FAT32 limits a file to just under 4 GiB, but you should not be writing huge files anyway.
Testing is key to reliability. If possible, build several systems and test at faster rates. Edit: For SD cards on SPI, all modern cards have sufficient performance. Don't pay for high-end cards that are aimed at 4K video.
Very interesting things are discussed here; I was curious about something similar. @greiman, regarding the point you mentioned above:
What I want to do is create a new directory every time my system reboots, and inside it write multiple log files, each with a fixed number of lines, say 10K lines per file. Do you think this approach may result in corruption of directories or the file allocation table?
The main way a FAT file system is corrupted is when a crash happens during a directory update or cluster allocation. FAT and exFAT were not designed to be crash-safe. Some Linux file systems do better, but the best protection is a journaling file system. https://en.wikipedia.org/wiki/Journaling_file_system https://pages.cs.wisc.edu/~remzi/OSTEP/file-journaling.pdf So avoiding crashes is the best approach.
Journaling file systems are new to me, and I suspect they are not straightforward to implement; that is probably why I had not heard of them or of any Arduino library supporting one.
Journaling file systems are complex to implement, and FAT/exFAT is not a good base for one. Arduino is not suitable for building a reliable system if crashes are possible; Arduino could never be certified for safety-critical use. Here is the kind of system that can be safe. I was involved with its early development in the late 1980s.
Hi Bill,
Do you have recommendations for using SdFat to achieve high-reliability, long-term operation?
I'm implementing SdFat 2.2.2 on a project logging 25 Hz acceleration data for multiple years at a time. The MCU will wake every 1 s from deep-sleep mode and stream data from the accelerometer FIFO to a MicroSD card. This will permit card buffering time if required.
We will need a 16GB+ card due to the amount of data generated, so will need to use FAT32 or exFAT. The MicroSD card will be powered from its own dedicated regulator to allow power cycling via the EN pin in case of lockup.
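As a back-of-envelope check on the card sizing (the record size isn't stated in the issue, so a 6-byte binary sample, i.e. three 16-bit axes, is an assumption):

```cpp
// Storage sizing sketch. kBytesPerSample = 6 is an assumption (3 axes
// of 16-bit acceleration data); text formatting or timestamps would
// increase it considerably.
constexpr double kSampleHz = 25.0;
constexpr double kBytesPerSample = 6.0;
constexpr double kBytesPerDay = kSampleHz * kBytesPerSample * 86400.0;  // ~13 MB/day
constexpr double kGiBPerYear =
    kBytesPerDay * 365.0 / (1024.0 * 1024.0 * 1024.0);  // ~4.4 GiB/year
```

Under that assumption a 16 GB card holds roughly three years of raw binary samples, which is consistent with the 16GB+ requirement above.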
Thank you very much,
Bryn