AppData and ProgramData are the Windows-approved folders for storing data generated or used by a program at runtime.
Local/LocalLow/Roaming are intended for different kinds of data: Local for machine-specific data, LocalLow for low-integrity (sandboxed) processes, and Roaming for data that should follow the user's profile across machines.
Oculus is a Facebook-owned company, so it's not surprising you're seeing one if you've got the other.
Thanks for the response! I appreciate another set of eyes on this. I can't find anyone having actually demonstrated testing this anywhere else, other than the one mentioned post.
> df may be misleading
I'm used to df being a bit out of whack with reality, having used ZFS and BtrFS with compression heavily over the last few years. But the golden test has always been writing zero files and checking sizes before and after to compare. BtrFS obviously has better tools in the shape of "btrfs fi du" and "compsize" to explicitly measure deduplication and compression, which is nice. But the "df, dd if=/dev/zero, df" approach definitely works to test other compressed file systems, including BtrFS, ZFS and even NTFS mounted over SMB.
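For reference, that before/after test can be sketched like this (the mount point and the 1 GiB size are placeholders; on a compressing filesystem the "used" figure should grow far less than the amount written):

```shell
# Rough compression check: compare used space before and after writing
# highly compressible data. /mnt/test and the sizes are illustrative.
MNT=/mnt/test
df -B1M --output=used "$MNT"                       # used MiB before
dd if=/dev/zero of="$MNT/zeros.bin" bs=1M count=1024 conv=fsync
df -B1M --output=used "$MNT"                       # should grow much less than 1024
rm "$MNT/zeros.bin"
```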
> Also, in the kernel doc I see no mention of zstd as a compression algorithm:
At mount time it was happy with "zstd". But yeah, I also tried lzo and lzo-rle (the latter is what I'll likely run on my old laptop if this works, as zstd would kill my old beast).
> Have you tried telling f2fs to explicitly compress the file using chattr +c file?
The "chattr -R +c" command marked the directory +c, and any new files I created were automatically +c'ed as well when I checked with "lsattr". I can try creating a zero-byte file, changing the attribs on that explicitly, and then appending to it instead. But I'm reasonably confident everything was set as required by the documentation. The additional mount option specifying the extension should also have triggered the forced compression, according to the docs.
I'll give the "append" method a try and report back.
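The append approach I have in mind looks roughly like this (the path is a placeholder; it assumes the file lives on the f2fs mount, so the "c" attribute is in place before any data is written):

```shell
# Create an empty file, mark it compressed, then append data afterwards.
# Path is illustrative; it must be on the compressed f2fs mount.
f=/mnt/f2fs/testfile
: > "$f"            # zero-byte file
chattr +c "$f"      # request compression explicitly
lsattr "$f"         # verify the 'c' attribute is set
dd if=/dev/zero bs=1M count=512 status=none >> "$f"
du -sh "$f"         # on-disk usage should be well under 512M if it worked
```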
Disclaimer: I haven't played with this at all, just read about it ...
I don't see anything wrong in your steps. I do see a note in the Debian wiki that df may be misleading, though I guess that's not the case for your scenario since you said the volume is full after 5 GB of zeros.

Also, in the kernel doc I see no mention of zstd as a compression algorithm:

> compress_algorithm=%s Control compress algorithm, currently f2fs supports "lzo" and "lz4" algorithm.

... but I guess that's not the trouble here either since you said you tested with lzo as well.

Have you tried telling f2fs to explicitly compress the file using "chattr +c file"?
The Linux kernel filesystem docs are a trove of information, but it's definitely a jump into the deep end of the pool.
vfs.txt is an overview of the overarching filesystem abstraction, and then you can dig through the various other docs for details.
It's been a while since this was posted, and there has been a lot of BTRFS news but not much on ZFS, so it's good to bring this up again. I believe most or all of these differences still exist.
I'm not aware of any projects that really fit your use case (most of the ones getting press in this area right now are all blockchain stuff).
Maybe the open source project you were remembering was Tahoe-LAFS? https://tahoe-lafs.org/trac/tahoe-lafs
Tahoe-LAFS is the only FS I know that has tunable FEC: https://tahoe-lafs.org/trac/tahoe-lafs/wiki/FAQ#Q2_what_is_erasure_coding
Edit: here's some discussion on adding it to Ceph: https://tracker.ceph.com/projects/ceph/wiki/Erasure_encoding_as_a_storage_backend
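For intuition, "tunable FEC" here means picking k (shares needed to reconstruct) and n (shares written). A quick sketch of the storage math, using 3-of-10, which is Tahoe-LAFS's default-style parameter choice (numbers illustrative):

```shell
# k-of-n erasure coding: any k of the n shares reconstruct the file.
k=3
n=10
echo "storage overhead: $(( n * 100 / k ))% of original size"   # 333%
echo "survives loss of $(( n - k )) of $n shares"               # 7 of 10
```

Tuning k and n trades storage overhead against how many share losses you can tolerate.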
Huh, posted before I tried to download. I agree that's odd, and I'm not inclined to give away my real email.
Perhaps use a throwaway address, or a temporary one from someplace like http://www.fakemailgenerator.com/, to get a copy?