Ticket #2886 (assigned enhancement)

Opened 7 years ago

Last modified 6 years ago

some files shouldn't be compressed by jffs2

Reported by: tomeu
Owned by: dsaxena
Priority: normal
Milestone: 8.2.0 (was Update.2)
Component: kernel
Version:
Keywords:
Cc: pascal, jg, dwmw2
Action Needed:
Verified: no
Deployments affected:
Blocked By:
Blocking:

Description

Writing a 35 MB PDF to NAND takes 25 s, and the operation is processor-bound.

If this is caused by JFFS2's on-the-fly compression, marking those files as already compressed using chattr or xattr would make these operations much faster.

I presume read performance would also improve.
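A minimal user-space sketch of the "mark the file" route, assuming the generic FS_NOCOMP_FL inode flag from <linux/fs.h> were the chosen mechanism and that JFFS2 were taught to honour it -- nothing here claims that works today:

    /*
     * Sketch: mark a file "do not compress" via the generic inode-flags
     * ioctl.  FS_NOCOMP_FL and FS_IOC_GETFLAGS/SETFLAGS exist in
     * <linux/fs.h>, but whether JFFS2 honours the flag is exactly what
     * this ticket asks for -- treat this as a proposal.
     */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>

    int mark_nocompress(const char *path)
    {
        int fd = open(path, O_RDONLY);
        if (fd < 0) {
            perror("open");
            return -1;
        }

        int flags;
        if (ioctl(fd, FS_IOC_GETFLAGS, &flags) < 0) {
            perror("FS_IOC_GETFLAGS");
            close(fd);
            return -1;
        }

        flags |= FS_NOCOMP_FL;               /* "don't compress this inode" */

        if (ioctl(fd, FS_IOC_SETFLAGS, &flags) < 0) {
            perror("FS_IOC_SETFLAGS");
            close(fd);
            return -1;
        }

        close(fd);
        return 0;
    }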

Change History

  Changed 7 years ago by kimquirk

  • milestone changed from Untriaged to Trial-3

  Changed 7 years ago by jg

  • milestone changed from Trial-3 to First Deployment, V1.0

  Changed 7 years ago by pascal

Maybe jffs2 should detect it all by itself.

It is trivial to detect that a block didn't compress well, and then to assume that the next equally sized block will not compress either and just store the data. This will result in testing on the order of sqrt(filesize) bytes (is my math right?) to see whether the file will compress, or about 187 KB of test compression for the 35 MB file. Increase the increment by a factor of 4 each time and you only need to compress something like 13 KB to test the entire file.

Perhaps you could cap the skipped block size at a fixed maximum after initially determining that the data will not compress, so that compressible data later in the stream is still detected; a rough model is sketched below.
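A rough user-space model of this heuristic, not JFFS2 code -- the block size, the growth factor, and the cap are illustrative guesses, and real code would store the data as it goes rather than just count test bytes:

    /*
     * Compress one block; if it does not shrink, assume the following run
     * of blocks will not shrink either and skip it, growing the run each
     * time but capping it so compressible data later in the stream is
     * still noticed.
     */
    #include <stddef.h>
    #include <zlib.h>

    #define BLOCK    4096u              /* JFFS2-ish write unit */
    #define SKIP_MAX (64u * BLOCK)      /* fixed maximum, per the comment above */

    static int block_compresses(const unsigned char *buf, size_t len)
    {
        unsigned char out[BLOCK + BLOCK / 8 + 64];
        uLongf outlen = sizeof(out);

        return compress2(out, &outlen, buf, len, 1) == Z_OK && outlen < len;
    }

    /* Returns how many bytes were actually fed through zlib as a test. */
    static size_t scan(const unsigned char *data, size_t size)
    {
        size_t pos = 0, tested = 0, skip = BLOCK;

        while (pos < size) {
            size_t len = (size - pos < BLOCK) ? size - pos : BLOCK;

            tested += len;
            if (block_compresses(data + pos, len)) {
                pos += len;             /* real code would store it compressed */
                skip = BLOCK;           /* data compresses again: reset the run */
            } else {
                pos += len + skip;      /* store this run uncompressed, untested */
                if (skip < SKIP_MAX)
                    skip *= 4;          /* "increase the increment by 4 times" */
            }
        }
        return tested;
    }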

  Changed 7 years ago by pascal

  • cc pascal added

  Changed 7 years ago by tomeu

Jim, can we up the priority of this one?

Using all the CPU for compression during downloads (#5235), when most downloaded files are already compressed, is no good. It slows down every other operation on the laptop, and I don't think we can expect the user to wait for the download to finish before doing other tasks.

Pascal's suggestion would be much better than having to mark files as not to be compressed based on their MIME type.

  Changed 7 years ago by tomeu

  • cc jg added

follow-up: ↓ 9   Changed 7 years ago by ixo

How about auto-detection of files that are already compressed (PNG, JPG, gz, zip, etc.)? An easy test of the first 10 bytes would help determine the likely format of the file.
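For reference, a sketch of that kind of sniffing; the magic numbers cover a handful of common formats and the list is purely illustrative:

    /*
     * Guess from the first few bytes whether a file is already in a
     * compressed container.
     */
    #include <string.h>
    #include <stddef.h>

    static int looks_precompressed(const unsigned char *hdr, size_t len)
    {
        if (len >= 8 && memcmp(hdr, "\x89PNG\r\n\x1a\n", 8) == 0) return 1; /* PNG   */
        if (len >= 3 && memcmp(hdr, "\xff\xd8\xff", 3) == 0)      return 1; /* JPEG  */
        if (len >= 2 && hdr[0] == 0x1f && hdr[1] == 0x8b)         return 1; /* gzip  */
        if (len >= 4 && memcmp(hdr, "PK\x03\x04", 4) == 0)        return 1; /* zip   */
        if (len >= 3 && memcmp(hdr, "BZh", 3) == 0)               return 1; /* bzip2 */
        return 0;
    }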

  Changed 6 years ago by cjb

I have an activity that serves content out of a large, heavily-compressed .bz2 on demand. It would be much faster if I could disable jffs2 compression for those nodes.

in reply to: ↑ 7   Changed 6 years ago by dsaxena

Replying to ixo:

How about auto-detection of files that are already compressed (PNG, JPG, gz, zip, etc.)? An easy test of the first 10 bytes would help determine the likely format of the file.

Ick, no. Probably the best way to do this is via an xattr and/or possibly a new generic O_NOCOMPRESSION flag that can be used by any FS since compression is available on various file systems these days.

There are some patches out there in the embedded world for something along these lines (though not on JFFS2). I'll poke some folks about this.
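A sketch of the user-space half of the xattr idea; the attribute name "user.compression" and value "none" are invented for illustration, nothing in the kernel consumes them today, and the proposed O_NOCOMPRESSION open(2) flag does not exist yet, so it is not shown:

    /*
     * Tag a file from user space; the filesystem would have to check the
     * attribute at write time and skip compression for tagged inodes.
     */
    #include <stdio.h>
    #include <sys/xattr.h>

    int tag_incompressible(const char *path)
    {
        /* "user.compression" / "none" are hypothetical names for this sketch. */
        if (setxattr(path, "user.compression", "none", 4, 0) < 0) {
            perror("setxattr");
            return -1;
        }
        return 0;
    }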

  Changed 6 years ago by cjb

Replying to dsaxena:

There are some patches out there in the embedded world for something along these lines (though not on JFFS2). I'll poke some folks about this.

Thanks. I'm sure we'd more than double the speed of the Wikipedia activity by being able to stop zlib-decompressing the already heavily compressed data -- so this is useful even if it's not automatic to start with.

  Changed 6 years ago by dsaxena

  • cc dwmw2 added
  • owner changed from dwmw2 to dsaxena
  • status changed from new to assigned