
[2530] AdaptiveImage: control skipping/requiring transformation via OSGi config #2534

Open · wants to merge 4 commits into main
Conversation

Stefan-Franck

| Q | A |
| --- | --- |
| Fixed Issues? | Fixes #2530 |
| Patch: Bug Fix? | No |
| Minor: New Feature? | Yes |
| Major: Breaking Change? | No |
| Tests Added + Pass? | Yes |
| Documentation Provided? | Yes (code comments and/or markdown) |
| Any Dependency Changes? | No |
| License | Apache License, Version 2.0 |

Ensures users can configure via OSGi which image types are never transformed (currently: svg and gif) and which image types are always transformed (currently: none).
This makes it easy to disable the transformation for additional image types, but also to require the transformation for jpg/jpeg, ensuring that custom compression settings are actually applied. Currently, editors in AEM have to crop jpg images by a single pixel just to have them delivered in compressed form; otherwise, web performance and green web goals are not met.
The default values for these settings preserve backward compatibility: the system behaves exactly as before.
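
For illustration, a minimal sketch of what such an OSGi metatype could look like. The property names, defaults, and annotation details below are assumptions for this sketch, not the actual code from this PR:

```java
// Hypothetical sketch of an OSGi metatype carrying the two lists; property
// names and defaults are illustrative, not the actual ones from this PR.
import org.osgi.service.metatype.annotations.AttributeDefinition;
import org.osgi.service.metatype.annotations.ObjectClassDefinition;

@ObjectClassDefinition(name = "AEM Core Components - Adaptive Image Servlet")
public @interface AdaptiveImageServletConfig {

    @AttributeDefinition(
            name = "Skipped image types",
            description = "MIME types that are never transformed and always spooled directly")
    String[] skipped_image_types() default { "image/gif", "image/svg+xml" };

    @AttributeDefinition(
            name = "Forced image types",
            description = "MIME types for which a transformation is always applied, "
                    + "so the configured compression is guaranteed to run")
    String[] forced_image_types() default {};
}
```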

@Stefan-Franck
Author

As discussed with @kwin, I updated the change so that gif and svg are always exempt from transformation (the imaging library cannot handle them). The field is still configurable in order to extend the list (e.g. adding webp to the files spooled directly).
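
A rough sketch of how the servlet could consult those two lists when deciding whether to transform or to spool directly; the method and field names here are hypothetical, not the PR's actual code:

```java
// Hypothetical decision helper; names are illustrative only.
private boolean requiresTransformation(String mimeType, boolean transformationRequested) {
    if (skippedImageTypes.contains(mimeType)) {
        return false; // e.g. image/gif, image/svg+xml: the imaging library cannot process them
    }
    if (forcedImageTypes.contains(mimeType)) {
        return true;  // e.g. image/jpeg, to guarantee re-compression on delivery
    }
    return transformationRequested; // default: behave exactly as before
}
```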

@joerghoh
Contributor

Hi @Stefan-Franck, I would like to understand your use case for forcing a transformation a bit better. I don't see why I would need to transform an image (again) that is already a perfect match. You mention "compression", but why don't you compress the images already on author (as part of the AssetUpdate workflow or in Asset Compute) to your preferred compression level?

(Personally I am not a fan of the AdaptiveImageServlet at all, I have seen it to cause OOMs too often.)

@joerghoh
Contributor

On a side note: can you add a test case to validate the "enforce transformation" case? Thanks!

@Stefan-Franck
Author

Hi @joerghoh and thanks for the fast reply!

I'm completely with you: a perfect match does not require an additional transformation. But in reality, more often than not, images in the highest quality and sometimes the highest resolution are uploaded to the DAM, and that impacts delivery. It is also not easy to identify and optimize them when you are working in large installations with thousands of images. What happens is a slow degradation of page weight, with more and more images adding 1 MB or more to it, impacting performance (and Core Web Vitals rankings) as well as green web initiatives.

That's why the solution is implemented as an option: out of the box, it works without additional transformation. Only if you make the deliberate choice to have a different behaviour does anything change.

I will add a test case in the course of this week.

@joerghoh
Contributor

@Stefan-Franck But isn't it just a workaround you are proposing, and potentially not even addressing the real problem?

In your case you would need a set of "web-optimized" renditions with a special emphasis on size; even if a rendition of the correct dimensions is present, a special size-optimized rendition should be created.

(In my opinion the AdaptiveImageServlet is not helping the GreenWeb initiative either ...)

@Stefan-Franck
Author

But where does that special web-optimized rendition come from? That sounds like a manual process for thousands of assets that could instead be optimized on delivery. If somebody has such a process in place, they can use it. But if not, enforcing compression for all compressible assets is a much more reliable approach. Furthermore, the compressed assets should be cached for a rather long time, so the rendering cost of the AdaptiveImageServlet is not incurred in most cases.

@sonarqubecloud

sonarqubecloud bot commented Jul 8, 2023

Kudos, SonarCloud Quality Gate passed!

0 Bugs · 0 Vulnerabilities · 0 Security Hotspots · 0 Code Smells
No coverage information · 0.0% duplication

@Stefan-Franck
Author

@joerghoh added three test cases (the third is sketched below):

  1. disabling transformation for asset types;
  2. direct spooling is active for an asset without forced transformation;
  3. with forced transformation, the same asset is delivered compressed.
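
For illustration, the third case could look roughly like this. The class and helper names (createServlet, loadAsset, render) are hypothetical placeholders, not the actual test code added in this PR:

```java
// Hypothetical sketch of the "forced transformation" test; the helpers are assumed.
@Test
void forcedTransformationDeliversCompressedAsset() throws IOException {
    // configure the servlet so that JPEG assets are always transformed
    AdaptiveImageServlet servlet = createServlet(
            Collections.emptyList(),                  // no extra skipped types
            Collections.singletonList("image/jpeg")); // force transformation for JPEG

    byte[] original = loadAsset("/content/dam/logo.jpg");
    byte[] delivered = render(servlet, "/content/dam/logo.jpg");

    // the re-compressed output should be smaller than the directly spooled original
    assertTrue(delivered.length < original.length);
}
```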

@Stefan-Franck
Author

@joerghoh - as you can see from the test cases, the full-quality Adobe logo is 250 kB, while the compressed one is only 130 kB: already a reduction of roughly 50% with the default settings (compression quality around 0.82). We sometimes use compression factors of 0.5 to 0.6, so whether or not compression is applied makes a considerable difference to page weight.
As you cannot guarantee that uploaded images adhere to any compression standard (only to image size in general, which depends on many other factors), you will always end up with non-optimized images in the repository. Being able to easily optimize on delivery would help editors a lot: they currently crop every image on every usage by 1 px only to make sure the image is compressed, which is very tedious and error-prone.
The change ensures the system behaves as before; only if a client wants compression activated (or webp images delivered as-is) do they need to apply the corresponding configurations.
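
To make the effect of the quality factor concrete, here is a small, self-contained example of JPEG re-compression using plain javax.imageio; this illustrates the general mechanism, not the servlet's actual code path:

```java
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageWriteParam;
import javax.imageio.ImageWriter;
import javax.imageio.stream.ImageOutputStream;

public final class JpegRecompressExample {

    // Re-encodes a JPEG stream with an explicit quality factor (0.0f to 1.0f).
    static byte[] recompress(InputStream source, float quality) throws IOException {
        BufferedImage image = ImageIO.read(source);
        ImageWriter writer = ImageIO.getImageWritersByFormatName("jpeg").next();
        ImageWriteParam param = writer.getDefaultWriteParam();
        param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
        param.setCompressionQuality(quality); // e.g. 0.82f default, 0.5f for aggressive compression
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (ImageOutputStream ios = ImageIO.createImageOutputStream(out)) {
            writer.setOutput(ios);
            writer.write(null, new IIOImage(image, null, null), param);
        } finally {
            writer.dispose();
        }
        return out.toByteArray();
    }
}
```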

@joerghoh
Contributor

joerghoh commented Aug 3, 2023

@Stefan-Franck thanks for the feedback, I will relay it to our Assets team. While I don't have any data to contradict your observation, I think you should be able to configure the processing profiles in a way that gives you a size-optimized rendition. There should not be any need for a process like this that generates renditions on the fly.

@Stefan-Franck
Author

That is also an interesting approach: the image component would then include that compressed rendition instead of the original one, which would work well. It lets us decide when the processing load occurs, either on upload (consider bulk uploads when the new design agency drops another 1,000 images) or on delivery (consider many parallel requests and generally longer TTFBs).

@kwin
Contributor

kwin commented Aug 3, 2023

Currently the algorithm in `getBestRendition(@NotNull Asset asset, int width, @NotNull String mimeType)` doesn't distinguish between original and post-processed renditions, so as long as you cannot skip originals, the approach with processing profiles won't work reliably.
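
A sketch of what a selection that ignores the original could look like, using the public DAM API; this is an assumption for illustration, not the actual Core Components implementation:

```java
import java.util.Optional;
import com.day.cq.dam.api.Asset;
import com.day.cq.dam.api.DamConstants;
import com.day.cq.dam.api.Rendition;

public final class RenditionSelection {

    // Hypothetical lookup that only considers post-processed renditions.
    static Optional<Rendition> bestNonOriginalRendition(Asset asset, String mimeType) {
        return asset.getRenditions().stream()
                .filter(r -> !DamConstants.ORIGINAL_FILE.equals(r.getName())) // skip "original"
                .filter(r -> mimeType.equals(r.getMimeType()))
                .findFirst();
    }
}
```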

@joerghoh
Contributor

joerghoh commented Aug 3, 2023

@kwin In my ideal world you would not need the AdaptiveImageServlet at all, because the assets you are delivering are already available with the required dimensions and quality.

@bobvanmanen
Member

bobvanmanen commented Aug 4, 2023

I am trying to understand the issue. Is the problem that the renditions we produce during asset processing are not as efficient as what the AdaptiveImageServlet creates, assuming the same format, dimensions, etc.?

Or is the original asset not as efficient as it could be?

@Stefan-Franck
Author

Stefan-Franck commented Aug 5, 2023

@bobvanmanen - the AdaptiveImageServlet takes, if possible, the original asset from the DAM. As it spools the original directly, no compression is applied. This leads to AEM delivering unnecessarily large, uncompressed images via this servlet. At my current customer, editors actually crop all images by 1 pixel just to ensure that compression is applied.
The solution in this PR allows defining image types that will always be compressed, regardless of editorially applied transformations such as cropping.
@joerghoh suggested the alternative approach of having the correct renditions in the DAM up front, so they can be delivered directly without transformation.
That would allow an even faster delivery on request, as the processing load is shifted from delivery time to upload time. I'm a bit wary of larger bulk uploads and design changes (where you suddenly need a new image width for all assets), and it would also take a lot of storage space in AEM. To implement this approach, we would need to change the DAM Update Asset workflow to generate compressed and correctly resized (proportionally, not stretched or padded) images according to the design. Those assets could be spooled directly, if the AdaptiveImageServlet resolved them properly (which, according to @kwin, currently does not work well).

@Stefan-Franck
Author

@joerghoh @bobvanmanen - any thoughts?

Quality Gate passed

0 new issues · 0 Security Hotspots · no data about coverage · 0.0% duplication on new code

See analysis details on SonarCloud

Successfully merging this pull request may close these issues.

AdaptiveImage: Configurability of when to apply transformation by file type