[2530] AdaptiveImage: control skipping/requiring transformation via OSGi config #2534
Conversation
As discussed with @kwin, I updated the change so that gif and svg are always exempt from transformation (the imaging library cannot handle them). The field is still configurable so the list can be extended (e.g. adding webp to the files that are spooled directly).
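For illustration, here is a minimal sketch of what such a configurable exemption list could look like as an OSGi metatype. The class name, property names, and defaults are hypothetical, not the PR's actual code:

```java
import org.osgi.service.metatype.annotations.AttributeDefinition;
import org.osgi.service.metatype.annotations.ObjectClassDefinition;

// Hypothetical metatype sketch: two multi-value properties control which
// image types skip the transformation and which are always re-encoded.
@ObjectClassDefinition(name = "Adaptive Image Servlet - Transformation Control (sketch)")
public @interface AdaptiveImageTransformationConfig {

    @AttributeDefinition(
            name = "Exempt image types",
            description = "File types that are never transformed and always spooled "
                    + "directly, because the imaging library cannot process them.")
    String[] exempt_image_types() default {"gif", "svg"};

    @AttributeDefinition(
            name = "Forced image types",
            description = "File types that are always transformed, even when the "
                    + "requested rendition is a perfect match, so that the configured "
                    + "compression is applied.")
    String[] forced_image_types() default {};
}
```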
Hi @Stefan-Franck, I would like to understand your use case for forcing a transformation a bit better. I don't get why I would need to transform an image (again) that is already a perfect match. You mention "compression", but why don't you compress the images already on author (as part of the DAM Update Asset workflow or in Asset Compute) to your preferred compression level? (Personally I am not a fan of the AdaptiveImageServlet at all; I have seen it cause OOMs too often.)
On a side note: can you add a test case to validate the "enforce transformation" behaviour? Thanks!
Hi @joerghoh, and thanks for the fast reply! I'm completely with you: a perfect match does not require an additional transformation. But in reality, more often than not, images with the highest quality and sometimes the highest resolution are uploaded to the DAM, and that impacts delivery. It is also not easy to identify and optimize them when you are working in large installations with thousands of images. What happens is a slow degradation of the page weight, with more and more images adding 1 MB or more to it, thus impacting performance (and Core Web Vitals rankings) as well as green web initiatives. That's why the solution is implemented as an option: out of the box, it works without additional transformation. Only if you make the deliberate choice to have a different behaviour does it transform perfect matches as well. I will add a test case in the course of this week.
@Stefan-Franck But isn't what you are proposing just a workaround, one that potentially doesn't even address the real problem? In your case you would need a set of "web-optimized" renditions with a special emphasis on size; even when a rendition of the correct dimensions is present, a special size-optimized rendition should be created. (In my opinion the AdaptiveImageServlet is not helping the green web initiative either ...)
But where does that special web-optimized rendition come from? Sounds to me like a manual process for thousands of assets that could instead be optimized on delivery. If somebody has such a process in place, they can use it. But if not, enforcing compression for all compressible assets is a much more reliable approach. Furthermore, the compressed assets should be cached for a rather long time, so the rendering cost of the AdaptiveImageServlet shouldn't be incurred in most cases.
@joerghoh added three test cases.
@joerghoh - As you can see from the test cases, the full-quality Adobe logo weighs 250 kB while the compressed one weighs only 130 kB, already a reduction of nearly 50 % with the default settings (a quality factor of roughly 0.82). We sometimes use quality factors of 0.5-0.6, so whether or not compression is applied really changes the page weight considerably.
@Stefan-Franck thanks for the feedback, I will pass it on to our Assets team. While I don't have any data to contradict your observation, I think you should be able to configure the processing profiles in a way that you get a size-optimized rendition. There should not be any need to use a process like this to generate the renditions on the fly.
Also an interesting approach: the image component would then include that compressed rendition instead of the original one - that would work well. So we could shift the moment the load occurs from on-upload (what about bulk uploads, when the new design agency drops another 1,000 images?) to on-delivery (what about many parallel requests and generally longer TTFBs during delivery?).
Currently the algorithm in question is at line 518 in 68e429d.
@kwin In my ideal world you would not need the AdaptiveImageServlet at all, because the assets you deliver are already available with the required dimensions and quality.
I am trying to understand the issue. Is the problem that the renditions we produce during asset processing are not as efficient as what the AdaptiveImageServlet produces? Or is the original asset not as efficient as it could be?
@bobvanmanen - the AdaptiveImageServlet serves, if possible, the original asset from the DAM. As it spools the original directly, no compression is applied. This leads to AEM delivering unnecessarily large, uncompressed images via this servlet. At my current customer, editors actually crop all images by 1 pixel just to ensure compression is applied.
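To make the behaviour discussed here concrete, below is a minimal sketch of the skip/force decision under the assumptions of this PR. The class and method names are hypothetical, not the servlet's actual API:

```java
import java.util.Set;

// Hypothetical policy object: decides whether an asset goes through the
// imaging pipeline or is spooled directly from the DAM.
final class TransformationPolicy {

    private final Set<String> exemptTypes; // never transformed, e.g. "gif", "svg"
    private final Set<String> forcedTypes; // always transformed, e.g. "jpeg"

    TransformationPolicy(Set<String> exemptTypes, Set<String> forcedTypes) {
        this.exemptTypes = exemptTypes;
        this.forcedTypes = forcedTypes;
    }

    boolean requiresTransformation(String extension, boolean perfectMatch) {
        if (exemptTypes.contains(extension)) {
            return false; // the imaging library cannot handle these; spool directly
        }
        if (forcedTypes.contains(extension)) {
            return true;  // re-encode even perfect matches so compression settings apply
        }
        return !perfectMatch; // default: skip work when the rendition already fits
    }
}
```

With the defaults (an empty forced list), this reproduces the current behaviour; adding "jpg"/"jpeg" to the forced list would enforce compression as described above.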
@joerghoh @bobvanmanen - any thoughts?
Ensures users can configure via OSGi which image types are never transformed (currently: svg and gif) and which image types are always transformed (currently: none).
With this, transformation can easily be disabled for other image types, but it is also possible to require the transformation for jpg/jpeg, thus ensuring custom compression settings are applied. Currently, editors in AEM have to crop jpg images by a single pixel just to deliver them in compressed form; otherwise, web performance and green web goals are not met.
The defaults for these settings ensure backward compatibility: the system behaves exactly as before.
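As a usage illustration, an OSGi configuration along these lines could then be deployed, e.g. as a `.cfg.json` file in `ui.config`. The PID and property names are assumptions for this sketch, not the component's documented configuration:

```json
{
  "exempt.image.types": ["gif", "svg"],
  "forced.image.types": ["jpg", "jpeg"]
}
```

With `forced.image.types` left at its empty default, the servlet behaves exactly as before; adding jpg/jpeg enforces re-encoding with the configured quality.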