(Alternately: units of "points" are useless without resolution. Or: "Toy" API fontsize() broken for textcurvecentered().)
The docs say fontsize() works in points, but the actual behavior seems to be in pixels. If you only ever look at your results on a ~72ppi screen at full resolution, you'll never notice the difference, because there isn't any. But if you're creating graphics at higher resolution, say for print purposes, the difference is obvious.
Luxor works natively in pixel coordinates and sizes, which makes total sense. Used in this way, setfont() for the Pro API works perfectly. If I want text at some point size S, all I have to do is know my resolution (e.g. dpi=300) and call setfont("Georgia", dpi*S/72.0). This works, because I know what value dpi has, and can therefore convert correctly from my desired point size into Luxor's pixel-space.
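For concreteness, here's a minimal sketch of that workaround (assuming a 300 dpi drawing and that the Georgia font is installed; the filename is arbitrary):

using Luxor

Drawing(1000, 400, "pro-api-fontsize.png")
origin()
dpi = 300
S = 16                               # desired size in points
setfont("Georgia", dpi * S / 72.0)   # ≈ 66.7 pixels, because I know my dpi
settext("16 pt at 300 dpi", Point(0, 0))
finish()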
But fontsize() wants input in points, which is a problem for textcurvecentered(), because it doesn't know my dpi. That means whatever points-to-pixels conversion it uses internally cannot be guaranteed to be in sync with my resolution.
(And since there's no pro version of textcurvecentered(), and since the pro and toy APIs don't share font settings, I'm stuck using fontsize() unless I want to roll my own textcurvecentered(), which I really don't.)
Some quick interrogation with get_fontsize() is revealing:
fontface("Georgia")
fontsize(16)
println("y-scale is: $(get_fontsize())")
which prints:
y-scale is: 16
Getting back a y-scale that's exactly equal to the given point size means that fontsize() is implicitly assuming 72 dpi, or a 1:1 correspondence between points and pixels, as we can see in text.jl line 128, where it passes the input point size directly to Cairo.
The docs say that fontsize() wants input in points. Yet it treats your input as a pixel size. A "point" is a unit with physical dimension in the real world: 1 point = 1/72.0 inches. In my drawing, that means 1 point = 300/72.0 ≈ 4.17 pixels. Yet if I call fontsize(16) and then get_fontsize(), I don't get a y-scale of 16*4.17 ≈ 66.7. I get a y-scale of 16. Indeed, how could it give me anything else, since it doesn't know my dpi? But the proof's in the pixels: when I measure how many pixels tall some actual text comes out to be, the results are consistent with an internal 72 dpi assumption.
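As a rough check (a sketch, run inside the same high-resolution drawing; exact numbers vary by font), measuring with the Toy API's textextents() shows heights near the nominal point size rather than the dpi-scaled value:

fontface("Georgia")
fontsize(16)
te = textextents("Ay")   # [xbearing, ybearing, width, height, xadvance, yadvance]
println("reported glyph height: $(te[4]) pixels")
# on a 300 dpi drawing, a true 16 pt glyph should span roughly 66-67 pixels,
# but the reported height stays on the order of 16, i.e. 1 point == 1 pixel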
I think the simplest (and possibly also the best) thing to do here is to change the docs to say that fontsize() works in pixels, not points, which seems to be its current behavior. This means no code changes, and also makes fontsize() work in a consistent manner with setfont() and with the rest of Luxor's pixel-based measurements.
But if it simply must use points, IMO the easiest code fix would be adding a fontsize() method that takes both the point size and the target resolution. It can then scale the input value appropriately when calling Cairo.set_font_size(). (In my drawing, multiplying my point sizes by a fudge-factor of 300/72.0 in calls to fontsize() does indeed generate text whose sizes match the sizes I get in the pro API.)
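Something along these lines (just a sketch with a hypothetical helper name, not an existing Luxor method) is all it would take:

function fontsize_at_dpi(sizeinpoints::Real, dpi::Real)
    # scale the point size into pixel space before handing it to Cairo,
    # the same conversion the setfont() workaround above does by hand
    fontsize(sizeinpoints * dpi / 72.0)
end

fontsize_at_dpi(16, 300)   # same result as fontsize(≈66.7) on a 300 dpi drawing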
But a cleaner solution would be for Drawings to actually be aware of their resolution, not just their pixel dimensions. Add a dpi field to the Drawing struct (with some sensible default like 100, which is a good fit for most modern laptop/desktop screens), and then anywhere else in the API where you need to convert between physical and pixel spaces, there's one consistent source of truth for doing it.
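To illustrate the idea (the names and defaults here are hypothetical, not existing Luxor API; Drawing has no dpi field today):

struct DrawingWithDPI              # hypothetical stand-in for Luxor's Drawing
    width::Float64
    height::Float64
    dpi::Float64
end

DrawingWithDPI(w, h) = DrawingWithDPI(w, h, 100.0)   # default to 100 dpi

# one consistent place to convert physical units to pixels
pointstopixels(d::DrawingWithDPI, pts) = pts * d.dpi / 72.0

d = DrawingWithDPI(3300, 2550, 300)   # a letter-size page at 300 dpi
pointstopixels(d, 16)                 # ≈ 66.7 pixels for 16 pt text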