Replies: 7 comments 3 replies
-
(to be clear, the request is vague on purpose, and I am not asking anyone to write any features - I will be coding these ideas on my own. This is for getting feedback on the ideas presented, and establishing short and long term goals)
-
Also, I could start by tackling #6590, since that issue was not closed despite an existing workaround for the documented problem. The implementation of layers accessed through `__getitem__` could take care of that. But in the meantime, if one has an idea of another fix to that issue (maybe a method call to force RGBA conversion even for the first layer), I am all ears.
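For context, one explicit way to force that RGBA conversion today is a plain `convert` call before handing the frame to NumPy. A minimal sketch, assuming the round-trip scenario from #6590; the GIF file name is a placeholder:

```python
import numpy as np
from PIL import Image

with Image.open("animated.gif") as im:
    # Explicitly convert the first (P-mode) frame to RGBA so the palette and
    # the transparency index are baked into the pixel data before NumPy sees it.
    frame = im.convert("RGBA")

arr = np.asarray(frame)          # shape (H, W, 4), alpha channel preserved
back = Image.fromarray(arr)      # round-trips without losing transparency
```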
-
Rather than creating a new method of accessing frames by using index notation, I would suggest building on the existing API. Can I ask why you are interested in reading the image in this way? The resulting images will not be what the creator of the file intended you to see.
Note that it is already possible to specify the animation delay for each frame when saving. https://pillow.readthedocs.io/en/stable/handbook/image-file-formats.html#saving
So im.save("out.gif", save_all=True, duration=(1000, 1500, 2000) As for #6590, I'm not following what you've said. There isn't an existing workaround. PR #6592 should resolve the issue by allowing GifImagePlugin.LOADING_STRATEGY = GifImagePlugin.LoadingStrategy.RGB_ALWAYS to change the first layer to load as RGBA if there is transparency in it. The issue will be closed if that PR is merged. If you consider that to be a workaround, then ok. If you are looking to try and fix #6590 without applying that setting, understand that the problem does not result from only using Pillow. The user is converting from a P image from Pillow to a 2D image in NumPy. The failure happens because the palette (effectively a dictionary converting indexes to RGB colors) is not a part of the 2D array of palette indexes that NumPy receives, and so when the image is converted back to Pillow, the palette is missing. The only way around that would be to silently convert the image from P to RGB when converting to NumPy, which I think would be unexpected behaviour for users in other situations. You mention a "colorkey" setting to describe the transparency index. I think we already have that, as |
-
Thank you for the reply.
TBH, I think the current approach is quite messy. To start with, we are talking about a global setting to load an image. Maybe, instead of focusing on the GIF format from the start, I could make an AnimatedImage base class and add changes to the GIF saving code to support it?
I am sorry, but when getting the actual pixels in a frame in order to create or edit a new GIF, it seems quite natural that each frame should be seen "as recorded". The rendered state is interesting only at display time. Maybe that is what you have had experience with, and it possibly is what is desirable when reading a GIF into a NumPy array for data processing (as in "let's see what the user has on screen in each frame"). But it certainly is of no help to creators and artists wanting to modify or create a new image. That said, reading the frames "as rendered" should also be possible in an eventual new interface for frames.

Maybe creating an "AnimatedImage" base class, initially not dependent on anything GIF specific, but which can be saved as a GIF file, is better? (In the future it might include other animated-image formats, like .mng and even some video codecs.)

After opening the issue I dug around in the code and found the support for individual frame durations on save. I think having O.O. support for that would be nicer than requiring the user to keep separate lists for frame images and frame properties, and only being able to put everything together at save time. An AnimatedImage instance could hold state for not only a frame sequence: one could use `.append` and `.insert` methods to carefully build an asset, and then just dump it to disk with a `.save` call.

As for the conversions: yes, you are right. They should be kept to a minimum, and preferably always be explicit. The new frame API I am proposing could even hold an independent palette for each frame and serve each as a "P" image, while keeping the current behavior for the existing methods.

I will work a bit on it, and then get back with some more concrete ideas.
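For illustration, a rough sketch of the kind of interface being floated here, assuming it only wraps existing save-time features; `AnimatedImage` and every method on it are hypothetical, not existing Pillow API:

```python
from dataclasses import dataclass, field
from PIL import Image

@dataclass
class AnimatedImage:
    """Hypothetical container for building an animation frame by frame."""
    frames: list = field(default_factory=list)      # PIL.Image.Image instances
    durations: list = field(default_factory=list)   # per-frame delay in ms

    def append(self, frame: Image.Image, duration: int = 100) -> None:
        self.frames.append(frame)
        self.durations.append(duration)

    def insert(self, index: int, frame: Image.Image, duration: int = 100) -> None:
        self.frames.insert(index, frame)
        self.durations.insert(index, duration)

    def save(self, fp, **kwargs) -> None:
        # Delegate to the existing save_all/append_images/duration machinery.
        first, *rest = self.frames
        first.save(
            fp,
            save_all=True,
            append_images=rest,
            duration=tuple(self.durations),
            **kwargs,
        )
```

Saving here just delegates to the existing GIF save path, so the class is essentially a convenience wrapper for assembling frames and per-frame properties rather than a new file-format feature.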
-
Instead of a new base class, what about an animated image builder class completely separate from the current image classes? Though an animated image base class could be useful as well.
-
I got some working code that simply adds a `__getitem__` method to `GifImageFile`. While it works perfectly, and is handy for the use cases I had in mind, I have to agree with @radarhere's first feedback: it feels too different from what PIL provides, i.e. for each loaded frame, one gets the image as it would be rendered. Besides that, to be able to get an indexed version of the frames past the first, I had to hack the internal state of the instance (directly setting `.mode` back to "P").

If anyone is interested at this point, the code is here: https://github.com/jsbueno/Pillow/tree/feature/getitem_support_to_gifimagefile
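For illustration, a minimal sketch of what such a `__getitem__` could look like if it simply reuses the existing `seek`/`copy` machinery (so, as noted above, it returns frames as rendered, not the raw recorded pixels). This is a sketch, not the code from the linked branch:

```python
from PIL import GifImagePlugin

def _frame_getitem(self, index):
    """Return the frame at `index` as an independent Image, restoring the current position."""
    n_frames = getattr(self, "n_frames", 1)
    if not 0 <= index < n_frames:
        raise IndexError(f"frame index {index} out of range (0..{n_frames - 1})")
    current = self.tell()
    try:
        self.seek(index)
        return self.copy()
    finally:
        self.seek(current)

# Monkeypatch for experimentation only.
GifImagePlugin.GifImageFile.__getitem__ = _frame_getitem
```

With that patch applied, `img[2]` hands back a copy of the frame at index 2 without disturbing whatever position was previously set with `seek`.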
-
[OT] Is there a place where maintainers and contributors chat about Pillow? (What used to be done through mailing lists/IRC channels in the days of old.)
-
Hi -
I intend to start contributing something to the GIF code in PIL, in order to have "more Pythonic" support for animations.
I am opening this as a communication channel so that I don't waste time on solutions that would not be acceptable to the project,
and to function as an umbrella issue for smaller PRs I can come up with.
One first idea would be to allow working with animations through index notation on animated images, by implementing `__getitem__` in `GifImageFile`, instead of having to go through the existing `seek` and `tell` methods. One could then do `img[1]` to get that frame as a first-class Image. Does that sound reasonable?

I'd do that so one could get the actual pixels for the frame: as it is now, with `seek` one gets the composed image from frame 0 up to the current frame, and it is not possible to get the actual frame pixels in PIL. This approach would leave `seek` untouched, so that any code relying on the current behavior keeps working.

Also, if there is some "roadmap"-like document for animation in PIL, I'd like to be pointed to it.
Another feature would be to make the `img.info` namespace writable and consistent with the active frame, so that it becomes possible to change the animation delay (and other parameters) for each frame when saving the file.
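For illustration only, roughly the usage being proposed; index access on `GifImageFile` and writable per-frame `info` are not current Pillow behaviour, and the file name is a placeholder:

```python
from PIL import Image

im = Image.open("animated.gif")   # placeholder file name

# Today: position-based access to frames
im.seek(2)
current = im.copy()

# Proposed: index notation returning a first-class Image for that frame
frame = im[2]

# Proposed: writable, per-frame metadata consumed at save time
frame.info["duration"] = 1500     # delay for this frame, in milliseconds
```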