# Generate repeatable random Numpy image with N unique colours

## Question

I have a set of 24-bit PNG files and I want to transform them into 8-bit PNG files. I used PIL's `convert()` method for solving this problem. However, after using mode `'P'` as the argument, I found out that pixels with the same RGB values can be converted differently.

I transferred an example image into a numpy array. The original 24-bit PNG holds ordinary RGB triplets, but after using the convert function with mode `'P'` the result is a single-channel 8-bit array. I expect that pixels with the same RGB values are converted in the same way. I know pixels are converted into palette indexes, but this still doesn't make sense to me. Can someone please explain why this happens? Thanks in advance.

## Answer 1

This is normal behaviour when converting an image into `'P'` colour mode. The way palette mode works is that it creates a mapping table, which maps an index (in the range 0-255) to a discrete colour in a larger colour space (like RGB). For example, the RGB colour value (0, 0, 255) (pure blue) in an image gets an index 1 (just a hypothetical example). This same process goes through each unique pixel value in the original image (but the table size should not exceed 256 in the process of mapping). So a numpy array (or a regular list) with small values like this corresponds to indexes into the mapping table, rather than the actual colour values themselves.

But such pixel values need not always mean the image is of colour mode `'P'`. For example, if you view the pixel data of a greyscale image (mode `'L'`), the values would look the same as in the paletted case, but would actually correspond to true colour values (shades of grey) rather than indexes. So you may misinterpret them as indexes, which upon reading an image get converted to the actual colour value stored at that index.

## Answer 2

The issue is that PIL/Pillow is "dithering". Basically, if you have more than 256 colours (the maximum a palette can hold) in your image, there are necessarily colours in the image that do not occur in the palette. So PIL accumulates the errors (the difference between the original colour and the palettised colour) and every now and then inserts a pixel of a slightly different colour, so that the image looks more or less correct from a distance. So, if your colour gets caught up in that, it will sometimes come out differently.

One way you can avoid that is to quantise the image down to no more than 256 colours first; then there will be no errors to diffuse:

```python
im256c = image.quantize(colors=256, method=2)
```

Note that this does not mean your shade of blue will always map to the same palette index in every image; it just means that all pixels with your shade of blue in any one given image will all have the same palette index.

Here is an example:

```python
#!/usr/bin/env python3
```

## Follow-up from the asker

Implemented a little something following Mark's examples. (The seed and image size below are filled-in assumptions, since the original values were not preserved; the reported 125 unique palette indexes matches N = 5, as 5^3 = 125.)

```python
import numpy as np
from PIL import Image

# Generate repeatable random Numpy image with N^3 unique colours at most
np.random.seed(42)       # assumed seed; the original value was not preserved
h, w, N = 256, 256, 5    # assumed size; N = 5 allows at most 125 unique colours
n = np.random.randint(N, size=(h, w, 3), dtype=np.uint8)

# Intentionally set all diagonal elements same shade of blue
d = np.arange(min(h, w))
n[d, d] = [0, 0, 255]

unique_colors = np.unique(n.reshape(-1, n.shape[2]), axis=0).shape[0]

# Make Numpy image into PIL Image, palettise, convert back to Numpy array and check diagonals
a2 = np.array(Image.fromarray(n).convert('P'))

# Look at diagonals - should all be the same
print(a2[d, d])
print('%s %d' % ("Number of unique colors: ", np.unique(a2).shape[0]))
```

With the diagonal pixels' values printed, the 8-bit image in mode `'P'` contained 125 unique palette indexes. It seems PIL will perform dithering no matter what.
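The palette-index behaviour discussed above can be checked directly. Below is a minimal sketch (assuming Pillow and NumPy are installed); the 4x4 image, its two colours, and all variable names are illustrative choices, not from the original post. With only two colours there is nothing to dither, so quantising to a palette first guarantees that every pixel sharing an RGB value receives the same index:

```python
import numpy as np
from PIL import Image

# Tiny RGB image with exactly two colours: left half pure blue, right half pure red.
a = np.zeros((4, 4, 3), dtype=np.uint8)
a[:, :2] = [0, 0, 255]
a[:, 2:] = [255, 0, 0]

im = Image.fromarray(a)

# Quantise first (method=2 is FASTOCTREE). Only 2 distinct colours exist,
# so there is no quantisation error to diffuse and no dithering can occur.
p = im.quantize(colors=256, method=2)
idx = np.array(p)  # palette indexes, not colours

blue_idx = np.unique(idx[:, :2])  # all blue pixels share one index
red_idx = np.unique(idx[:, 2:])   # all red pixels share another

# The palette is a flat list [R0, G0, B0, R1, G1, B1, ...];
# recover the RGB triplet stored at the blue pixels' index.
pal = p.getpalette()
i = int(blue_idx[0])
print(pal[3 * i : 3 * i + 3])
```

Indexing back into `getpalette()` is how a viewer turns the 8-bit index array into colours again, which is why the same index array would mean something entirely different in a greyscale (`'L'`) image.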