Basically, there is no real limit: the more high-level computations we include, even ones that could take a very long time, the higher the lossless data compression we can achieve. However, these higher-level computations are in general not worth the effort. I am talking about very advanced things like building 3D models of the actors in movies, recognizing them, and storing their position and pose in each frame.
What does work is easier, more straightforward computations, like shifting the objects in a frame along with the camera movement and storing only the error pixels: the pixels that were not predicted correctly, plus the new pixels that come into the image from outside it.
The sky is the limit, but experience and practice teach us that movies are already stored very efficiently, as one can easily calculate how many megabytes it takes to store a movie without compression with
H × W (height × width) pixels per frame
n frames per second
s seconds of movie length
256 colors => 8 bits = 1 byte per pixel
Now if we take
480 × 640 pixels, for example,
n = 25
movie length of 2 hours = 120 minutes = 7200 seconds => s = 7200
So we have 480 × 640 × 25 × 7200 = 55,296,000,000 bytes ≈ 52,734 megabytes.
(1 kB = 1024 bytes and 1 MB = 1024 kB = 1024² bytes)
As this kind of movie normally takes only around 500 MB of storage, we already have a compression factor of about 100 these days for this kind of movie. That is a huge compression.
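The arithmetic above can be checked with a short script (the ~500 MB compressed size is the rough figure assumed in the text):

```python
# Uncompressed size of the example movie, and the resulting compression factor.
H, W = 480, 640          # pixels per frame
n = 25                   # frames per second
s = 7200                 # movie length in seconds (2 hours)
bytes_per_pixel = 1      # 256 colors => 8 bits = 1 byte

raw_bytes = H * W * n * s * bytes_per_pixel
raw_mb = raw_bytes / 1024**2          # 1 MB = 1024**2 bytes
factor = raw_mb / 500                 # assumed compressed size of ~500 MB

print(raw_bytes, round(raw_mb), round(factor))  # 55296000000 52734 105
```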
There are several reasons why we can have such a huge lossless compression:
- Neighboring pixels often have similar colors
- The successive frames are very much alike, and when the camera moves fast, we can calculate a predicted frame, taking into account the direction in which the camera moves, and store only the pixels that differ from this prediction
- It is possible to detect patterns, like stripes
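As a minimal sketch of the second idea, assuming a simple horizontal camera pan and grayscale frames stored as lists of lists (`predict_frame` and `error_pixels` are hypothetical helper names, not part of any real codec):

```python
def predict_frame(prev, dx):
    """Predict the next frame by shifting each row of `prev` right by dx pixels.
    Columns that enter from outside the image are unknown (None)."""
    w = len(prev[0])
    return [[None] * dx + row[: w - dx] for row in prev]

def error_pixels(actual, predicted):
    """Keep only (row, col, value) triples where the prediction was wrong or missing."""
    return [
        (r, c, actual[r][c])
        for r, row in enumerate(actual)
        for c, _ in enumerate(row)
        if predicted[r][c] != actual[r][c]
    ]

prev = [[1, 2, 3],
        [4, 5, 6]]
# Camera pans so the image content shifts one pixel to the right:
actual = [[9, 1, 2],
          [9, 4, 5]]
print(error_pixels(actual, predict_frame(prev, 1)))  # [(0, 0, 9), (1, 0, 9)]
```

Only two of the six pixels need to be stored here; real codecs estimate the motion per block rather than per frame, but the principle is the same.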
But these are pretty much all low-level constructs for compression. The more pictures and movies a program analyses and sees, the more it can compress, in general and in theory, by keeping models of cities or actors in memory. These kinds of compression are not very fruitful though, and would yield only a minor improvement, as cities change and actors have a large number of different facial expressions and also get older, …
So there is no upper limit. We do have several lower limits though, like the one given by the calculation of entropy in lossless compression.
The formula for entropy is
f = (-p_1 log(p_1) - p_2 log(p_2) - … - p_n log(p_n)) / log(n)
where p_i is the frequency with which color i occurs and n is the number of colors.
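The formula translates directly into code. A small sketch, assuming `freqs` holds the color frequencies p_1 … p_n summing to 1:

```python
import math

def normalized_entropy(freqs):
    """f = (-p_1 log p_1 - ... - p_n log p_n) / log n."""
    n = len(freqs)
    return -sum(p * math.log(p) for p in freqs if p > 0) / math.log(n)

# n equally frequent colors give f = 1: no compression possible by this bound.
print(normalized_entropy([0.25, 0.25, 0.25, 0.25]))  # 1.0
```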
For black-and-white images we have
f = (-p log(p) - (1 - p) log(1 - p)) / log(2)
with p = the fraction of black pixels
So a thin line drawing on a mostly white background will give us a lower bound on compression, e.g. p = 0.1 => f = 0.469 = 46.9%, so the image can be compressed to less than half of its original size without losing information (lossless) with known compression algorithms like Huffman coding.
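For the black-and-white case the formula becomes a one-liner, and the p = 0.1 example works out as claimed:

```python
import math

def binary_entropy(p):
    """f = (-p log p - (1-p) log(1-p)) / log 2, for the fraction p of black pixels."""
    if p in (0.0, 1.0):
        return 0.0  # an all-white or all-black image carries no information
    return (-p * math.log(p) - (1 - p) * math.log(1 - p)) / math.log(2)

print(round(binary_entropy(0.1), 3))  # 0.469
```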
For color images we can reach higher compression according to the entropy formula. Note though that the entropy formula only takes into account the number of pixels with a certain color, and does not exploit the spatial information that pixels with the same color tones are often near each other. So we can do much better for color pictures with low-level compression.
For videos, we can predict the next frame as mentioned above and store only the error pixels, so the entropy formula will give a low value.
With high-level compression we could still do better than the current algorithms that compress the movies one can download on the Internet, but this requires a lot of calculation time and strong processors, and the yield is often not worth the investment in CPU time. Moreover, we would then need to compress a whole library of similar objects together, as one object (one picture or one movie) often does not contain enough information to build a model of the objects in it.