Oh dear!
Compression, in a nutshell, relies on being able to spot patterns in data. So instead of "0000000000" the zip might store "10x'0'". That's a trivial example (it's basically run-length encoding, which isn't what zip actually does), but it conveys the gist of what's going on here.
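To make that concrete, here's a minimal run-length encoder in Python. It's only an illustration of "spotting patterns", not a sketch of how zip itself works:

```python
from itertools import groupby

def rle_encode(data: str) -> str:
    # Collapse each run of identical characters into "Nx'c'".
    return " ".join(f"{len(list(run))}x'{ch}'" for ch, run in groupby(data))

print(rle_encode("0000000000"))   # 10x'0'
print(rle_encode("aaabccccd"))    # 3x'a' 1x'b' 4x'c' 1x'd'
```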
Most zip formats create a file that consists of (1) a lookup table/key and (2) the compressed data. To decompress, you take a chunk of data from (2) and use (1) to work out what it originally was. Clear as mud?
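Here's a rough sketch of that table-plus-data idea, using a made-up scheme where common phrases get swapped for one-byte codes. Real zip (DEFLATE) is far cleverer, but the shape is the same:

```python
def compress(text: str, table: dict[str, str]) -> str:
    # (1) the lookup table maps long, repeated substrings to short codes;
    # (2) the "compressed data" is the text with those substitutions applied.
    for phrase, code in table.items():
        text = text.replace(phrase, code)
    return text

def decompress(data: str, table: dict[str, str]) -> str:
    # Use the same table the other way round to recover the original text.
    for phrase, code in table.items():
        data = data.replace(code, phrase)
    return data

table = {"the quick brown fox": "\x01", "jumps over": "\x02"}
packed = compress("the quick brown fox jumps over the lazy dog", table)
assert decompress(packed, table) == "the quick brown fox jumps over the lazy dog"
```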
So when you recompress an already compressed file, you're adding another lookup table to data that has virtually no repeating patterns. That is, the data can't be compressed any further, and you're actually increasing the file size by adding another lookup table. You can prove this to yourself by zipping a file over and over: the first pass shrinks it, every pass after that adds a little overhead, and eventually it would become larger than the original.
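A quick experiment with Python's zlib shows the trend (exact sizes will vary):

```python
import zlib

data = b"the quick brown fox jumps over the lazy dog " * 1000
print(f"original: {len(data)} bytes")
for i in range(5):
    data = zlib.compress(data)
    print(f"after pass {i + 1}: {len(data)} bytes")
# Pass 1 shrinks it dramatically; every pass after that grows it slightly,
# because the output of pass 1 has almost no repeating patterns left.
```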
The quality of a compression algorithm is reflected in its compression ratio. My example above would actually work very badly. But maths whizzkids have come up with much cleverer ways of achieving this objective. 7-Zip (built on LZMA) and bzip2 are two of the most effective compressors to become freely available in recent years.
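If you want to compare ratios yourself, Python's standard library happens to ship zlib, bz2 and lzma (the algorithm behind 7-Zip). The file path below is just whatever large-ish text file you have handy; /usr/share/dict/words exists on most Unix systems:

```python
import bz2
import lzma
import zlib

data = open("/usr/share/dict/words", "rb").read()  # any large text file will do

for name, compress in [("zlib", zlib.compress), ("bz2", bz2.compress), ("lzma", lzma.compress)]:
    ratio = len(compress(data)) / len(data)
    print(f"{name}: {ratio:.2%} of the original size")
```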
Executables and JPEGs will compress very badly, because they have usually already been compressed. JPEG is a lossy compression format, so most of the redundancy has already been squeezed out; executables are often packed after compilation to reduce file size and to make the code harder to reverse-engineer.
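You can simulate this without hunting for a JPEG: compress some text once, then treat that output as the "already compressed" file and try again.

```python
import zlib

plain = b"To be, or not to be, that is the question. " * 500
already_compressed = zlib.compress(plain)   # stands in for a JPEG or packed .exe

print(f"plain text:         {len(zlib.compress(plain)) / len(plain):.2%} of original")
print(f"already compressed: {len(zlib.compress(already_compressed)) / len(already_compressed):.2%} of original")
# The second ratio comes out at or above 100% -- there's nothing left to squeeze.
```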