test.gz is a 10-million-line file containing sequential numbers.
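
For reproducibility, a file like this can be generated with a short script along these lines (an assumption on my part; the exact way test.gz was created isn't part of the timings):

import gzip

# Assumed generation step: 10 million sequential numbers, one per line.
f = gzip.open("test.gz", "wb")
for i in range(1, 10000001):
    f.write(("%d\n" % i).encode("ascii"))
f.close()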

Caveat: the testing machine is dual-core, so the zcat subprocess runs in parallel with the Python process; the os.popen approach would be a bit slower relative to the others on a single-core machine.

% time zcat test.gz > /dev/null
0.76s user 0.00s system 99% cpu 0.771 total

% time python2.6 -c 'import os; print all(line for line in os.popen("zcat test.gz"))'
True
2.44s user 0.02s system 156% cpu 1.574 total

% time python2.6 -c 'import gzip; print all(line for line in gzip.open("test.gz"))'
True
72.67s user 0.02s system 99% cpu 1:12.78 total

% time python2.7 -c 'import gzip; print all(line for line in gzip.open("test.gz"))'
True
28.49s user 0.02s system 99% cpu 28.543 total

% time python2.7 -c 'import gzip, io; print all(line for line in io.BufferedReader(gzip.open("test.gz")))'
True
6.54s user 0.01s system 99% cpu 6.563 total

% time python3.2 -c 'import gzip; print(all(line for line in gzip.open("test.gz")))'
True
18.41s user 0.02s system 99% cpu 18.448 total

% time python3.2 -c 'import gzip, io; print(all(line for line in io.BufferedReader(gzip.open("test.gz"))))'
True
4.49s user 0.02s system 99% cpu 4.519 total

Conclusion: Python 3.2's gzip module is faster at decompression than any of the other versions. But os.popen("zcat ...") is still way, way faster than any of the gzip modules by themselves, so use that if you are reading very large gzipped files on 2.6 or below. Versions 2.7 and 3.2, however, let you wrap the opened gzip file in io.BufferedReader(), which performs almost as well as shelling out to zcat.
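
In practice, the two fast patterns from the timings above look roughly like this (a sketch; replace the pass with whatever per-line processing you actually need):

import gzip, io

# Python 2.7 / 3.2: wrap the GzipFile in io.BufferedReader so line
# iteration reads from a buffer instead of hitting the gzip stream
# for every line.
f = io.BufferedReader(gzip.open("test.gz"))
for line in f:
    pass  # process each line here
f.close()

# Python 2.6 and earlier: shell out to zcat instead, which beats the
# gzip module by a wide margin on large files.
import os
f = os.popen("zcat test.gz")
for line in f:
    pass  # process each line here
f.close()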