#!/usr/bin/env perl
#
# ====================================================================
# Written by Andy Polyakov <appro@openssl.org> for the OpenSSL
# project. The module is, however, dual licensed under OpenSSL and
# CRYPTOGAMS licenses depending on where you obtain it. For further
# details see http://www.openssl.org/~appro/cryptogams/.
# ====================================================================
#
# March, May, June 2010
#
# The module implements the "4-bit" GCM GHASH function and the
# underlying single multiplication operation in GF(2^128). "4-bit"
# means that it uses a 256-byte per-key table [+64/128 bytes of fixed
# table]. It has two code paths: vanilla x86 and vanilla MMX. The
# former is executed on 486 and Pentium, the latter on all others.
# MMX GHASH features a so-called "528B" variant of the "4-bit" method,
# utilizing an additional 256+16 bytes of per-key storage [+512 bytes
# of shared table]. Performance results are for the streamed GHASH
# subroutine and are expressed in cycles per processed byte, less is
# better:
#
#		gcc 2.95.3(*)	MMX assembler	x86 assembler
#
# Pentium	105/111(**)	-		50
# PIII		68 /75		12.2		24
# P4		125/125		17.8		84(***)
# Opteron	66 /70		10.1		30
# Core2		54 /67		8.4		18
#
# (*)	gcc 3.4.x was observed to generate a few percent slower code,
#	which is one of the reasons why the 2.95.3 results were chosen;
#	another reason is the lack of 3.4.x results for older CPUs.
#	Comparison with MMX results is not completely fair, because C
#	results are for the vanilla "256B" implementation, while
#	assembler results are for "528B";-)
# (**)	second number is the result for code compiled with the -fPIC
#	flag, which is actually more relevant, because the assembler
#	code is position-independent;
# (***)	see comment in non-MMX routine for further details;
#
# To summarize, it's >2-5 times faster than gcc-generated code. To
# anchor it to something else, SHA1 assembler processes one byte in
# 11-13 cycles on contemporary x86 cores. As for the choice of MMX in
# particular, see the comment at the end of the file...

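# ---------------------------------------------------------------------
# A minimal pure-Perl reference model of the "4-bit" method, added for
# illustration only: it is never called by the generator below, and it
# works in plain polynomial convention, whereas the assembler code
# works in GCM's bit-reflected convention and therefore shifts in the
# opposite direction. Like the real thing, it precomputes the sixteen
# multiples n(x)*H and consumes the other operand a nibble at a time.

use Math::BigInt;

my $MOD = Math::BigInt->bone->blsft(128)
		      ->bxor(Math::BigInt->new(0x87));	# x^128+x^7+x^2+x+1

sub _ref_modg {			# reduce a polynomial of degree<132
    my $r = shift;
    for my $d (reverse 128..131) {
	$r->bxor($MOD->copy->blsft($d-128))
	    if ($r->copy->brsft($d)->band(1)->is_one());
    }
    return $r;
}

sub ref_gf128mul_4bit {
    my ($a,$b) = @_;		# Math::BigInt polynomials of degree<128
    my @T = (Math::BigInt->bzero(),$b->copy);	# T[n] = n(x)*b mod g
    for my $n (2..15) {
	$T[$n] = ($n&1) ? $T[$n-1]->copy->bxor($T[1])
			: _ref_modg($T[$n/2]->copy->blsft(1));
    }
    my $r = Math::BigInt->bzero();
    for (my $i=124;$i>=0;$i-=4) {		# a nibble at a time...
	$r = _ref_modg($r->blsft(4));		# r *= x^4 mod g
	$r->bxor($T[$a->copy->brsft($i)->band(15)->numify()]);
    }
    return $r;
}
# ---------------------------------------------------------------------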
# May 2010
#
# Added a PCLMULQDQ version performing at 2.10 cycles per processed
# byte. The question is how close is it to the theoretical limit? The
# pclmulqdq instruction latency appears to be 14 cycles and there
# can't be more than 2 of them executing at any given time. This means
# that a single Karatsuba multiplication would take 28 cycles *plus* a
# few cycles for pre- and post-processing. Then the multiplication has
# to be followed by modulo-reduction. Given that the aggregated
# reduction method [see "Carry-less Multiplication and Its Usage for
# Computing the GCM Mode" white paper by Intel] allows you to perform
# the reduction only once in a while, we can assume that asymptotic
# performance can be estimated as (28+Tmod/Naggr)/16, where Tmod is
# the time to perform the reduction and Naggr is the aggregation
# factor.
#
# Before we proceed to this implementation, let's have a closer look
# at the best-performing code suggested by Intel in their white paper.
# By tracing inter-register dependencies Tmod is estimated as ~19
# cycles and the Naggr chosen by Intel is 4, resulting in 2.05 cycles
# per processed byte. As implied, this is quite an optimistic
# estimate, because it does not account for Karatsuba pre- and
# post-processing, which for a single multiplication is ~5 cycles.
# Unfortunately Intel does not provide performance data for GHASH
# alone. But benchmarking AES_GCM_encrypt ripped out of Fig. 15 of the
# white paper with aadt alone resulted in 2.46 cycles per byte out of
# a 16KB buffer. Note that the result accounts even for pre-computing
# of the degrees of the hash key H, but its portion is negligible at
# 16KB buffer size.
#
# Moving on to the implementation in question. Tmod is estimated as
# ~13 cycles and Naggr is 2, giving an asymptotic performance of ...
# 2.16. How is it possible that the measured performance is better
# than the optimistic theoretical estimate? There is one thing Intel
# failed to recognize. By serializing GHASH with CTR in the same
# subroutine, the former's performance is really limited to the above
# (Tmul + Tmod/Naggr) equation. But if the GHASH procedure is
# detached, the modulo-reduction can be interleaved with Naggr-1
# multiplications at instruction level and under ideal conditions even
# disappear from the equation. So the optimistic theoretical estimate
# for this implementation is ... 28/16=1.75, and not 2.16. Well, it's
# probably way too optimistic, at least for such a small Naggr. I'd
# argue that (28+Tproc/Naggr)/16, where Tproc is the time required for
# Karatsuba pre- and post-processing, is a more realistic estimate. In
# this case it gives ... 1.91 cycles. In other words, depending on how
# well we can interleave the reduction with one of the two
# multiplications, the performance should be between 1.91 and 2.16. As
# already mentioned, this implementation processes one byte out of an
# 8KB buffer in 2.10 cycles, while the x86_64 counterpart does so in
# 2.02. x86_64 performance is better because the larger register bank
# allows us to interleave the reduction and multiplication better.
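#
# With Tmul=28, the estimates above work out as follows:
#
#	Intel's code:		(28 + 19/4)/16	= ~2.05 cycles/byte
#	this implementation:	(28 + 13/2)/16	= ~2.16
#	ideal interleaving:	 28/16		=  1.75
#	realistic (Tproc=5):	(28 +  5/2)/16	= ~1.91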
#
# Does it make sense to increase Naggr? To start with, it's virtually
# impossible in 32-bit mode, because of the limited register bank
# capacity. Otherwise the improvement has to be weighed against slower
# setup, as well as code size and complexity increase. As even the
# optimistic estimate doesn't promise a 30% performance improvement,
# there are currently no plans to increase Naggr.
#
# Special thanks to David Woodhouse <dwmw2@infradead.org> for
# providing access to a Westmere-based system on behalf of Intel
# Open Source Technology Centre.

# January 2011
#
# Tweaked to optimize transitions between integer and FP operations
# on the same XMM register, the PCLMULQDQ subroutine was measured to
# process one byte in 2.07 cycles on Sandy Bridge, and in 2.12 on
# Westmere. The minor regression on Westmere is outweighed by a ~15%
# improvement on Sandy Bridge. Strangely enough, an attempt to modify
# the 64-bit code in a similar manner resulted in almost 20%
# degradation on Sandy Bridge, where the original 64-bit code
# processes one byte in 1.95 cycles.

$0 =~ m/(.*[\/\\])[^\/\\]+$/; $dir=$1;
push(@INC,"${dir}","${dir}../../perlasm");
require "x86asm.pl";

&asm_init($ARGV[0],"ghash-x86.pl",$x86only = $ARGV[$#ARGV] eq "386");

$sse2=0;
for (@ARGV) { $sse2=1 if (/-DOPENSSL_IA32_SSE2/); }
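
# A typical invocation (an illustrative example; the output flavour
# comes first, optional defines after):
#
#	perl ghash-x86.pl elf -DOPENSSL_IA32_SSE2 > ghash-x86.s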

($Zhh,$Zhl,$Zlh,$Zll) = ("ebp","edx","ecx","ebx");
$inp  = "edi";
$Htbl = "esi";

$unroll = 0;	# Affects x86 loop. Folded loop performs ~7% worse
		# than unrolled, which has to be weighed against
		# 2.5x x86-specific code size reduction.

sub x86_loop {
    my $off = shift;
    my $rem = "eax";

	&mov	($Zhh,&DWP(4,$Htbl,$Zll));
	&mov	($Zhl,&DWP(0,$Htbl,$Zll));
	&mov	($Zlh,&DWP(12,$Htbl,$Zll));
	&mov	($Zll,&DWP(8,$Htbl,$Zll));
	&xor	($rem,$rem);	# avoid partial register stalls on PIII

	# shrd practically kills P4, 2.5x deterioration, but P4 has
	# MMX code-path to execute. shrd runs a tad faster [than twice
	# the shifts, moves and ors] on pre-MMX Pentium (as well as
	# PIII and Core2), *but* minimizes code size, spares a register
	# and thus allows folding the loop...
	if (!$unroll) {
	my $cnt = $inp;
	&mov	($cnt,15);
	&jmp	(&label("x86_loop"));
	&set_label("x86_loop",16);
	    for($i=1;$i<=2;$i++) {
		&mov	(&LB($rem),&LB($Zll));
		&shrd	($Zll,$Zlh,4);
		&and	(&LB($rem),0xf);
		&shrd	($Zlh,$Zhl,4);
		&shrd	($Zhl,$Zhh,4);
		&shr	($Zhh,4);
		&xor	($Zhh,&DWP($off+16,"esp",$rem,4));

		&mov	(&LB($rem),&BP($off,"esp",$cnt));
		if ($i&1) {
			&and	(&LB($rem),0xf0);
		} else {
			&shl	(&LB($rem),4);
		}

		&xor	($Zll,&DWP(8,$Htbl,$rem));
		&xor	($Zlh,&DWP(12,$Htbl,$rem));
		&xor	($Zhl,&DWP(0,$Htbl,$rem));
		&xor	($Zhh,&DWP(4,$Htbl,$rem));

		if ($i&1) {
			&dec	($cnt);
			&js	(&label("x86_break"));
		} else {
			&jmp	(&label("x86_loop"));
		}
	    }
	&set_label("x86_break",16);
	} else {
	    for($i=1;$i<32;$i++) {
		&comment($i);
		&mov	(&LB($rem),&LB($Zll));
		&shrd	($Zll,$Zlh,4);
		&and	(&LB($rem),0xf);
		&shrd	($Zlh,$Zhl,4);
		&shrd	($Zhl,$Zhh,4);
		&shr	($Zhh,4);
		&xor	($Zhh,&DWP($off+16,"esp",$rem,4));

		if ($i&1) {
			&mov	(&LB($rem),&BP($off+15-($i>>1),"esp"));
			&and	(&LB($rem),0xf0);
		} else {
			&mov	(&LB($rem),&BP($off+15-($i>>1),"esp"));
			&shl	(&LB($rem),4);
		}

		&xor	($Zll,&DWP(8,$Htbl,$rem));
		&xor	($Zlh,&DWP(12,$Htbl,$rem));
		&xor	($Zhl,&DWP(0,$Htbl,$rem));
		&xor	($Zhh,&DWP(4,$Htbl,$rem));
	    }
	}
	&bswap	($Zll);
	&bswap	($Zlh);
	&bswap	($Zhl);
	if (!$x86only) {
		&bswap	($Zhh);
	} else {
		&mov	("eax",$Zhh);
		&bswap	("eax");
		&mov	($Zhh,"eax");
	}
}

if ($unroll) {
    &function_begin_B("_x86_gmult_4bit_inner");
	&x86_loop(4);
	&ret	();
    &function_end_B("_x86_gmult_4bit_inner");
}

sub deposit_rem_4bit {
    my $bias = shift;

	&mov	(&DWP($bias+0, "esp"),0x0000<<16);
	&mov	(&DWP($bias+4, "esp"),0x1C20<<16);
	&mov	(&DWP($bias+8, "esp"),0x3840<<16);
	&mov	(&DWP($bias+12,"esp"),0x2460<<16);
	&mov	(&DWP($bias+16,"esp"),0x7080<<16);
	&mov	(&DWP($bias+20,"esp"),0x6CA0<<16);
	&mov	(&DWP($bias+24,"esp"),0x48C0<<16);
	&mov	(&DWP($bias+28,"esp"),0x54E0<<16);
	&mov	(&DWP($bias+32,"esp"),0xE100<<16);
	&mov	(&DWP($bias+36,"esp"),0xFD20<<16);
	&mov	(&DWP($bias+40,"esp"),0xD940<<16);
	&mov	(&DWP($bias+44,"esp"),0xC560<<16);
	&mov	(&DWP($bias+48,"esp"),0x9180<<16);
	&mov	(&DWP($bias+52,"esp"),0x8DA0<<16);
	&mov	(&DWP($bias+56,"esp"),0xA9C0<<16);
	&mov	(&DWP($bias+60,"esp"),0xB5E0<<16);
}
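
# The sixteen words deposited above are rem_4bit[i] = (i x 0x1C2)<<4,
# "x" denoting carry-less multiplication; the <<16 merely parks them
# in the upper halves of the 64-bit entries. A hypothetical generator,
# never called here (the static rem_4bit table at the end of the file
# holds the same series shifted by $S, and rem_8bit is the 256-entry
# analogue without the <<4):
sub gen_rem_4bit {
    my @tbl;
    for my $i (0..15) {
	my $r = 0;
	for my $b (0..3) { $r ^= 0x1C2<<$b if (($i>>$b)&1); }
	push(@tbl,$r<<4);	# 0x0000,0x1C20,0x3840,0x2460,...
    }
    return @tbl;
}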

$suffix = $x86only ? "" : "_x86";

&function_begin("gcm_gmult_4bit".$suffix);
	&stack_push(16+4+1);			# +1 for stack alignment
	&mov	($inp,&wparam(0));		# load Xi
	&mov	($Htbl,&wparam(1));		# load Htable

	&mov	($Zhh,&DWP(0,$inp));		# load Xi[16]
	&mov	($Zhl,&DWP(4,$inp));
	&mov	($Zlh,&DWP(8,$inp));
	&mov	($Zll,&DWP(12,$inp));

	&deposit_rem_4bit(16);

	&mov	(&DWP(0,"esp"),$Zhh);		# copy Xi[16] on stack
	&mov	(&DWP(4,"esp"),$Zhl);
	&mov	(&DWP(8,"esp"),$Zlh);
	&mov	(&DWP(12,"esp"),$Zll);
	&shr	($Zll,20);
	&and	($Zll,0xf0);

	if ($unroll) {
		&call	("_x86_gmult_4bit_inner");
	} else {
		&x86_loop(0);
		&mov	($inp,&wparam(0));
	}

	&mov	(&DWP(12,$inp),$Zll);
	&mov	(&DWP(8,$inp),$Zlh);
	&mov	(&DWP(4,$inp),$Zhl);
	&mov	(&DWP(0,$inp),$Zhh);
	&stack_pop(16+4+1);
&function_end("gcm_gmult_4bit".$suffix);

&function_begin("gcm_ghash_4bit".$suffix);
	&stack_push(16+4+1);			# +1 for 64-bit alignment
	&mov	($Zll,&wparam(0));		# load Xi
	&mov	($Htbl,&wparam(1));		# load Htable
	&mov	($inp,&wparam(2));		# load in
	&mov	("ecx",&wparam(3));		# load len
	&add	("ecx",$inp);
	&mov	(&wparam(3),"ecx");

	&mov	($Zhh,&DWP(0,$Zll));		# load Xi[16]
	&mov	($Zhl,&DWP(4,$Zll));
	&mov	($Zlh,&DWP(8,$Zll));
	&mov	($Zll,&DWP(12,$Zll));

	&deposit_rem_4bit(16);

    &set_label("x86_outer_loop",16);
	&xor	($Zll,&DWP(12,$inp));		# xor with input
	&xor	($Zlh,&DWP(8,$inp));
	&xor	($Zhl,&DWP(4,$inp));
	&xor	($Zhh,&DWP(0,$inp));
	&mov	(&DWP(12,"esp"),$Zll);		# dump it on stack
	&mov	(&DWP(8,"esp"),$Zlh);
	&mov	(&DWP(4,"esp"),$Zhl);
	&mov	(&DWP(0,"esp"),$Zhh);

	&shr	($Zll,20);
	&and	($Zll,0xf0);

	if ($unroll) {
		&call	("_x86_gmult_4bit_inner");
	} else {
		&x86_loop(0);
		&mov	($inp,&wparam(2));
	}
	&lea	($inp,&DWP(16,$inp));
	&cmp	($inp,&wparam(3));
	&mov	(&wparam(2),$inp)	if (!$unroll);
	&jb	(&label("x86_outer_loop"));

	&mov	($inp,&wparam(0));	# load Xi
	&mov	(&DWP(12,$inp),$Zll);
	&mov	(&DWP(8,$inp),$Zlh);
	&mov	(&DWP(4,$inp),$Zhl);
	&mov	(&DWP(0,$inp),$Zhh);
	&stack_pop(16+4+1);
&function_end("gcm_ghash_4bit".$suffix);
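
# What the routine above computes, as a reference sketch: GHASH folds
# every 16-byte block into Xi and multiplies by H in GF(2^128). A
# hypothetical, unused model on top of ref_gf128mul_4bit from the
# sketch near the top of the file (same plain-polynomial caveat):
sub ref_ghash {
    my ($Xi,$H,@blocks) = @_;	# Math::BigInt values
    $Xi = ref_gf128mul_4bit($Xi->copy->bxor($_),$H) for (@blocks);
    return $Xi;
}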

if (!$x86only) {{{

&static_label("rem_4bit");

if (!$sse2) {{	# pure-MMX "May" version...

$S=12;		# shift factor for rem_4bit

&function_begin_B("_mmx_gmult_4bit_inner");
# MMX version performs 3.5 times better on P4 (see comment in non-MMX
# routine for further details), 100% better on Opteron, ~70% better
# on Core2 and PIII... In other words effort is considered to be well
# spent... Since initial release the loop was unrolled in order to
# "liberate" register previously used as loop counter. Instead it's
# used to optimize critical path in 'Z.hi ^= rem_4bit[Z.lo&0xf]'.
# The path involves move of Z.lo from MMX to integer register,
# effective address calculation and finally merge of value to Z.hi.
# Reference to rem_4bit is scheduled so late that I had to >>4
# rem_4bit elements. This resulted in a 20-45% improvement on
# contemporary µ-archs.
{
    my $cnt;
    my $rem_4bit = "eax";
    my @rem = ($Zhh,$Zll);
    my $nhi = $Zhl;
    my $nlo = $Zlh;

    my ($Zlo,$Zhi) = ("mm0","mm1");
    my $tmp = "mm2";

	&xor	($nlo,$nlo);	# avoid partial register stalls on PIII
	&mov	($nhi,$Zll);
	&mov	(&LB($nlo),&LB($nhi));
	&shl	(&LB($nlo),4);
	&and	($nhi,0xf0);
	&movq	($Zlo,&QWP(8,$Htbl,$nlo));
	&movq	($Zhi,&QWP(0,$Htbl,$nlo));
	&movd	($rem[0],$Zlo);

	for ($cnt=28;$cnt>=-2;$cnt--) {
	    my $odd = $cnt&1;
	    my $nix = $odd ? $nlo : $nhi;

		&shl	(&LB($nlo),4)			if ($odd);
		&psrlq	($Zlo,4);
		&movq	($tmp,$Zhi);
		&psrlq	($Zhi,4);
		&pxor	($Zlo,&QWP(8,$Htbl,$nix));
		&mov	(&LB($nlo),&BP($cnt/2,$inp))	if (!$odd && $cnt>=0);
		&psllq	($tmp,60);
		&and	($nhi,0xf0)			if ($odd);
		&pxor	($Zhi,&QWP(0,$rem_4bit,$rem[1],8)) if ($cnt<28);
		&and	($rem[0],0xf);
		&pxor	($Zhi,&QWP(0,$Htbl,$nix));
		&mov	($nhi,$nlo)			if (!$odd && $cnt>=0);
		&movd	($rem[1],$Zlo);
		&pxor	($Zlo,$tmp);

		push	(@rem,shift(@rem));		# "rotate" registers
	}

	&mov	($inp,&DWP(4,$rem_4bit,$rem[1],8));	# last rem_4bit[rem]

	&psrlq	($Zlo,32);	# lower part of Zlo is already there
	&movd	($Zhl,$Zhi);
	&psrlq	($Zhi,32);
	&movd	($Zlh,$Zlo);
	&movd	($Zhh,$Zhi);
	&shl	($inp,4);	# compensate for rem_4bit[i] being >>4

	&bswap	($Zll);
	&bswap	($Zhl);
	&bswap	($Zlh);
	&xor	($Zhh,$inp);
	&bswap	($Zhh);

	&ret	();
}
&function_end_B("_mmx_gmult_4bit_inner");

&function_begin("gcm_gmult_4bit_mmx");
	&mov	($inp,&wparam(0));	# load Xi
	&mov	($Htbl,&wparam(1));	# load Htable

	&call	(&label("pic_point"));
	&set_label("pic_point");
	&blindpop("eax");
	&lea	("eax",&DWP(&label("rem_4bit")."-".&label("pic_point"),"eax"));

	&movz	($Zll,&BP(15,$inp));

	&call	("_mmx_gmult_4bit_inner");

	&mov	($inp,&wparam(0));	# load Xi
	&emms	();
	&mov	(&DWP(12,$inp),$Zll);
	&mov	(&DWP(4,$inp),$Zhl);
	&mov	(&DWP(8,$inp),$Zlh);
	&mov	(&DWP(0,$inp),$Zhh);
&function_end("gcm_gmult_4bit_mmx");

# Streamed version performs 20% better on P4, 7% on Opteron,
# 10% on Core2 and PIII...
&function_begin("gcm_ghash_4bit_mmx");
	&mov	($Zhh,&wparam(0));	# load Xi
	&mov	($Htbl,&wparam(1));	# load Htable
	&mov	($inp,&wparam(2));	# load in
	&mov	($Zlh,&wparam(3));	# load len

	&call	(&label("pic_point"));
	&set_label("pic_point");
	&blindpop("eax");
	&lea	("eax",&DWP(&label("rem_4bit")."-".&label("pic_point"),"eax"));

	&add	($Zlh,$inp);
	&mov	(&wparam(3),$Zlh);	# len to point at the end of input
	&stack_push(4+1);		# +1 for stack alignment

	&mov	($Zll,&DWP(12,$Zhh));	# load Xi[16]
	&mov	($Zhl,&DWP(4,$Zhh));
	&mov	($Zlh,&DWP(8,$Zhh));
	&mov	($Zhh,&DWP(0,$Zhh));
	&jmp	(&label("mmx_outer_loop"));

    &set_label("mmx_outer_loop",16);
	&xor	($Zll,&DWP(12,$inp));
	&xor	($Zhl,&DWP(4,$inp));
	&xor	($Zlh,&DWP(8,$inp));
	&xor	($Zhh,&DWP(0,$inp));
	&mov	(&wparam(2),$inp);
	&mov	(&DWP(12,"esp"),$Zll);
	&mov	(&DWP(4,"esp"),$Zhl);
	&mov	(&DWP(8,"esp"),$Zlh);
	&mov	(&DWP(0,"esp"),$Zhh);

	&mov	($inp,"esp");
	&shr	($Zll,24);

	&call	("_mmx_gmult_4bit_inner");

	&mov	($inp,&wparam(2));
	&lea	($inp,&DWP(16,$inp));
	&cmp	($inp,&wparam(3));
	&jb	(&label("mmx_outer_loop"));

	&mov	($inp,&wparam(0));	# load Xi
	&emms	();
	&mov	(&DWP(12,$inp),$Zll);
	&mov	(&DWP(4,$inp),$Zhl);
	&mov	(&DWP(8,$inp),$Zlh);
	&mov	(&DWP(0,$inp),$Zhh);

	&stack_pop(4+1);
&function_end("gcm_ghash_4bit_mmx");

}} else {{	# "June" MMX version...
		# ... has slower "April" gcm_gmult_4bit_mmx with folded
		# loop. This is done to conserve code size...
$S=16;		# shift factor for rem_4bit

sub mmx_loop() {
# MMX version performs 2.8 times better on P4 (see comment in non-MMX
# routine for further details), 40% better on Opteron and Core2, 50%
# better on PIII... In other words effort is considered to be well
# spent...
    my $inp = shift;
    my $rem_4bit = shift;
    my $cnt = $Zhh;
    my $nhi = $Zhl;
    my $nlo = $Zlh;
    my $rem = $Zll;

    my ($Zlo,$Zhi) = ("mm0","mm1");
    my $tmp = "mm2";

	&xor	($nlo,$nlo);	# avoid partial register stalls on PIII
	&mov	($nhi,$Zll);
	&mov	(&LB($nlo),&LB($nhi));
	&mov	($cnt,14);
	&shl	(&LB($nlo),4);
	&and	($nhi,0xf0);
	&movq	($Zlo,&QWP(8,$Htbl,$nlo));
	&movq	($Zhi,&QWP(0,$Htbl,$nlo));
	&movd	($rem,$Zlo);
	&jmp	(&label("mmx_loop"));

    &set_label("mmx_loop",16);
	&psrlq	($Zlo,4);
	&and	($rem,0xf);
	&movq	($tmp,$Zhi);
	&psrlq	($Zhi,4);
	&pxor	($Zlo,&QWP(8,$Htbl,$nhi));
	&mov	(&LB($nlo),&BP(0,$inp,$cnt));
	&psllq	($tmp,60);
	&pxor	($Zhi,&QWP(0,$rem_4bit,$rem,8));
	&dec	($cnt);
	&movd	($rem,$Zlo);
	&pxor	($Zhi,&QWP(0,$Htbl,$nhi));
	&mov	($nhi,$nlo);
	&pxor	($Zlo,$tmp);
	&js	(&label("mmx_break"));

	&shl	(&LB($nlo),4);
	&and	($rem,0xf);
	&psrlq	($Zlo,4);
	&and	($nhi,0xf0);
	&movq	($tmp,$Zhi);
	&psrlq	($Zhi,4);
	&pxor	($Zlo,&QWP(8,$Htbl,$nlo));
	&psllq	($tmp,60);
	&pxor	($Zhi,&QWP(0,$rem_4bit,$rem,8));
	&movd	($rem,$Zlo);
	&pxor	($Zhi,&QWP(0,$Htbl,$nlo));
	&pxor	($Zlo,$tmp);
	&jmp	(&label("mmx_loop"));

    &set_label("mmx_break",16);
	&shl	(&LB($nlo),4);
	&and	($rem,0xf);
	&psrlq	($Zlo,4);
	&and	($nhi,0xf0);
	&movq	($tmp,$Zhi);
	&psrlq	($Zhi,4);
	&pxor	($Zlo,&QWP(8,$Htbl,$nlo));
	&psllq	($tmp,60);
	&pxor	($Zhi,&QWP(0,$rem_4bit,$rem,8));
	&movd	($rem,$Zlo);
	&pxor	($Zhi,&QWP(0,$Htbl,$nlo));
	&pxor	($Zlo,$tmp);

	&psrlq	($Zlo,4);
	&and	($rem,0xf);
	&movq	($tmp,$Zhi);
	&psrlq	($Zhi,4);
	&pxor	($Zlo,&QWP(8,$Htbl,$nhi));
	&psllq	($tmp,60);
	&pxor	($Zhi,&QWP(0,$rem_4bit,$rem,8));
	&movd	($rem,$Zlo);
	&pxor	($Zhi,&QWP(0,$Htbl,$nhi));
	&pxor	($Zlo,$tmp);

	&psrlq	($Zlo,32);	# lower part of Zlo is already there
	&movd	($Zhl,$Zhi);
	&psrlq	($Zhi,32);
	&movd	($Zlh,$Zlo);
	&movd	($Zhh,$Zhi);

	&bswap	($Zll);
	&bswap	($Zhl);
	&bswap	($Zlh);
	&bswap	($Zhh);
}

&function_begin("gcm_gmult_4bit_mmx");
	&mov	($inp,&wparam(0));	# load Xi
	&mov	($Htbl,&wparam(1));	# load Htable

	&call	(&label("pic_point"));
	&set_label("pic_point");
	&blindpop("eax");
	&lea	("eax",&DWP(&label("rem_4bit")."-".&label("pic_point"),"eax"));

	&movz	($Zll,&BP(15,$inp));

	&mmx_loop($inp,"eax");

	&emms	();
	&mov	(&DWP(12,$inp),$Zll);
	&mov	(&DWP(4,$inp),$Zhl);
	&mov	(&DWP(8,$inp),$Zlh);
	&mov	(&DWP(0,$inp),$Zhh);
&function_end("gcm_gmult_4bit_mmx");

######################################################################
# Below subroutine is the "528B" variant of the "4-bit" GCM GHASH
# function (see gcm128.c for details). It provides a further 20-40%
# performance improvement over the above mentioned "May" version.

&static_label("rem_8bit");

&function_begin("gcm_ghash_4bit_mmx");
{ my ($Zlo,$Zhi) = ("mm7","mm6");
  my $rem_8bit = "esi";
  my $Htbl = "ebx";

    # parameter block
    &mov	("eax",&wparam(0));		# Xi
    &mov	("ebx",&wparam(1));		# Htable
    &mov	("ecx",&wparam(2));		# inp
    &mov	("edx",&wparam(3));		# len
    &mov	("ebp","esp");			# original %esp
    &call	(&label("pic_point"));
    &set_label	("pic_point");
    &blindpop	($rem_8bit);
    &lea	($rem_8bit,&DWP(&label("rem_8bit")."-".&label("pic_point"),$rem_8bit));

    &sub	("esp",512+16+16);		# allocate stack frame...
    &and	("esp",-64);			# ...and align it
    &sub	("esp",16);			# place for (u8)(H[]<<4)

    &add	("edx","ecx");			# pointer to the end of input
    &mov	(&DWP(528+16+0,"esp"),"eax");	# save Xi
    &mov	(&DWP(528+16+8,"esp"),"edx");	# save inp+len
    &mov	(&DWP(528+16+12,"esp"),"ebp");	# save original %esp

    { my @lo  = ("mm0","mm1","mm2");
      my @hi  = ("mm3","mm4","mm5");
      my @tmp = ("mm6","mm7");
      my ($off1,$off2,$i) = (0,0,);

      &add	($Htbl,128);			# optimize for size
      &lea	("edi",&DWP(16+128,"esp"));
      &lea	("ebp",&DWP(16+256+128,"esp"));

      # decompose Htable (low and high parts are kept separately),
      # generate Htable[]>>4, (u8)(Htable[]<<4), save to stack...
      for ($i=0;$i<18;$i++) {

	&mov	("edx",&DWP(16*$i+8-128,$Htbl))		if ($i<16);
	&movq	($lo[0],&QWP(16*$i+8-128,$Htbl))	if ($i<16);
	&psllq	($tmp[1],60)				if ($i>1);
	&movq	($hi[0],&QWP(16*$i+0-128,$Htbl))	if ($i<16);
	&por	($lo[2],$tmp[1])			if ($i>1);
	&movq	(&QWP($off1-128,"edi"),$lo[1])		if ($i>0 && $i<17);
	&psrlq	($lo[1],4)				if ($i>0 && $i<17);
	&movq	(&QWP($off1,"edi"),$hi[1])		if ($i>0 && $i<17);
	&movq	($tmp[0],$hi[1])			if ($i>0 && $i<17);
	&movq	(&QWP($off2-128,"ebp"),$lo[2])		if ($i>1);
	&psrlq	($hi[1],4)				if ($i>0 && $i<17);
	&movq	(&QWP($off2,"ebp"),$hi[2])		if ($i>1);
	&shl	("edx",4)				if ($i<16);
	&mov	(&BP($i,"esp"),&LB("edx"))		if ($i<16);

	unshift	(@lo,pop(@lo));			# "rotate" registers
	unshift	(@hi,pop(@hi));
	unshift	(@tmp,pop(@tmp));
	$off1 += 8	if ($i>0);
	$off2 += 8	if ($i>1);
      }
    }

    &movq	($Zhi,&QWP(0,"eax"));
    &mov	("ebx",&DWP(8,"eax"));
    &mov	("edx",&DWP(12,"eax"));		# load Xi

&set_label("outer",16);
  { my $nlo = "eax";
    my $dat = "edx";
    my @nhi = ("edi","ebp");
    my @rem = ("ebx","ecx");
    my @red = ("mm0","mm1","mm2");
    my $tmp = "mm3";

    &xor	($dat,&DWP(12,"ecx"));		# merge input data
    &xor	("ebx",&DWP(8,"ecx"));
    &pxor	($Zhi,&QWP(0,"ecx"));
    &lea	("ecx",&DWP(16,"ecx"));		# inp+=16
    #&mov	(&DWP(528+12,"esp"),$dat);	# save inp^Xi
    &mov	(&DWP(528+8,"esp"),"ebx");
    &movq	(&QWP(528+0,"esp"),$Zhi);
    &mov	(&DWP(528+16+4,"esp"),"ecx");	# save inp

    &xor	($nlo,$nlo);
    &rol	($dat,8);
    &mov	(&LB($nlo),&LB($dat));
    &mov	($nhi[1],$nlo);
    &and	(&LB($nlo),0x0f);
    &shr	($nhi[1],4);
    &pxor	($red[0],$red[0]);
    &rol	($dat,8);			# next byte
    &pxor	($red[1],$red[1]);
    &pxor	($red[2],$red[2]);
    # Just like in the "May" version, modulo-schedule for the critical
    # path in 'Z.hi ^= rem_8bit[Z.lo&0xff^((u8)H[nhi]<<4)]<<48'. The
    # final 'pxor' is scheduled so late that rem_8bit[] has to be
    # shifted *right* by 16, which is why the last argument to pinsrw
    # is 2, which corresponds to <<32=<<48>>16...
    for ($j=11,$i=0;$i<15;$i++) {

      if ($i>0) {
	&pxor	($Zlo,&QWP(16,"esp",$nlo,8));		# Z^=H[nlo]
	&rol	($dat,8);				# next byte
	&pxor	($Zhi,&QWP(16+128,"esp",$nlo,8));

	&pxor	($Zlo,$tmp);
	&pxor	($Zhi,&QWP(16+256+128,"esp",$nhi[0],8));
	&xor	(&LB($rem[1]),&BP(0,"esp",$nhi[0]));	# rem^(H[nhi]<<4)
      } else {
	&movq	($Zlo,&QWP(16,"esp",$nlo,8));
	&movq	($Zhi,&QWP(16+128,"esp",$nlo,8));
      }

	&mov	(&LB($nlo),&LB($dat));
	&mov	($dat,&DWP(528+$j,"esp"))		if (--$j%4==0);

	&movd	($rem[0],$Zlo);
	&movz	($rem[1],&LB($rem[1]))			if ($i>0);
	&psrlq	($Zlo,8);				# Z>>=8

	&movq	($tmp,$Zhi);
	&mov	($nhi[0],$nlo);
	&psrlq	($Zhi,8);

	&pxor	($Zlo,&QWP(16+256+0,"esp",$nhi[1],8));	# Z^=H[nhi]>>4
	&and	(&LB($nlo),0x0f);
	&psllq	($tmp,56);

	&pxor	($Zhi,$red[1])				if ($i>1);
	&shr	($nhi[0],4);
	&pinsrw	($red[0],&WP(0,$rem_8bit,$rem[1],2),2)	if ($i>0);

	unshift	(@red,pop(@red));			# "rotate" registers
	unshift	(@rem,pop(@rem));
	unshift	(@nhi,pop(@nhi));
    }

    &pxor	($Zlo,&QWP(16,"esp",$nlo,8));		# Z^=H[nlo]
    &pxor	($Zhi,&QWP(16+128,"esp",$nlo,8));
    &xor	(&LB($rem[1]),&BP(0,"esp",$nhi[0]));	# rem^(H[nhi]<<4)

    &pxor	($Zlo,$tmp);
    &pxor	($Zhi,&QWP(16+256+128,"esp",$nhi[0],8));
    &movz	($rem[1],&LB($rem[1]));

    &pxor	($red[2],$red[2]);			# clear 2nd word
    &psllq	($red[1],4);

    &movd	($rem[0],$Zlo);
    &psrlq	($Zlo,4);				# Z>>=4

    &movq	($tmp,$Zhi);
    &psrlq	($Zhi,4);
    &shl	($rem[0],4);				# rem<<4

    &pxor	($Zlo,&QWP(16,"esp",$nhi[1],8));	# Z^=H[nhi]
    &psllq	($tmp,60);
    &movz	($rem[0],&LB($rem[0]));

    &pxor	($Zlo,$tmp);
    &pxor	($Zhi,&QWP(16+128,"esp",$nhi[1],8));

    &pinsrw	($red[0],&WP(0,$rem_8bit,$rem[1],2),2);
    &pxor	($Zhi,$red[1]);

    &movd	($dat,$Zlo);
    &pinsrw	($red[2],&WP(0,$rem_8bit,$rem[0],2),3);	# last is <<48

    &psllq	($red[0],12);				# correct by <<16>>4
    &pxor	($Zhi,$red[0]);
    &psrlq	($Zlo,32);
    &pxor	($Zhi,$red[2]);

    &mov	("ecx",&DWP(528+16+4,"esp"));	# restore inp
    &movd	("ebx",$Zlo);
    &movq	($tmp,$Zhi);			# 01234567
    &psllw	($Zhi,8);			# 1.3.5.7.
    &psrlw	($tmp,8);			# .0.2.4.6
    &por	($Zhi,$tmp);			# 10325476
    &bswap	($dat);
    &pshufw	($Zhi,$Zhi,0b00011011);		# 76543210
    &bswap	("ebx");

    &cmp	("ecx",&DWP(528+16+8,"esp"));	# are we done?
    &jne	(&label("outer"));
  }

    &mov	("eax",&DWP(528+16+0,"esp"));	# restore Xi
    &mov	(&DWP(12,"eax"),"edx");
    &mov	(&DWP(8,"eax"),"ebx");
    &movq	(&QWP(0,"eax"),$Zhi);

    &mov	("esp",&DWP(528+16+12,"esp"));	# restore original %esp
    &emms	();
}
&function_end("gcm_ghash_4bit_mmx");
}}

if ($sse2) {{
######################################################################
# PCLMULQDQ version.

$Xip="eax";
$Htbl="edx";
$const="ecx";
$inp="esi";
$len="ebx";

($Xi,$Xhi)=("xmm0","xmm1");	$Hkey="xmm2";
($T1,$T2,$T3)=("xmm3","xmm4","xmm5");
($Xn,$Xhn)=("xmm6","xmm7");

&static_label("bswap");

sub clmul64x64_T2 {	# minimal "register" pressure
my ($Xhi,$Xi,$Hkey)=@_;

	&movdqa		($Xhi,$Xi);		#
	&pshufd		($T1,$Xi,0b01001110);
	&pshufd		($T2,$Hkey,0b01001110);
	&pxor		($T1,$Xi);		#
	&pxor		($T2,$Hkey);

	&pclmulqdq	($Xi,$Hkey,0x00);	#######
	&pclmulqdq	($Xhi,$Hkey,0x11);	#######
	&pclmulqdq	($T1,$T2,0x00);		#######
	&xorps		($T1,$Xi);		#
	&xorps		($T1,$Xhi);		#

	&movdqa		($T2,$T1);		#
	&psrldq		($T1,8);
	&pslldq		($T2,8);		#
	&pxor		($Xhi,$T1);
	&pxor		($Xi,$T2);		#
}
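
# The three pclmulqdq's above implement Karatsuba: writing
# X = Xh*2^64 + Xl and H = Hh*2^64 + Hl, with all arithmetic
# carry-less and "+" meaning xor,
#
#	X*H = Xh*Hh*2^128 + ((Xh+Xl)*(Hh+Hl) + Xh*Hh + Xl*Hl)*2^64
#	    + Xl*Hl
#
# i.e. three 64x64 multiplications instead of four, at the price of
# the pre-/post-processing xors visible above.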

sub clmul64x64_T3 {
# Even though this subroutine offers visually better ILP, it
# was empirically found to be a tad slower than the above version,
# at least in gcm_ghash_clmul context. But it's just as well,
# because loop modulo-scheduling is possible only thanks to
# minimized "register" pressure...
my ($Xhi,$Xi,$Hkey)=@_;

	&movdqa		($T1,$Xi);		#
	&movdqa		($Xhi,$Xi);
	&pclmulqdq	($Xi,$Hkey,0x00);	#######
	&pclmulqdq	($Xhi,$Hkey,0x11);	#######
	&pshufd		($T2,$T1,0b01001110);	#
	&pshufd		($T3,$Hkey,0b01001110);
	&pxor		($T2,$T1);		#
	&pxor		($T3,$Hkey);
	&pclmulqdq	($T2,$T3,0x00);		#######
	&pxor		($T2,$Xi);		#
	&pxor		($T2,$Xhi);		#

	&movdqa		($T3,$T2);		#
	&psrldq		($T2,8);
	&pslldq		($T3,8);		#
	&pxor		($Xhi,$T2);
	&pxor		($Xi,$T3);		#
}

if (1) {		# Algorithm 9 with <<1 twist.
			# Reduction is shorter and uses only two
			# temporary registers, which makes it a better
			# candidate for interleaving with the 64x64
			# multiplication. The pre-modulo-scheduled loop
			# was found to be ~20% faster than Algorithm 5
			# below. Algorithm 9 was therefore chosen for
			# further optimization...

sub reduction_alg9 {	# 17/13 times faster than Intel version
my ($Xhi,$Xi) = @_;

	# 1st phase
	&movdqa		($T1,$Xi);		#
	&psllq		($Xi,1);
	&pxor		($Xi,$T1);		#
	&psllq		($Xi,5);		#
	&pxor		($Xi,$T1);		#
	&psllq		($Xi,57);		#
	&movdqa		($T2,$Xi);		#
	&pslldq		($Xi,8);
	&psrldq		($T2,8);		#
	&pxor		($Xi,$T1);
	&pxor		($Xhi,$T2);		#

	# 2nd phase
	&movdqa		($T2,$Xi);
	&psrlq		($Xi,5);
	&pxor		($Xi,$T2);		#
	&psrlq		($Xi,1);		#
	&pxor		($Xi,$T2);		#
	&pxor		($T2,$Xhi);
	&psrlq		($Xi,1);		#
	&pxor		($Xi,$T2);		#
}
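
# For reference, what the reduction achieves, as a hypothetical,
# unused model in plain polynomial convention (using $MOD from the
# sketch near the top of the file; alg9 performs the bit-reflected
# equivalent in a constant number of shifts and xors):
sub ref_reduce256 {
    my $z = shift;		# Math::BigInt product of degree<=254
    for my $d (reverse 128..254) {
	$z->bxor($MOD->copy->blsft($d-128))
	    if ($z->copy->brsft($d)->band(1)->is_one());
    }
    return $z;
}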

&function_begin_B("gcm_init_clmul");
	&mov		($Htbl,&wparam(0));
	&mov		($Xip,&wparam(1));

	&call		(&label("pic"));
&set_label("pic");
	&blindpop	($const);
	&lea		($const,&DWP(&label("bswap")."-".&label("pic"),$const));

	&movdqu		($Hkey,&QWP(0,$Xip));
	&pshufd		($Hkey,$Hkey,0b01001110);# dword swap

	# <<1 twist
	&pshufd		($T2,$Hkey,0b11111111);	# broadcast uppermost dword
	&movdqa		($T1,$Hkey);
	&psllq		($Hkey,1);
	&pxor		($T3,$T3);		#
	&psrlq		($T1,63);
	&pcmpgtd	($T3,$T2);		# broadcast carry bit
	&pslldq		($T1,8);
	&por		($Hkey,$T1);		# H<<=1

	# magic reduction
	&pand		($T3,&QWP(16,$const));	# 0x1c2_polynomial
	&pxor		($Hkey,$T3);		# if(carry) H^=0x1c2_polynomial

	# calculate H^2
	&movdqa		($Xi,$Hkey);
	&clmul64x64_T2	($Xhi,$Xi,$Hkey);
	&reduction_alg9	($Xhi,$Xi);

	&movdqu		(&QWP(0,$Htbl),$Hkey);	# save H
	&movdqu		(&QWP(16,$Htbl),$Xi);	# save H^2

	&ret		();
&function_end_B("gcm_init_clmul");

&function_begin_B("gcm_gmult_clmul");
	&mov		($Xip,&wparam(0));
	&mov		($Htbl,&wparam(1));

	&call		(&label("pic"));
&set_label("pic");
	&blindpop	($const);
	&lea		($const,&DWP(&label("bswap")."-".&label("pic"),$const));

	&movdqu		($Xi,&QWP(0,$Xip));
	&movdqa		($T3,&QWP(0,$const));
	&movups		($Hkey,&QWP(0,$Htbl));
	&pshufb		($Xi,$T3);

	&clmul64x64_T2	($Xhi,$Xi,$Hkey);
	&reduction_alg9	($Xhi,$Xi);

	&pshufb		($Xi,$T3);
	&movdqu		(&QWP(0,$Xip),$Xi);

	&ret	();
&function_end_B("gcm_gmult_clmul");

&function_begin("gcm_ghash_clmul");
	&mov		($Xip,&wparam(0));
	&mov		($Htbl,&wparam(1));
	&mov		($inp,&wparam(2));
	&mov		($len,&wparam(3));

	&call		(&label("pic"));
&set_label("pic");
	&blindpop	($const);
	&lea		($const,&DWP(&label("bswap")."-".&label("pic"),$const));

	&movdqu		($Xi,&QWP(0,$Xip));
	&movdqa		($T3,&QWP(0,$const));
	&movdqu		($Hkey,&QWP(0,$Htbl));
	&pshufb		($Xi,$T3);

	&sub		($len,0x10);
	&jz		(&label("odd_tail"));

	#######
	# Xi+2 =[H*(Ii+1 + Xi+1)] mod P =
	#	[(H*Ii+1) + (H*Xi+1)] mod P =
	#	[(H*Ii+1) + H^2*(Ii+Xi)] mod P
	#
	&movdqu		($T1,&QWP(0,$inp));	# Ii
	&movdqu		($Xn,&QWP(16,$inp));	# Ii+1
	&pshufb		($T1,$T3);
	&pshufb		($Xn,$T3);
	&pxor		($Xi,$T1);		# Ii+Xi

	&clmul64x64_T2	($Xhn,$Xn,$Hkey);	# H*Ii+1
	&movups		($Hkey,&QWP(16,$Htbl));	# load H^2

	&lea		($inp,&DWP(32,$inp));	# i+=2
	&sub		($len,0x20);
	&jbe		(&label("even_tail"));

&set_label("mod_loop");
	&clmul64x64_T2	($Xhi,$Xi,$Hkey);	# H^2*(Ii+Xi)
	&movdqu		($T1,&QWP(0,$inp));	# Ii
	&movups		($Hkey,&QWP(0,$Htbl));	# load H

	&pxor		($Xi,$Xn);		# (H*Ii+1) + H^2*(Ii+Xi)
	&pxor		($Xhi,$Xhn);

	&movdqu		($Xn,&QWP(16,$inp));	# Ii+1
	&pshufb		($T1,$T3);
	&pshufb		($Xn,$T3);

	&movdqa		($T3,$Xn);		#&clmul64x64_TX	($Xhn,$Xn,$Hkey); H*Ii+1
	&movdqa		($Xhn,$Xn);
	 &pxor		($Xhi,$T1);		# "Ii+Xi", consume early

	  &movdqa	($T1,$Xi);		#&reduction_alg9($Xhi,$Xi); 1st phase
	  &psllq	($Xi,1);
	  &pxor		($Xi,$T1);		#
	  &psllq	($Xi,5);		#
	  &pxor		($Xi,$T1);		#
	&pclmulqdq	($Xn,$Hkey,0x00);	#######
	  &psllq	($Xi,57);		#
	  &movdqa	($T2,$Xi);		#
	  &pslldq	($Xi,8);
	  &psrldq	($T2,8);		#
	  &pxor		($Xi,$T1);
	&pshufd		($T1,$T3,0b01001110);
	  &pxor		($Xhi,$T2);		#
	&pxor		($T1,$T3);
	&pshufd		($T3,$Hkey,0b01001110);
	&pxor		($T3,$Hkey);		#

	&pclmulqdq	($Xhn,$Hkey,0x11);	#######
	  &movdqa	($T2,$Xi);		# 2nd phase
	  &psrlq	($Xi,5);
	  &pxor		($Xi,$T2);		#
	  &psrlq	($Xi,1);		#
	  &pxor		($Xi,$T2);		#
	  &pxor		($T2,$Xhi);
	  &psrlq	($Xi,1);		#
	  &pxor		($Xi,$T2);		#

	&pclmulqdq	($T1,$T3,0x00);		#######
	&movups		($Hkey,&QWP(16,$Htbl));	# load H^2
	&xorps		($T1,$Xn);		#
	&xorps		($T1,$Xhn);		#

	&movdqa		($T3,$T1);		#
	&psrldq		($T1,8);
	&pslldq		($T3,8);		#
	&pxor		($Xhn,$T1);
	&pxor		($Xn,$T3);		#
	&movdqa		($T3,&QWP(0,$const));

	&lea		($inp,&DWP(32,$inp));
	&sub		($len,0x20);
	&ja		(&label("mod_loop"));

&set_label("even_tail");
	&clmul64x64_T2	($Xhi,$Xi,$Hkey);	# H^2*(Ii+Xi)

	&pxor		($Xi,$Xn);		# (H*Ii+1) + H^2*(Ii+Xi)
	&pxor		($Xhi,$Xhn);

	&reduction_alg9	($Xhi,$Xi);

	&test		($len,$len);
	&jnz		(&label("done"));

	&movups		($Hkey,&QWP(0,$Htbl));	# load H
&set_label("odd_tail");
	&movdqu		($T1,&QWP(0,$inp));	# Ii
	&pshufb		($T1,$T3);
	&pxor		($Xi,$T1);		# Ii+Xi

	&clmul64x64_T2	($Xhi,$Xi,$Hkey);	# H*(Ii+Xi)
	&reduction_alg9	($Xhi,$Xi);

&set_label("done");
	&pshufb		($Xi,$T3);
	&movdqu		(&QWP(0,$Xip),$Xi);
&function_end("gcm_ghash_clmul");

} else {		# Algorithm 5. Kept for reference purposes.

sub reduction_alg5 {	# 19/16 times faster than Intel version
my ($Xhi,$Xi)=@_;

	# <<1
	&movdqa		($T1,$Xi);		#
	&movdqa		($T2,$Xhi);
	&pslld		($Xi,1);
	&pslld		($Xhi,1);		#
	&psrld		($T1,31);
	&psrld		($T2,31);		#
	&movdqa		($T3,$T1);
	&pslldq		($T1,4);
	&psrldq		($T3,12);		#
	&pslldq		($T2,4);
	&por		($Xhi,$T3);		#
	&por		($Xi,$T1);
	&por		($Xhi,$T2);		#

	# 1st phase
	&movdqa		($T1,$Xi);
	&movdqa		($T2,$Xi);
	&movdqa		($T3,$Xi);		#
	&pslld		($T1,31);
	&pslld		($T2,30);
	&pslld		($Xi,25);		#
	&pxor		($T1,$T2);
	&pxor		($T1,$Xi);		#
	&movdqa		($T2,$T1);		#
	&pslldq		($T1,12);
	&psrldq		($T2,4);		#
	&pxor		($T3,$T1);

	# 2nd phase
	&pxor		($Xhi,$T3);		#
	&movdqa		($Xi,$T3);
	&movdqa		($T1,$T3);
	&psrld		($Xi,1);		#
	&psrld		($T1,2);
	&psrld		($T3,7);		#
	&pxor		($Xi,$T1);
	&pxor		($Xhi,$T2);
	&pxor		($Xi,$T3);		#
	&pxor		($Xi,$Xhi);		#
}

&function_begin_B("gcm_init_clmul");
	&mov		($Htbl,&wparam(0));
	&mov		($Xip,&wparam(1));

	&call		(&label("pic"));
&set_label("pic");
	&blindpop	($const);
	&lea		($const,&DWP(&label("bswap")."-".&label("pic"),$const));

	&movdqu		($Hkey,&QWP(0,$Xip));
	&pshufd		($Hkey,$Hkey,0b01001110);# dword swap

	# calculate H^2
	&movdqa		($Xi,$Hkey);
	&clmul64x64_T3	($Xhi,$Xi,$Hkey);
	&reduction_alg5	($Xhi,$Xi);

	&movdqu		(&QWP(0,$Htbl),$Hkey);	# save H
	&movdqu		(&QWP(16,$Htbl),$Xi);	# save H^2

	&ret		();
&function_end_B("gcm_init_clmul");

&function_begin_B("gcm_gmult_clmul");
	&mov		($Xip,&wparam(0));
	&mov		($Htbl,&wparam(1));

	&call		(&label("pic"));
&set_label("pic");
	&blindpop	($const);
	&lea		($const,&DWP(&label("bswap")."-".&label("pic"),$const));

	&movdqu		($Xi,&QWP(0,$Xip));
	&movdqa		($Xn,&QWP(0,$const));
	&movdqu		($Hkey,&QWP(0,$Htbl));
	&pshufb		($Xi,$Xn);

	&clmul64x64_T3	($Xhi,$Xi,$Hkey);
	&reduction_alg5	($Xhi,$Xi);

	&pshufb		($Xi,$Xn);
	&movdqu		(&QWP(0,$Xip),$Xi);

	&ret	();
&function_end_B("gcm_gmult_clmul");

&function_begin("gcm_ghash_clmul");
	&mov		($Xip,&wparam(0));
	&mov		($Htbl,&wparam(1));
	&mov		($inp,&wparam(2));
	&mov		($len,&wparam(3));

	&call		(&label("pic"));
&set_label("pic");
	&blindpop	($const);
	&lea		($const,&DWP(&label("bswap")."-".&label("pic"),$const));

	&movdqu		($Xi,&QWP(0,$Xip));
	&movdqa		($T3,&QWP(0,$const));
	&movdqu		($Hkey,&QWP(0,$Htbl));
	&pshufb		($Xi,$T3);

	&sub		($len,0x10);
	&jz		(&label("odd_tail"));

	#######
	# Xi+2 =[H*(Ii+1 + Xi+1)] mod P =
	#	[(H*Ii+1) + (H*Xi+1)] mod P =
	#	[(H*Ii+1) + H^2*(Ii+Xi)] mod P
	#
	&movdqu		($T1,&QWP(0,$inp));	# Ii
	&movdqu		($Xn,&QWP(16,$inp));	# Ii+1
	&pshufb		($T1,$T3);
	&pshufb		($Xn,$T3);
	&pxor		($Xi,$T1);		# Ii+Xi

	&clmul64x64_T3	($Xhn,$Xn,$Hkey);	# H*Ii+1
	&movdqu		($Hkey,&QWP(16,$Htbl));	# load H^2

	&sub		($len,0x20);
	&lea		($inp,&DWP(32,$inp));	# i+=2
	&jbe		(&label("even_tail"));

&set_label("mod_loop");
	&clmul64x64_T3	($Xhi,$Xi,$Hkey);	# H^2*(Ii+Xi)
	&movdqu		($Hkey,&QWP(0,$Htbl));	# load H

	&pxor		($Xi,$Xn);		# (H*Ii+1) + H^2*(Ii+Xi)
	&pxor		($Xhi,$Xhn);

	&reduction_alg5	($Xhi,$Xi);

	#######
	&movdqa		($T3,&QWP(0,$const));
	&movdqu		($T1,&QWP(0,$inp));	# Ii
	&movdqu		($Xn,&QWP(16,$inp));	# Ii+1
	&pshufb		($T1,$T3);
	&pshufb		($Xn,$T3);
	&pxor		($Xi,$T1);		# Ii+Xi

	&clmul64x64_T3	($Xhn,$Xn,$Hkey);	# H*Ii+1
	&movdqu		($Hkey,&QWP(16,$Htbl));	# load H^2

	&sub		($len,0x20);
	&lea		($inp,&DWP(32,$inp));
	&ja		(&label("mod_loop"));

&set_label("even_tail");
	&clmul64x64_T3	($Xhi,$Xi,$Hkey);	# H^2*(Ii+Xi)

	&pxor		($Xi,$Xn);		# (H*Ii+1) + H^2*(Ii+Xi)
	&pxor		($Xhi,$Xhn);

	&reduction_alg5	($Xhi,$Xi);

	&movdqa		($T3,&QWP(0,$const));
	&test		($len,$len);
	&jnz		(&label("done"));

	&movdqu		($Hkey,&QWP(0,$Htbl));	# load H
&set_label("odd_tail");
	&movdqu		($T1,&QWP(0,$inp));	# Ii
	&pshufb		($T1,$T3);
	&pxor		($Xi,$T1);		# Ii+Xi

	&clmul64x64_T3	($Xhi,$Xi,$Hkey);	# H*(Ii+Xi)
	&reduction_alg5	($Xhi,$Xi);

	&movdqa		($T3,&QWP(0,$const));
&set_label("done");
	&pshufb		($Xi,$T3);
	&movdqu		(&QWP(0,$Xip),$Xi);
&function_end("gcm_ghash_clmul");

}

&set_label("bswap",64);
	&data_byte(15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,0);
	&data_byte(1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0xc2);	# 0x1c2_polynomial
}}	# $sse2

&set_label("rem_4bit",64);
	&data_word(0,0x0000<<$S,0,0x1C20<<$S,0,0x3840<<$S,0,0x2460<<$S);
	&data_word(0,0x7080<<$S,0,0x6CA0<<$S,0,0x48C0<<$S,0,0x54E0<<$S);
	&data_word(0,0xE100<<$S,0,0xFD20<<$S,0,0xD940<<$S,0,0xC560<<$S);
	&data_word(0,0x9180<<$S,0,0x8DA0<<$S,0,0xA9C0<<$S,0,0xB5E0<<$S);
&set_label("rem_8bit",64);
	&data_short(0x0000,0x01C2,0x0384,0x0246,0x0708,0x06CA,0x048C,0x054E);
	&data_short(0x0E10,0x0FD2,0x0D94,0x0C56,0x0918,0x08DA,0x0A9C,0x0B5E);
	&data_short(0x1C20,0x1DE2,0x1FA4,0x1E66,0x1B28,0x1AEA,0x18AC,0x196E);
	&data_short(0x1230,0x13F2,0x11B4,0x1076,0x1538,0x14FA,0x16BC,0x177E);
	&data_short(0x3840,0x3982,0x3BC4,0x3A06,0x3F48,0x3E8A,0x3CCC,0x3D0E);
	&data_short(0x3650,0x3792,0x35D4,0x3416,0x3158,0x309A,0x32DC,0x331E);
	&data_short(0x2460,0x25A2,0x27E4,0x2626,0x2368,0x22AA,0x20EC,0x212E);
	&data_short(0x2A70,0x2BB2,0x29F4,0x2836,0x2D78,0x2CBA,0x2EFC,0x2F3E);
	&data_short(0x7080,0x7142,0x7304,0x72C6,0x7788,0x764A,0x740C,0x75CE);
	&data_short(0x7E90,0x7F52,0x7D14,0x7CD6,0x7998,0x785A,0x7A1C,0x7BDE);
	&data_short(0x6CA0,0x6D62,0x6F24,0x6EE6,0x6BA8,0x6A6A,0x682C,0x69EE);
	&data_short(0x62B0,0x6372,0x6134,0x60F6,0x65B8,0x647A,0x663C,0x67FE);
	&data_short(0x48C0,0x4902,0x4B44,0x4A86,0x4FC8,0x4E0A,0x4C4C,0x4D8E);
	&data_short(0x46D0,0x4712,0x4554,0x4496,0x41D8,0x401A,0x425C,0x439E);
	&data_short(0x54E0,0x5522,0x5764,0x56A6,0x53E8,0x522A,0x506C,0x51AE);
	&data_short(0x5AF0,0x5B32,0x5974,0x58B6,0x5DF8,0x5C3A,0x5E7C,0x5FBE);
	&data_short(0xE100,0xE0C2,0xE284,0xE346,0xE608,0xE7CA,0xE58C,0xE44E);
	&data_short(0xEF10,0xEED2,0xEC94,0xED56,0xE818,0xE9DA,0xEB9C,0xEA5E);
	&data_short(0xFD20,0xFCE2,0xFEA4,0xFF66,0xFA28,0xFBEA,0xF9AC,0xF86E);
	&data_short(0xF330,0xF2F2,0xF0B4,0xF176,0xF438,0xF5FA,0xF7BC,0xF67E);
	&data_short(0xD940,0xD882,0xDAC4,0xDB06,0xDE48,0xDF8A,0xDDCC,0xDC0E);
	&data_short(0xD750,0xD692,0xD4D4,0xD516,0xD058,0xD19A,0xD3DC,0xD21E);
	&data_short(0xC560,0xC4A2,0xC6E4,0xC726,0xC268,0xC3AA,0xC1EC,0xC02E);
	&data_short(0xCB70,0xCAB2,0xC8F4,0xC936,0xCC78,0xCDBA,0xCFFC,0xCE3E);
	&data_short(0x9180,0x9042,0x9204,0x93C6,0x9688,0x974A,0x950C,0x94CE);
	&data_short(0x9F90,0x9E52,0x9C14,0x9DD6,0x9898,0x995A,0x9B1C,0x9ADE);
	&data_short(0x8DA0,0x8C62,0x8E24,0x8FE6,0x8AA8,0x8B6A,0x892C,0x88EE);
	&data_short(0x83B0,0x8272,0x8034,0x81F6,0x84B8,0x857A,0x873C,0x86FE);
	&data_short(0xA9C0,0xA802,0xAA44,0xAB86,0xAEC8,0xAF0A,0xAD4C,0xAC8E);
	&data_short(0xA7D0,0xA612,0xA454,0xA596,0xA0D8,0xA11A,0xA35C,0xA29E);
	&data_short(0xB5E0,0xB422,0xB664,0xB7A6,0xB2E8,0xB32A,0xB16C,0xB0AE);
	&data_short(0xBBF0,0xBA32,0xB874,0xB9B6,0xBCF8,0xBD3A,0xBF7C,0xBEBE);
}}}	# !$x86only

&asciz("GHASH for x86, CRYPTOGAMS by <appro\@openssl.org>");
&asm_finish();

# A question was raised about the choice of vanilla MMX. Or rather why
# wasn't SSE2 chosen instead? In addition to the fact that MMX runs on
# legacy CPUs such as PIII, the "4-bit" MMX version was observed to
# provide better performance than the *corresponding* SSE2 one even on
# contemporary CPUs. SSE2 results were provided by Peter-Michael
# Hager. He maintains an SSE2 implementation featuring the full range
# of lookup-table sizes, but with per-invocation lookup table setup.
# The latter means that the table size is chosen depending on how much
# data is to be hashed in every given call, more data - larger table.
# The best reported result for Core2 is ~4 cycles per processed byte
# out of a 64KB block. This number accounts even for the 64KB table
# setup overhead. As discussed in gcm128.c we choose to be more
# conservative in respect to lookup table sizes, but how do the
# results compare? The minimalistic "256B" MMX version delivers ~11
# cycles on the same platform. As also discussed in gcm128.c, the next
# in line "8-bit Shoup's" or "4KB" method should deliver twice the
# performance of the "256B" one, in other words not worse than ~6
# cycles per byte. It should also be noted that in the SSE2 case the
# improvement can be "super-linear," i.e. more than twice, mostly
# because >>8 maps to a single instruction on an SSE2 register. This
# is unlike the "4-bit" case, when >>4 maps to the same number of
# instructions in both MMX and SSE2 cases. The bottom line is that a
# switch to SSE2 is considered to be justifiable only if we choose to
# implement the "8-bit" method...