1238384Sjkim#!/usr/bin/env perl
2238384Sjkim#
3238384Sjkim# ====================================================================
4238384Sjkim# Written by Andy Polyakov <appro@openssl.org> for the OpenSSL
5238384Sjkim# project. The module is, however, dual licensed under OpenSSL and
6238384Sjkim# CRYPTOGAMS licenses depending on where you obtain it. For further
7238384Sjkim# details see http://www.openssl.org/~appro/cryptogams/.
8238384Sjkim# ====================================================================
9238384Sjkim#
10238384Sjkim# March, May, June 2010
11238384Sjkim#
12238384Sjkim# The module implements "4-bit" GCM GHASH function and underlying
13238384Sjkim# single multiplication operation in GF(2^128). "4-bit" means that it
14238384Sjkim# uses 256 bytes per-key table [+64/128 bytes fixed table]. It has two
15290207Sjkim# code paths: vanilla x86 and vanilla SSE. Former will be executed on
16290207Sjkim# 486 and Pentium, latter on all others. SSE GHASH features so called
17238384Sjkim# "528B" variant of "4-bit" method utilizing additional 256+16 bytes
18238384Sjkim# of per-key storage [+512 bytes shared table]. Performance results
19238384Sjkim# are for streamed GHASH subroutine and are expressed in cycles per
20238384Sjkim# processed byte, less is better:
21238384Sjkim#
22290207Sjkim#		gcc 2.95.3(*)	SSE assembler	x86 assembler
23238384Sjkim#
24238384Sjkim# Pentium	105/111(**)	-		50
25238384Sjkim# PIII		68 /75		12.2		24
26238384Sjkim# P4		125/125		17.8		84(***)
27238384Sjkim# Opteron	66 /70		10.1		30
28238384Sjkim# Core2		54 /67		8.4		18
29290207Sjkim# Atom		105/105		16.8		53
30290207Sjkim# VIA Nano	69 /71		13.0		27
31238384Sjkim#
32238384Sjkim# (*)	gcc 3.4.x was observed to generate few percent slower code,
33238384Sjkim#	which is one of reasons why 2.95.3 results were chosen,
34238384Sjkim#	another reason is lack of 3.4.x results for older CPUs;
35290207Sjkim#	comparison with SSE results is not completely fair, because C
36238384Sjkim#	results are for vanilla "256B" implementation, while
37238384Sjkim#	assembler results are for "528B";-)
38238384Sjkim# (**)	second number is result for code compiled with -fPIC flag,
39238384Sjkim#	which is actually more relevant, because assembler code is
40238384Sjkim#	position-independent;
41238384Sjkim# (***)	see comment in non-MMX routine for further details;
42238384Sjkim#
43238384Sjkim# To summarize, it's >2-5 times faster than gcc-generated code. To
44238384Sjkim# anchor it to something else SHA1 assembler processes one byte in
45290207Sjkim# ~7 cycles on contemporary x86 cores. As for choice of MMX/SSE
46290207Sjkim# in particular, see comment at the end of the file...
47238384Sjkim
48238384Sjkim# May 2010
49238384Sjkim#
50238384Sjkim# Add PCLMULQDQ version performing at 2.10 cycles per processed byte.
51238384Sjkim# The question is how close is it to theoretical limit? The pclmulqdq
52238384Sjkim# instruction latency appears to be 14 cycles and there can't be more
53238384Sjkim# than 2 of them executing at any given time. This means that single
54238384Sjkim# Karatsuba multiplication would take 28 cycles *plus* few cycles for
55238384Sjkim# pre- and post-processing. Then multiplication has to be followed by
56238384Sjkim# modulo-reduction. Given that aggregated reduction method [see
57238384Sjkim# "Carry-less Multiplication and Its Usage for Computing the GCM Mode"
58238384Sjkim# white paper by Intel] allows you to perform reduction only once in
59238384Sjkim# a while we can assume that asymptotic performance can be estimated
60238384Sjkim# as (28+Tmod/Naggr)/16, where Tmod is time to perform reduction
61238384Sjkim# and Naggr is the aggregation factor.
62238384Sjkim#
63238384Sjkim# Before we proceed to this implementation let's have closer look at
64238384Sjkim# the best-performing code suggested by Intel in their white paper.
65238384Sjkim# By tracing inter-register dependencies Tmod is estimated as ~19
66238384Sjkim# cycles and Naggr chosen by Intel is 4, resulting in 2.05 cycles per
67238384Sjkim# processed byte. As implied, this is quite optimistic estimate,
68238384Sjkim# because it does not account for Karatsuba pre- and post-processing,
69238384Sjkim# which for a single multiplication is ~5 cycles. Unfortunately Intel
70238384Sjkim# does not provide performance data for GHASH alone. But benchmarking
71238384Sjkim# AES_GCM_encrypt ripped out of Fig. 15 of the white paper with aadt
72238384Sjkim# alone resulted in 2.46 cycles per byte of out 16KB buffer. Note that
73238384Sjkim# the result accounts even for pre-computing of degrees of the hash
74238384Sjkim# key H, but its portion is negligible at 16KB buffer size.
75238384Sjkim#
76238384Sjkim# Moving on to the implementation in question. Tmod is estimated as
77238384Sjkim# ~13 cycles and Naggr is 2, giving asymptotic performance of ...
78238384Sjkim# 2.16. How is it possible that measured performance is better than
79238384Sjkim# optimistic theoretical estimate? There is one thing Intel failed
80238384Sjkim# to recognize. By serializing GHASH with CTR in same subroutine
81238384Sjkim# former's performance is really limited to above (Tmul + Tmod/Naggr)
82238384Sjkim# equation. But if GHASH procedure is detached, the modulo-reduction
83238384Sjkim# can be interleaved with Naggr-1 multiplications at instruction level
84238384Sjkim# and under ideal conditions even disappear from the equation. So that
85238384Sjkim# optimistic theoretical estimate for this implementation is ...
86238384Sjkim# 28/16=1.75, and not 2.16. Well, it's probably way too optimistic,
87238384Sjkim# at least for such small Naggr. I'd argue that (28+Tproc/Naggr),
88238384Sjkim# where Tproc is time required for Karatsuba pre- and post-processing,
89238384Sjkim# is more realistic estimate. In this case it gives ... 1.91 cycles.
90238384Sjkim# Or in other words, depending on how well we can interleave reduction
91238384Sjkim# and one of the two multiplications the performance should be betwen
92238384Sjkim# 1.91 and 2.16. As already mentioned, this implementation processes
93238384Sjkim# one byte out of 8KB buffer in 2.10 cycles, while x86_64 counterpart
94238384Sjkim# - in 2.02. x86_64 performance is better, because larger register
95238384Sjkim# bank allows to interleave reduction and multiplication better.
96238384Sjkim#
97238384Sjkim# Does it make sense to increase Naggr? To start with it's virtually
98238384Sjkim# impossible in 32-bit mode, because of limited register bank
99238384Sjkim# capacity. Otherwise improvement has to be weighed agiainst slower
100238384Sjkim# setup, as well as code size and complexity increase. As even
101238384Sjkim# optimistic estimate doesn't promise 30% performance improvement,
102238384Sjkim# there are currently no plans to increase Naggr.
103238384Sjkim#
104238384Sjkim# Special thanks to David Woodhouse <dwmw2@infradead.org> for
105238384Sjkim# providing access to a Westmere-based system on behalf of Intel
106238384Sjkim# Open Source Technology Centre.
107238384Sjkim
108238384Sjkim# January 2010
109238384Sjkim#
110238384Sjkim# Tweaked to optimize transitions between integer and FP operations
111238384Sjkim# on same XMM register, PCLMULQDQ subroutine was measured to process
112238384Sjkim# one byte in 2.07 cycles on Sandy Bridge, and in 2.12 - on Westmere.
113238384Sjkim# The minor regression on Westmere is outweighed by ~15% improvement
114238384Sjkim# on Sandy Bridge. Strangely enough attempt to modify 64-bit code in
115238384Sjkim# similar manner resulted in almost 20% degradation on Sandy Bridge,
116238384Sjkim# where original 64-bit code processes one byte in 1.95 cycles.
117238384Sjkim
118290207Sjkim#####################################################################
119290207Sjkim# For reference, AMD Bulldozer processes one byte in 1.98 cycles in
120290207Sjkim# 32-bit mode and 1.89 in 64-bit.
121290207Sjkim
122290207Sjkim# February 2013
123290207Sjkim#
124290207Sjkim# Overhaul: aggregate Karatsuba post-processing, improve ILP in
125290207Sjkim# reduction_alg9. Resulting performance is 1.96 cycles per byte on
126290207Sjkim# Westmere, 1.95 - on Sandy/Ivy Bridge, 1.76 - on Bulldozer.
127290207Sjkim
128238384Sjkim$0 =~ m/(.*[\/\\])[^\/\\]+$/; $dir=$1;
129238384Sjkimpush(@INC,"${dir}","${dir}../../perlasm");
130238384Sjkimrequire "x86asm.pl";
131238384Sjkim
132238384Sjkim&asm_init($ARGV[0],"ghash-x86.pl",$x86only = $ARGV[$#ARGV] eq "386");
133238384Sjkim
134238384Sjkim$sse2=0;
135238384Sjkimfor (@ARGV) { $sse2=1 if (/-DOPENSSL_IA32_SSE2/); }
136238384Sjkim
137238384Sjkim($Zhh,$Zhl,$Zlh,$Zll) = ("ebp","edx","ecx","ebx");
138238384Sjkim$inp  = "edi";
139238384Sjkim$Htbl = "esi";
140238384Sjkim
141238384Sjkim$unroll = 0;	# Affects x86 loop. Folded loop performs ~7% worse
142238384Sjkim		# than unrolled, which has to be weighted against
143238384Sjkim		# 2.5x x86-specific code size reduction.
144238384Sjkim
145238384Sjkimsub x86_loop {
146238384Sjkim    my $off = shift;
147238384Sjkim    my $rem = "eax";
148238384Sjkim
149238384Sjkim	&mov	($Zhh,&DWP(4,$Htbl,$Zll));
150238384Sjkim	&mov	($Zhl,&DWP(0,$Htbl,$Zll));
151238384Sjkim	&mov	($Zlh,&DWP(12,$Htbl,$Zll));
152238384Sjkim	&mov	($Zll,&DWP(8,$Htbl,$Zll));
153238384Sjkim	&xor	($rem,$rem);	# avoid partial register stalls on PIII
154238384Sjkim
155238384Sjkim	# shrd practically kills P4, 2.5x deterioration, but P4 has
156238384Sjkim	# MMX code-path to execute. shrd runs tad faster [than twice
157238384Sjkim	# the shifts, move's and or's] on pre-MMX Pentium (as well as
158238384Sjkim	# PIII and Core2), *but* minimizes code size, spares register
159238384Sjkim	# and thus allows to fold the loop...
160238384Sjkim	if (!$unroll) {
161238384Sjkim	my $cnt = $inp;
162238384Sjkim	&mov	($cnt,15);
163238384Sjkim	&jmp	(&label("x86_loop"));
164238384Sjkim	&set_label("x86_loop",16);
165238384Sjkim	    for($i=1;$i<=2;$i++) {
166238384Sjkim		&mov	(&LB($rem),&LB($Zll));
167238384Sjkim		&shrd	($Zll,$Zlh,4);
168238384Sjkim		&and	(&LB($rem),0xf);
169238384Sjkim		&shrd	($Zlh,$Zhl,4);
170238384Sjkim		&shrd	($Zhl,$Zhh,4);
171238384Sjkim		&shr	($Zhh,4);
172238384Sjkim		&xor	($Zhh,&DWP($off+16,"esp",$rem,4));
173238384Sjkim
174238384Sjkim		&mov	(&LB($rem),&BP($off,"esp",$cnt));
175238384Sjkim		if ($i&1) {
176238384Sjkim			&and	(&LB($rem),0xf0);
177238384Sjkim		} else {
178238384Sjkim			&shl	(&LB($rem),4);
179238384Sjkim		}
180238384Sjkim
181238384Sjkim		&xor	($Zll,&DWP(8,$Htbl,$rem));
182238384Sjkim		&xor	($Zlh,&DWP(12,$Htbl,$rem));
183238384Sjkim		&xor	($Zhl,&DWP(0,$Htbl,$rem));
184238384Sjkim		&xor	($Zhh,&DWP(4,$Htbl,$rem));
185238384Sjkim
186238384Sjkim		if ($i&1) {
187238384Sjkim			&dec	($cnt);
188238384Sjkim			&js	(&label("x86_break"));
189238384Sjkim		} else {
190238384Sjkim			&jmp	(&label("x86_loop"));
191238384Sjkim		}
192238384Sjkim	    }
193238384Sjkim	&set_label("x86_break",16);
194238384Sjkim	} else {
195238384Sjkim	    for($i=1;$i<32;$i++) {
196238384Sjkim		&comment($i);
197238384Sjkim		&mov	(&LB($rem),&LB($Zll));
198238384Sjkim		&shrd	($Zll,$Zlh,4);
199238384Sjkim		&and	(&LB($rem),0xf);
200238384Sjkim		&shrd	($Zlh,$Zhl,4);
201238384Sjkim		&shrd	($Zhl,$Zhh,4);
202238384Sjkim		&shr	($Zhh,4);
203238384Sjkim		&xor	($Zhh,&DWP($off+16,"esp",$rem,4));
204238384Sjkim
205238384Sjkim		if ($i&1) {
206238384Sjkim			&mov	(&LB($rem),&BP($off+15-($i>>1),"esp"));
207238384Sjkim			&and	(&LB($rem),0xf0);
208238384Sjkim		} else {
209238384Sjkim			&mov	(&LB($rem),&BP($off+15-($i>>1),"esp"));
210238384Sjkim			&shl	(&LB($rem),4);
211238384Sjkim		}
212238384Sjkim
213238384Sjkim		&xor	($Zll,&DWP(8,$Htbl,$rem));
214238384Sjkim		&xor	($Zlh,&DWP(12,$Htbl,$rem));
215238384Sjkim		&xor	($Zhl,&DWP(0,$Htbl,$rem));
216238384Sjkim		&xor	($Zhh,&DWP(4,$Htbl,$rem));
217238384Sjkim	    }
218238384Sjkim	}
219238384Sjkim	&bswap	($Zll);
220238384Sjkim	&bswap	($Zlh);
221238384Sjkim	&bswap	($Zhl);
222238384Sjkim	if (!$x86only) {
223238384Sjkim		&bswap	($Zhh);
224238384Sjkim	} else {
225238384Sjkim		&mov	("eax",$Zhh);
226238384Sjkim		&bswap	("eax");
227238384Sjkim		&mov	($Zhh,"eax");
228238384Sjkim	}
229238384Sjkim}
230238384Sjkim
231238384Sjkimif ($unroll) {
232238384Sjkim    &function_begin_B("_x86_gmult_4bit_inner");
233238384Sjkim	&x86_loop(4);
234238384Sjkim	&ret	();
235238384Sjkim    &function_end_B("_x86_gmult_4bit_inner");
236238384Sjkim}
237238384Sjkim
238238384Sjkimsub deposit_rem_4bit {
239238384Sjkim    my $bias = shift;
240238384Sjkim
241238384Sjkim	&mov	(&DWP($bias+0, "esp"),0x0000<<16);
242238384Sjkim	&mov	(&DWP($bias+4, "esp"),0x1C20<<16);
243238384Sjkim	&mov	(&DWP($bias+8, "esp"),0x3840<<16);
244238384Sjkim	&mov	(&DWP($bias+12,"esp"),0x2460<<16);
245238384Sjkim	&mov	(&DWP($bias+16,"esp"),0x7080<<16);
246238384Sjkim	&mov	(&DWP($bias+20,"esp"),0x6CA0<<16);
247238384Sjkim	&mov	(&DWP($bias+24,"esp"),0x48C0<<16);
248238384Sjkim	&mov	(&DWP($bias+28,"esp"),0x54E0<<16);
249238384Sjkim	&mov	(&DWP($bias+32,"esp"),0xE100<<16);
250238384Sjkim	&mov	(&DWP($bias+36,"esp"),0xFD20<<16);
251238384Sjkim	&mov	(&DWP($bias+40,"esp"),0xD940<<16);
252238384Sjkim	&mov	(&DWP($bias+44,"esp"),0xC560<<16);
253238384Sjkim	&mov	(&DWP($bias+48,"esp"),0x9180<<16);
254238384Sjkim	&mov	(&DWP($bias+52,"esp"),0x8DA0<<16);
255238384Sjkim	&mov	(&DWP($bias+56,"esp"),0xA9C0<<16);
256238384Sjkim	&mov	(&DWP($bias+60,"esp"),0xB5E0<<16);
257238384Sjkim}
258238384Sjkim
259238384Sjkim$suffix = $x86only ? "" : "_x86";
260238384Sjkim
261238384Sjkim&function_begin("gcm_gmult_4bit".$suffix);
262238384Sjkim	&stack_push(16+4+1);			# +1 for stack alignment
263238384Sjkim	&mov	($inp,&wparam(0));		# load Xi
264238384Sjkim	&mov	($Htbl,&wparam(1));		# load Htable
265238384Sjkim
266238384Sjkim	&mov	($Zhh,&DWP(0,$inp));		# load Xi[16]
267238384Sjkim	&mov	($Zhl,&DWP(4,$inp));
268238384Sjkim	&mov	($Zlh,&DWP(8,$inp));
269238384Sjkim	&mov	($Zll,&DWP(12,$inp));
270238384Sjkim
271238384Sjkim	&deposit_rem_4bit(16);
272238384Sjkim
273238384Sjkim	&mov	(&DWP(0,"esp"),$Zhh);		# copy Xi[16] on stack
274238384Sjkim	&mov	(&DWP(4,"esp"),$Zhl);
275238384Sjkim	&mov	(&DWP(8,"esp"),$Zlh);
276238384Sjkim	&mov	(&DWP(12,"esp"),$Zll);
277238384Sjkim	&shr	($Zll,20);
278238384Sjkim	&and	($Zll,0xf0);
279238384Sjkim
280238384Sjkim	if ($unroll) {
281238384Sjkim		&call	("_x86_gmult_4bit_inner");
282238384Sjkim	} else {
283238384Sjkim		&x86_loop(0);
284238384Sjkim		&mov	($inp,&wparam(0));
285238384Sjkim	}
286238384Sjkim
287238384Sjkim	&mov	(&DWP(12,$inp),$Zll);
288238384Sjkim	&mov	(&DWP(8,$inp),$Zlh);
289238384Sjkim	&mov	(&DWP(4,$inp),$Zhl);
290238384Sjkim	&mov	(&DWP(0,$inp),$Zhh);
291238384Sjkim	&stack_pop(16+4+1);
292238384Sjkim&function_end("gcm_gmult_4bit".$suffix);
293238384Sjkim
294238384Sjkim&function_begin("gcm_ghash_4bit".$suffix);
295238384Sjkim	&stack_push(16+4+1);			# +1 for 64-bit alignment
296238384Sjkim	&mov	($Zll,&wparam(0));		# load Xi
297238384Sjkim	&mov	($Htbl,&wparam(1));		# load Htable
298238384Sjkim	&mov	($inp,&wparam(2));		# load in
299238384Sjkim	&mov	("ecx",&wparam(3));		# load len
300238384Sjkim	&add	("ecx",$inp);
301238384Sjkim	&mov	(&wparam(3),"ecx");
302238384Sjkim
303238384Sjkim	&mov	($Zhh,&DWP(0,$Zll));		# load Xi[16]
304238384Sjkim	&mov	($Zhl,&DWP(4,$Zll));
305238384Sjkim	&mov	($Zlh,&DWP(8,$Zll));
306238384Sjkim	&mov	($Zll,&DWP(12,$Zll));
307238384Sjkim
308238384Sjkim	&deposit_rem_4bit(16);
309238384Sjkim
310238384Sjkim    &set_label("x86_outer_loop",16);
311238384Sjkim	&xor	($Zll,&DWP(12,$inp));		# xor with input
312238384Sjkim	&xor	($Zlh,&DWP(8,$inp));
313238384Sjkim	&xor	($Zhl,&DWP(4,$inp));
314238384Sjkim	&xor	($Zhh,&DWP(0,$inp));
315238384Sjkim	&mov	(&DWP(12,"esp"),$Zll);		# dump it on stack
316238384Sjkim	&mov	(&DWP(8,"esp"),$Zlh);
317238384Sjkim	&mov	(&DWP(4,"esp"),$Zhl);
318238384Sjkim	&mov	(&DWP(0,"esp"),$Zhh);
319238384Sjkim
320238384Sjkim	&shr	($Zll,20);
321238384Sjkim	&and	($Zll,0xf0);
322238384Sjkim
323238384Sjkim	if ($unroll) {
324238384Sjkim		&call	("_x86_gmult_4bit_inner");
325238384Sjkim	} else {
326238384Sjkim		&x86_loop(0);
327238384Sjkim		&mov	($inp,&wparam(2));
328238384Sjkim	}
329238384Sjkim	&lea	($inp,&DWP(16,$inp));
330238384Sjkim	&cmp	($inp,&wparam(3));
331238384Sjkim	&mov	(&wparam(2),$inp)	if (!$unroll);
332238384Sjkim	&jb	(&label("x86_outer_loop"));
333238384Sjkim
334238384Sjkim	&mov	($inp,&wparam(0));	# load Xi
335238384Sjkim	&mov	(&DWP(12,$inp),$Zll);
336238384Sjkim	&mov	(&DWP(8,$inp),$Zlh);
337238384Sjkim	&mov	(&DWP(4,$inp),$Zhl);
338238384Sjkim	&mov	(&DWP(0,$inp),$Zhh);
339238384Sjkim	&stack_pop(16+4+1);
340238384Sjkim&function_end("gcm_ghash_4bit".$suffix);
341238384Sjkim
342238384Sjkimif (!$x86only) {{{
343238384Sjkim
344238384Sjkim&static_label("rem_4bit");
345238384Sjkim
346238384Sjkimif (!$sse2) {{	# pure-MMX "May" version...
347238384Sjkim
348238384Sjkim$S=12;		# shift factor for rem_4bit
349238384Sjkim
350238384Sjkim&function_begin_B("_mmx_gmult_4bit_inner");
351238384Sjkim# MMX version performs 3.5 times better on P4 (see comment in non-MMX
352238384Sjkim# routine for further details), 100% better on Opteron, ~70% better
353238384Sjkim# on Core2 and PIII... In other words effort is considered to be well
354238384Sjkim# spent... Since initial release the loop was unrolled in order to
355238384Sjkim# "liberate" register previously used as loop counter. Instead it's
356238384Sjkim# used to optimize critical path in 'Z.hi ^= rem_4bit[Z.lo&0xf]'.
357238384Sjkim# The path involves move of Z.lo from MMX to integer register,
358238384Sjkim# effective address calculation and finally merge of value to Z.hi.
359238384Sjkim# Reference to rem_4bit is scheduled so late that I had to >>4
360238384Sjkim# rem_4bit elements. This resulted in 20-45% procent improvement
361291719Sjkim# on contemporary ��-archs.
362238384Sjkim{
363238384Sjkim    my $cnt;
364238384Sjkim    my $rem_4bit = "eax";
365238384Sjkim    my @rem = ($Zhh,$Zll);
366238384Sjkim    my $nhi = $Zhl;
367238384Sjkim    my $nlo = $Zlh;
368238384Sjkim
369238384Sjkim    my ($Zlo,$Zhi) = ("mm0","mm1");
370238384Sjkim    my $tmp = "mm2";
371238384Sjkim
372238384Sjkim	&xor	($nlo,$nlo);	# avoid partial register stalls on PIII
373238384Sjkim	&mov	($nhi,$Zll);
374238384Sjkim	&mov	(&LB($nlo),&LB($nhi));
375238384Sjkim	&shl	(&LB($nlo),4);
376238384Sjkim	&and	($nhi,0xf0);
377238384Sjkim	&movq	($Zlo,&QWP(8,$Htbl,$nlo));
378238384Sjkim	&movq	($Zhi,&QWP(0,$Htbl,$nlo));
379238384Sjkim	&movd	($rem[0],$Zlo);
380238384Sjkim
381238384Sjkim	for ($cnt=28;$cnt>=-2;$cnt--) {
382238384Sjkim	    my $odd = $cnt&1;
383238384Sjkim	    my $nix = $odd ? $nlo : $nhi;
384238384Sjkim
385238384Sjkim		&shl	(&LB($nlo),4)			if ($odd);
386238384Sjkim		&psrlq	($Zlo,4);
387238384Sjkim		&movq	($tmp,$Zhi);
388238384Sjkim		&psrlq	($Zhi,4);
389238384Sjkim		&pxor	($Zlo,&QWP(8,$Htbl,$nix));
390238384Sjkim		&mov	(&LB($nlo),&BP($cnt/2,$inp))	if (!$odd && $cnt>=0);
391238384Sjkim		&psllq	($tmp,60);
392238384Sjkim		&and	($nhi,0xf0)			if ($odd);
393238384Sjkim		&pxor	($Zhi,&QWP(0,$rem_4bit,$rem[1],8)) if ($cnt<28);
394238384Sjkim		&and	($rem[0],0xf);
395238384Sjkim		&pxor	($Zhi,&QWP(0,$Htbl,$nix));
396238384Sjkim		&mov	($nhi,$nlo)			if (!$odd && $cnt>=0);
397238384Sjkim		&movd	($rem[1],$Zlo);
398238384Sjkim		&pxor	($Zlo,$tmp);
399238384Sjkim
400238384Sjkim		push	(@rem,shift(@rem));		# "rotate" registers
401238384Sjkim	}
402238384Sjkim
403238384Sjkim	&mov	($inp,&DWP(4,$rem_4bit,$rem[1],8));	# last rem_4bit[rem]
404238384Sjkim
405238384Sjkim	&psrlq	($Zlo,32);	# lower part of Zlo is already there
406238384Sjkim	&movd	($Zhl,$Zhi);
407238384Sjkim	&psrlq	($Zhi,32);
408238384Sjkim	&movd	($Zlh,$Zlo);
409238384Sjkim	&movd	($Zhh,$Zhi);
410238384Sjkim	&shl	($inp,4);	# compensate for rem_4bit[i] being >>4
411238384Sjkim
412238384Sjkim	&bswap	($Zll);
413238384Sjkim	&bswap	($Zhl);
414238384Sjkim	&bswap	($Zlh);
415238384Sjkim	&xor	($Zhh,$inp);
416238384Sjkim	&bswap	($Zhh);
417238384Sjkim
418238384Sjkim	&ret	();
419238384Sjkim}
420238384Sjkim&function_end_B("_mmx_gmult_4bit_inner");
421238384Sjkim
422238384Sjkim&function_begin("gcm_gmult_4bit_mmx");
423238384Sjkim	&mov	($inp,&wparam(0));	# load Xi
424238384Sjkim	&mov	($Htbl,&wparam(1));	# load Htable
425238384Sjkim
426238384Sjkim	&call	(&label("pic_point"));
427238384Sjkim	&set_label("pic_point");
428238384Sjkim	&blindpop("eax");
429238384Sjkim	&lea	("eax",&DWP(&label("rem_4bit")."-".&label("pic_point"),"eax"));
430238384Sjkim
431238384Sjkim	&movz	($Zll,&BP(15,$inp));
432238384Sjkim
433238384Sjkim	&call	("_mmx_gmult_4bit_inner");
434238384Sjkim
435238384Sjkim	&mov	($inp,&wparam(0));	# load Xi
436238384Sjkim	&emms	();
437238384Sjkim	&mov	(&DWP(12,$inp),$Zll);
438238384Sjkim	&mov	(&DWP(4,$inp),$Zhl);
439238384Sjkim	&mov	(&DWP(8,$inp),$Zlh);
440238384Sjkim	&mov	(&DWP(0,$inp),$Zhh);
441238384Sjkim&function_end("gcm_gmult_4bit_mmx");
442238384Sjkim
443238384Sjkim# Streamed version performs 20% better on P4, 7% on Opteron,
444238384Sjkim# 10% on Core2 and PIII...
445238384Sjkim&function_begin("gcm_ghash_4bit_mmx");
446238384Sjkim	&mov	($Zhh,&wparam(0));	# load Xi
447238384Sjkim	&mov	($Htbl,&wparam(1));	# load Htable
448238384Sjkim	&mov	($inp,&wparam(2));	# load in
449238384Sjkim	&mov	($Zlh,&wparam(3));	# load len
450238384Sjkim
451238384Sjkim	&call	(&label("pic_point"));
452238384Sjkim	&set_label("pic_point");
453238384Sjkim	&blindpop("eax");
454238384Sjkim	&lea	("eax",&DWP(&label("rem_4bit")."-".&label("pic_point"),"eax"));
455238384Sjkim
456238384Sjkim	&add	($Zlh,$inp);
457238384Sjkim	&mov	(&wparam(3),$Zlh);	# len to point at the end of input
458238384Sjkim	&stack_push(4+1);		# +1 for stack alignment
459238384Sjkim
460238384Sjkim	&mov	($Zll,&DWP(12,$Zhh));	# load Xi[16]
461238384Sjkim	&mov	($Zhl,&DWP(4,$Zhh));
462238384Sjkim	&mov	($Zlh,&DWP(8,$Zhh));
463238384Sjkim	&mov	($Zhh,&DWP(0,$Zhh));
464238384Sjkim	&jmp	(&label("mmx_outer_loop"));
465238384Sjkim
466238384Sjkim    &set_label("mmx_outer_loop",16);
467238384Sjkim	&xor	($Zll,&DWP(12,$inp));
468238384Sjkim	&xor	($Zhl,&DWP(4,$inp));
469238384Sjkim	&xor	($Zlh,&DWP(8,$inp));
470238384Sjkim	&xor	($Zhh,&DWP(0,$inp));
471238384Sjkim	&mov	(&wparam(2),$inp);
472238384Sjkim	&mov	(&DWP(12,"esp"),$Zll);
473238384Sjkim	&mov	(&DWP(4,"esp"),$Zhl);
474238384Sjkim	&mov	(&DWP(8,"esp"),$Zlh);
475238384Sjkim	&mov	(&DWP(0,"esp"),$Zhh);
476238384Sjkim
477238384Sjkim	&mov	($inp,"esp");
478238384Sjkim	&shr	($Zll,24);
479238384Sjkim
480238384Sjkim	&call	("_mmx_gmult_4bit_inner");
481238384Sjkim
482238384Sjkim	&mov	($inp,&wparam(2));
483238384Sjkim	&lea	($inp,&DWP(16,$inp));
484238384Sjkim	&cmp	($inp,&wparam(3));
485238384Sjkim	&jb	(&label("mmx_outer_loop"));
486238384Sjkim
487238384Sjkim	&mov	($inp,&wparam(0));	# load Xi
488238384Sjkim	&emms	();
489238384Sjkim	&mov	(&DWP(12,$inp),$Zll);
490238384Sjkim	&mov	(&DWP(4,$inp),$Zhl);
491238384Sjkim	&mov	(&DWP(8,$inp),$Zlh);
492238384Sjkim	&mov	(&DWP(0,$inp),$Zhh);
493238384Sjkim
494238384Sjkim	&stack_pop(4+1);
495238384Sjkim&function_end("gcm_ghash_4bit_mmx");
496238384Sjkim
497238384Sjkim}} else {{	# "June" MMX version...
498238384Sjkim		# ... has slower "April" gcm_gmult_4bit_mmx with folded
499238384Sjkim		# loop. This is done to conserve code size...
500238384Sjkim$S=16;		# shift factor for rem_4bit
501238384Sjkim
502238384Sjkimsub mmx_loop() {
503238384Sjkim# MMX version performs 2.8 times better on P4 (see comment in non-MMX
504238384Sjkim# routine for further details), 40% better on Opteron and Core2, 50%
505238384Sjkim# better on PIII... In other words effort is considered to be well
506238384Sjkim# spent...
507238384Sjkim    my $inp = shift;
508238384Sjkim    my $rem_4bit = shift;
509238384Sjkim    my $cnt = $Zhh;
510238384Sjkim    my $nhi = $Zhl;
511238384Sjkim    my $nlo = $Zlh;
512238384Sjkim    my $rem = $Zll;
513238384Sjkim
514238384Sjkim    my ($Zlo,$Zhi) = ("mm0","mm1");
515238384Sjkim    my $tmp = "mm2";
516238384Sjkim
517238384Sjkim	&xor	($nlo,$nlo);	# avoid partial register stalls on PIII
518238384Sjkim	&mov	($nhi,$Zll);
519238384Sjkim	&mov	(&LB($nlo),&LB($nhi));
520238384Sjkim	&mov	($cnt,14);
521238384Sjkim	&shl	(&LB($nlo),4);
522238384Sjkim	&and	($nhi,0xf0);
523238384Sjkim	&movq	($Zlo,&QWP(8,$Htbl,$nlo));
524238384Sjkim	&movq	($Zhi,&QWP(0,$Htbl,$nlo));
525238384Sjkim	&movd	($rem,$Zlo);
526238384Sjkim	&jmp	(&label("mmx_loop"));
527238384Sjkim
528238384Sjkim    &set_label("mmx_loop",16);
529238384Sjkim	&psrlq	($Zlo,4);
530238384Sjkim	&and	($rem,0xf);
531238384Sjkim	&movq	($tmp,$Zhi);
532238384Sjkim	&psrlq	($Zhi,4);
533238384Sjkim	&pxor	($Zlo,&QWP(8,$Htbl,$nhi));
534238384Sjkim	&mov	(&LB($nlo),&BP(0,$inp,$cnt));
535238384Sjkim	&psllq	($tmp,60);
536238384Sjkim	&pxor	($Zhi,&QWP(0,$rem_4bit,$rem,8));
537238384Sjkim	&dec	($cnt);
538238384Sjkim	&movd	($rem,$Zlo);
539238384Sjkim	&pxor	($Zhi,&QWP(0,$Htbl,$nhi));
540238384Sjkim	&mov	($nhi,$nlo);
541238384Sjkim	&pxor	($Zlo,$tmp);
542238384Sjkim	&js	(&label("mmx_break"));
543238384Sjkim
544238384Sjkim	&shl	(&LB($nlo),4);
545238384Sjkim	&and	($rem,0xf);
546238384Sjkim	&psrlq	($Zlo,4);
547238384Sjkim	&and	($nhi,0xf0);
548238384Sjkim	&movq	($tmp,$Zhi);
549238384Sjkim	&psrlq	($Zhi,4);
550238384Sjkim	&pxor	($Zlo,&QWP(8,$Htbl,$nlo));
551238384Sjkim	&psllq	($tmp,60);
552238384Sjkim	&pxor	($Zhi,&QWP(0,$rem_4bit,$rem,8));
553238384Sjkim	&movd	($rem,$Zlo);
554238384Sjkim	&pxor	($Zhi,&QWP(0,$Htbl,$nlo));
555238384Sjkim	&pxor	($Zlo,$tmp);
556238384Sjkim	&jmp	(&label("mmx_loop"));
557238384Sjkim
558238384Sjkim    &set_label("mmx_break",16);
559238384Sjkim	&shl	(&LB($nlo),4);
560238384Sjkim	&and	($rem,0xf);
561238384Sjkim	&psrlq	($Zlo,4);
562238384Sjkim	&and	($nhi,0xf0);
563238384Sjkim	&movq	($tmp,$Zhi);
564238384Sjkim	&psrlq	($Zhi,4);
565238384Sjkim	&pxor	($Zlo,&QWP(8,$Htbl,$nlo));
566238384Sjkim	&psllq	($tmp,60);
567238384Sjkim	&pxor	($Zhi,&QWP(0,$rem_4bit,$rem,8));
568238384Sjkim	&movd	($rem,$Zlo);
569238384Sjkim	&pxor	($Zhi,&QWP(0,$Htbl,$nlo));
570238384Sjkim	&pxor	($Zlo,$tmp);
571238384Sjkim
572238384Sjkim	&psrlq	($Zlo,4);
573238384Sjkim	&and	($rem,0xf);
574238384Sjkim	&movq	($tmp,$Zhi);
575238384Sjkim	&psrlq	($Zhi,4);
576238384Sjkim	&pxor	($Zlo,&QWP(8,$Htbl,$nhi));
577238384Sjkim	&psllq	($tmp,60);
578238384Sjkim	&pxor	($Zhi,&QWP(0,$rem_4bit,$rem,8));
579238384Sjkim	&movd	($rem,$Zlo);
580238384Sjkim	&pxor	($Zhi,&QWP(0,$Htbl,$nhi));
581238384Sjkim	&pxor	($Zlo,$tmp);
582238384Sjkim
583238384Sjkim	&psrlq	($Zlo,32);	# lower part of Zlo is already there
584238384Sjkim	&movd	($Zhl,$Zhi);
585238384Sjkim	&psrlq	($Zhi,32);
586238384Sjkim	&movd	($Zlh,$Zlo);
587238384Sjkim	&movd	($Zhh,$Zhi);
588238384Sjkim
589238384Sjkim	&bswap	($Zll);
590238384Sjkim	&bswap	($Zhl);
591238384Sjkim	&bswap	($Zlh);
592238384Sjkim	&bswap	($Zhh);
593238384Sjkim}
594238384Sjkim
595238384Sjkim&function_begin("gcm_gmult_4bit_mmx");
596238384Sjkim	&mov	($inp,&wparam(0));	# load Xi
597238384Sjkim	&mov	($Htbl,&wparam(1));	# load Htable
598238384Sjkim
599238384Sjkim	&call	(&label("pic_point"));
600238384Sjkim	&set_label("pic_point");
601238384Sjkim	&blindpop("eax");
602238384Sjkim	&lea	("eax",&DWP(&label("rem_4bit")."-".&label("pic_point"),"eax"));
603238384Sjkim
604238384Sjkim	&movz	($Zll,&BP(15,$inp));
605238384Sjkim
606238384Sjkim	&mmx_loop($inp,"eax");
607238384Sjkim
608238384Sjkim	&emms	();
609238384Sjkim	&mov	(&DWP(12,$inp),$Zll);
610238384Sjkim	&mov	(&DWP(4,$inp),$Zhl);
611238384Sjkim	&mov	(&DWP(8,$inp),$Zlh);
612238384Sjkim	&mov	(&DWP(0,$inp),$Zhh);
613238384Sjkim&function_end("gcm_gmult_4bit_mmx");
614238384Sjkim
615238384Sjkim######################################################################
616238384Sjkim# Below subroutine is "528B" variant of "4-bit" GCM GHASH function
617238384Sjkim# (see gcm128.c for details). It provides further 20-40% performance
618238384Sjkim# improvement over above mentioned "May" version.
619238384Sjkim
620238384Sjkim&static_label("rem_8bit");
621238384Sjkim
622238384Sjkim&function_begin("gcm_ghash_4bit_mmx");
623238384Sjkim{ my ($Zlo,$Zhi) = ("mm7","mm6");
624238384Sjkim  my $rem_8bit = "esi";
625238384Sjkim  my $Htbl = "ebx";
626238384Sjkim
627238384Sjkim    # parameter block
628238384Sjkim    &mov	("eax",&wparam(0));		# Xi
629238384Sjkim    &mov	("ebx",&wparam(1));		# Htable
630238384Sjkim    &mov	("ecx",&wparam(2));		# inp
631238384Sjkim    &mov	("edx",&wparam(3));		# len
632238384Sjkim    &mov	("ebp","esp");			# original %esp
633238384Sjkim    &call	(&label("pic_point"));
634238384Sjkim    &set_label	("pic_point");
635238384Sjkim    &blindpop	($rem_8bit);
636238384Sjkim    &lea	($rem_8bit,&DWP(&label("rem_8bit")."-".&label("pic_point"),$rem_8bit));
637238384Sjkim
638238384Sjkim    &sub	("esp",512+16+16);		# allocate stack frame...
639238384Sjkim    &and	("esp",-64);			# ...and align it
640238384Sjkim    &sub	("esp",16);			# place for (u8)(H[]<<4)
641238384Sjkim
642238384Sjkim    &add	("edx","ecx");			# pointer to the end of input
643238384Sjkim    &mov	(&DWP(528+16+0,"esp"),"eax");	# save Xi
644238384Sjkim    &mov	(&DWP(528+16+8,"esp"),"edx");	# save inp+len
645238384Sjkim    &mov	(&DWP(528+16+12,"esp"),"ebp");	# save original %esp
646238384Sjkim
647238384Sjkim    { my @lo  = ("mm0","mm1","mm2");
648238384Sjkim      my @hi  = ("mm3","mm4","mm5");
649238384Sjkim      my @tmp = ("mm6","mm7");
650246772Sjkim      my ($off1,$off2,$i) = (0,0,);
651238384Sjkim
652238384Sjkim      &add	($Htbl,128);			# optimize for size
653238384Sjkim      &lea	("edi",&DWP(16+128,"esp"));
654238384Sjkim      &lea	("ebp",&DWP(16+256+128,"esp"));
655238384Sjkim
656238384Sjkim      # decompose Htable (low and high parts are kept separately),
657238384Sjkim      # generate Htable[]>>4, (u8)(Htable[]<<4), save to stack...
658238384Sjkim      for ($i=0;$i<18;$i++) {
659238384Sjkim
660238384Sjkim	&mov	("edx",&DWP(16*$i+8-128,$Htbl))		if ($i<16);
661238384Sjkim	&movq	($lo[0],&QWP(16*$i+8-128,$Htbl))	if ($i<16);
662238384Sjkim	&psllq	($tmp[1],60)				if ($i>1);
663238384Sjkim	&movq	($hi[0],&QWP(16*$i+0-128,$Htbl))	if ($i<16);
664238384Sjkim	&por	($lo[2],$tmp[1])			if ($i>1);
665238384Sjkim	&movq	(&QWP($off1-128,"edi"),$lo[1])		if ($i>0 && $i<17);
666238384Sjkim	&psrlq	($lo[1],4)				if ($i>0 && $i<17);
667238384Sjkim	&movq	(&QWP($off1,"edi"),$hi[1])		if ($i>0 && $i<17);
668238384Sjkim	&movq	($tmp[0],$hi[1])			if ($i>0 && $i<17);
669238384Sjkim	&movq	(&QWP($off2-128,"ebp"),$lo[2])		if ($i>1);
670238384Sjkim	&psrlq	($hi[1],4)				if ($i>0 && $i<17);
671238384Sjkim	&movq	(&QWP($off2,"ebp"),$hi[2])		if ($i>1);
672238384Sjkim	&shl	("edx",4)				if ($i<16);
673238384Sjkim	&mov	(&BP($i,"esp"),&LB("edx"))		if ($i<16);
674238384Sjkim
675238384Sjkim	unshift	(@lo,pop(@lo));			# "rotate" registers
676238384Sjkim	unshift	(@hi,pop(@hi));
677238384Sjkim	unshift	(@tmp,pop(@tmp));
678238384Sjkim	$off1 += 8	if ($i>0);
679238384Sjkim	$off2 += 8	if ($i>1);
680238384Sjkim      }
681238384Sjkim    }
682238384Sjkim
683238384Sjkim    &movq	($Zhi,&QWP(0,"eax"));
684238384Sjkim    &mov	("ebx",&DWP(8,"eax"));
685238384Sjkim    &mov	("edx",&DWP(12,"eax"));		# load Xi
686238384Sjkim
687238384Sjkim&set_label("outer",16);
688238384Sjkim  { my $nlo = "eax";
689238384Sjkim    my $dat = "edx";
690238384Sjkim    my @nhi = ("edi","ebp");
691238384Sjkim    my @rem = ("ebx","ecx");
692238384Sjkim    my @red = ("mm0","mm1","mm2");
693238384Sjkim    my $tmp = "mm3";
694238384Sjkim
695238384Sjkim    &xor	($dat,&DWP(12,"ecx"));		# merge input data
696238384Sjkim    &xor	("ebx",&DWP(8,"ecx"));
697238384Sjkim    &pxor	($Zhi,&QWP(0,"ecx"));
698238384Sjkim    &lea	("ecx",&DWP(16,"ecx"));		# inp+=16
699238384Sjkim    #&mov	(&DWP(528+12,"esp"),$dat);	# save inp^Xi
700238384Sjkim    &mov	(&DWP(528+8,"esp"),"ebx");
701238384Sjkim    &movq	(&QWP(528+0,"esp"),$Zhi);
702238384Sjkim    &mov	(&DWP(528+16+4,"esp"),"ecx");	# save inp
703238384Sjkim
704238384Sjkim    &xor	($nlo,$nlo);
705238384Sjkim    &rol	($dat,8);
706238384Sjkim    &mov	(&LB($nlo),&LB($dat));
707238384Sjkim    &mov	($nhi[1],$nlo);
708238384Sjkim    &and	(&LB($nlo),0x0f);
709238384Sjkim    &shr	($nhi[1],4);
710238384Sjkim    &pxor	($red[0],$red[0]);
711238384Sjkim    &rol	($dat,8);			# next byte
712238384Sjkim    &pxor	($red[1],$red[1]);
713238384Sjkim    &pxor	($red[2],$red[2]);
714238384Sjkim
715238384Sjkim    # Just like in "May" verson modulo-schedule for critical path in
716238384Sjkim    # 'Z.hi ^= rem_8bit[Z.lo&0xff^((u8)H[nhi]<<4)]<<48'. Final 'pxor'
717238384Sjkim    # is scheduled so late that rem_8bit[] has to be shifted *right*
718238384Sjkim    # by 16, which is why last argument to pinsrw is 2, which
719238384Sjkim    # corresponds to <<32=<<48>>16...
720238384Sjkim    for ($j=11,$i=0;$i<15;$i++) {
721238384Sjkim
722238384Sjkim      if ($i>0) {
723238384Sjkim	&pxor	($Zlo,&QWP(16,"esp",$nlo,8));		# Z^=H[nlo]
724238384Sjkim	&rol	($dat,8);				# next byte
725238384Sjkim	&pxor	($Zhi,&QWP(16+128,"esp",$nlo,8));
726238384Sjkim
727238384Sjkim	&pxor	($Zlo,$tmp);
728238384Sjkim	&pxor	($Zhi,&QWP(16+256+128,"esp",$nhi[0],8));
729238384Sjkim	&xor	(&LB($rem[1]),&BP(0,"esp",$nhi[0]));	# rem^(H[nhi]<<4)
730238384Sjkim      } else {
731238384Sjkim	&movq	($Zlo,&QWP(16,"esp",$nlo,8));
732238384Sjkim	&movq	($Zhi,&QWP(16+128,"esp",$nlo,8));
733238384Sjkim      }
734238384Sjkim
735238384Sjkim	&mov	(&LB($nlo),&LB($dat));
736238384Sjkim	&mov	($dat,&DWP(528+$j,"esp"))		if (--$j%4==0);
737238384Sjkim
738238384Sjkim	&movd	($rem[0],$Zlo);
739238384Sjkim	&movz	($rem[1],&LB($rem[1]))			if ($i>0);
740238384Sjkim	&psrlq	($Zlo,8);				# Z>>=8
741238384Sjkim
742238384Sjkim	&movq	($tmp,$Zhi);
743238384Sjkim	&mov	($nhi[0],$nlo);
744238384Sjkim	&psrlq	($Zhi,8);
745238384Sjkim
746238384Sjkim	&pxor	($Zlo,&QWP(16+256+0,"esp",$nhi[1],8));	# Z^=H[nhi]>>4
747238384Sjkim	&and	(&LB($nlo),0x0f);
748238384Sjkim	&psllq	($tmp,56);
749238384Sjkim
750238384Sjkim	&pxor	($Zhi,$red[1])				if ($i>1);
751238384Sjkim	&shr	($nhi[0],4);
752238384Sjkim	&pinsrw	($red[0],&WP(0,$rem_8bit,$rem[1],2),2)	if ($i>0);
753238384Sjkim
754238384Sjkim	unshift	(@red,pop(@red));			# "rotate" registers
755238384Sjkim	unshift	(@rem,pop(@rem));
756238384Sjkim	unshift	(@nhi,pop(@nhi));
757238384Sjkim    }
758238384Sjkim
759238384Sjkim    &pxor	($Zlo,&QWP(16,"esp",$nlo,8));		# Z^=H[nlo]
760238384Sjkim    &pxor	($Zhi,&QWP(16+128,"esp",$nlo,8));
761238384Sjkim    &xor	(&LB($rem[1]),&BP(0,"esp",$nhi[0]));	# rem^(H[nhi]<<4)
762238384Sjkim
763238384Sjkim    &pxor	($Zlo,$tmp);
764238384Sjkim    &pxor	($Zhi,&QWP(16+256+128,"esp",$nhi[0],8));
765238384Sjkim    &movz	($rem[1],&LB($rem[1]));
766238384Sjkim
767238384Sjkim    &pxor	($red[2],$red[2]);			# clear 2nd word
768238384Sjkim    &psllq	($red[1],4);
769238384Sjkim
770238384Sjkim    &movd	($rem[0],$Zlo);
771238384Sjkim    &psrlq	($Zlo,4);				# Z>>=4
772238384Sjkim
773238384Sjkim    &movq	($tmp,$Zhi);
774238384Sjkim    &psrlq	($Zhi,4);
775238384Sjkim    &shl	($rem[0],4);				# rem<<4
776238384Sjkim
777238384Sjkim    &pxor	($Zlo,&QWP(16,"esp",$nhi[1],8));	# Z^=H[nhi]
778238384Sjkim    &psllq	($tmp,60);
779238384Sjkim    &movz	($rem[0],&LB($rem[0]));
780238384Sjkim
781238384Sjkim    &pxor	($Zlo,$tmp);
782238384Sjkim    &pxor	($Zhi,&QWP(16+128,"esp",$nhi[1],8));
783238384Sjkim
784238384Sjkim    &pinsrw	($red[0],&WP(0,$rem_8bit,$rem[1],2),2);
785238384Sjkim    &pxor	($Zhi,$red[1]);
786238384Sjkim
787238384Sjkim    &movd	($dat,$Zlo);
788238384Sjkim    &pinsrw	($red[2],&WP(0,$rem_8bit,$rem[0],2),3);	# last is <<48
789238384Sjkim
790238384Sjkim    &psllq	($red[0],12);				# correct by <<16>>4
791238384Sjkim    &pxor	($Zhi,$red[0]);
792238384Sjkim    &psrlq	($Zlo,32);
793238384Sjkim    &pxor	($Zhi,$red[2]);
794238384Sjkim
795238384Sjkim    &mov	("ecx",&DWP(528+16+4,"esp"));	# restore inp
796238384Sjkim    &movd	("ebx",$Zlo);
797238384Sjkim    &movq	($tmp,$Zhi);			# 01234567
798238384Sjkim    &psllw	($Zhi,8);			# 1.3.5.7.
799238384Sjkim    &psrlw	($tmp,8);			# .0.2.4.6
800238384Sjkim    &por	($Zhi,$tmp);			# 10325476
801238384Sjkim    &bswap	($dat);
802238384Sjkim    &pshufw	($Zhi,$Zhi,0b00011011);		# 76543210
803238384Sjkim    &bswap	("ebx");
804238384Sjkim
805238384Sjkim    &cmp	("ecx",&DWP(528+16+8,"esp"));	# are we done?
806238384Sjkim    &jne	(&label("outer"));
807238384Sjkim  }
808238384Sjkim
809238384Sjkim    &mov	("eax",&DWP(528+16+0,"esp"));	# restore Xi
810238384Sjkim    &mov	(&DWP(12,"eax"),"edx");
811238384Sjkim    &mov	(&DWP(8,"eax"),"ebx");
812238384Sjkim    &movq	(&QWP(0,"eax"),$Zhi);
813238384Sjkim
814238384Sjkim    &mov	("esp",&DWP(528+16+12,"esp"));	# restore original %esp
815238384Sjkim    &emms	();
816238384Sjkim}
817238384Sjkim&function_end("gcm_ghash_4bit_mmx");
818238384Sjkim}}
819238384Sjkim
820238384Sjkimif ($sse2) {{
821238384Sjkim######################################################################
822238384Sjkim# PCLMULQDQ version.
823238384Sjkim
824238384Sjkim$Xip="eax";
825238384Sjkim$Htbl="edx";
826238384Sjkim$const="ecx";
827238384Sjkim$inp="esi";
828238384Sjkim$len="ebx";
829238384Sjkim
830238384Sjkim($Xi,$Xhi)=("xmm0","xmm1");	$Hkey="xmm2";
831238384Sjkim($T1,$T2,$T3)=("xmm3","xmm4","xmm5");
832238384Sjkim($Xn,$Xhn)=("xmm6","xmm7");
833238384Sjkim
834238384Sjkim&static_label("bswap");
835238384Sjkim
836238384Sjkimsub clmul64x64_T2 {	# minimal "register" pressure
837290207Sjkimmy ($Xhi,$Xi,$Hkey,$HK)=@_;
838238384Sjkim
839238384Sjkim	&movdqa		($Xhi,$Xi);		#
840238384Sjkim	&pshufd		($T1,$Xi,0b01001110);
841290207Sjkim	&pshufd		($T2,$Hkey,0b01001110)	if (!defined($HK));
842238384Sjkim	&pxor		($T1,$Xi);		#
843290207Sjkim	&pxor		($T2,$Hkey)		if (!defined($HK));
844290207Sjkim			$HK=$T2			if (!defined($HK));
845238384Sjkim
846238384Sjkim	&pclmulqdq	($Xi,$Hkey,0x00);	#######
847238384Sjkim	&pclmulqdq	($Xhi,$Hkey,0x11);	#######
848290207Sjkim	&pclmulqdq	($T1,$HK,0x00);		#######
849238384Sjkim	&xorps		($T1,$Xi);		#
850238384Sjkim	&xorps		($T1,$Xhi);		#
851238384Sjkim
852238384Sjkim	&movdqa		($T2,$T1);		#
853238384Sjkim	&psrldq		($T1,8);
854238384Sjkim	&pslldq		($T2,8);		#
855238384Sjkim	&pxor		($Xhi,$T1);
856238384Sjkim	&pxor		($Xi,$T2);		#
857238384Sjkim}
858238384Sjkim
859238384Sjkimsub clmul64x64_T3 {
860238384Sjkim# Even though this subroutine offers visually better ILP, it
861238384Sjkim# was empirically found to be a tad slower than above version.
862238384Sjkim# At least in gcm_ghash_clmul context. But it's just as well,
863238384Sjkim# because loop modulo-scheduling is possible only thanks to
864238384Sjkim# minimized "register" pressure...
865238384Sjkimmy ($Xhi,$Xi,$Hkey)=@_;
866238384Sjkim
867238384Sjkim	&movdqa		($T1,$Xi);		#
868238384Sjkim	&movdqa		($Xhi,$Xi);
869238384Sjkim	&pclmulqdq	($Xi,$Hkey,0x00);	#######
870238384Sjkim	&pclmulqdq	($Xhi,$Hkey,0x11);	#######
871238384Sjkim	&pshufd		($T2,$T1,0b01001110);	#
872238384Sjkim	&pshufd		($T3,$Hkey,0b01001110);
873238384Sjkim	&pxor		($T2,$T1);		#
874238384Sjkim	&pxor		($T3,$Hkey);
875238384Sjkim	&pclmulqdq	($T2,$T3,0x00);		#######
876238384Sjkim	&pxor		($T2,$Xi);		#
877238384Sjkim	&pxor		($T2,$Xhi);		#
878238384Sjkim
879238384Sjkim	&movdqa		($T3,$T2);		#
880238384Sjkim	&psrldq		($T2,8);
881238384Sjkim	&pslldq		($T3,8);		#
882238384Sjkim	&pxor		($Xhi,$T2);
883238384Sjkim	&pxor		($Xi,$T3);		#
884238384Sjkim}
885238384Sjkim
886238384Sjkimif (1) {		# Algorithm 9 with <<1 twist.
887238384Sjkim			# Reduction is shorter and uses only two
888238384Sjkim			# temporary registers, which makes it better
889238384Sjkim			# candidate for interleaving with 64x64
890238384Sjkim			# multiplication. Pre-modulo-scheduled loop
891238384Sjkim			# was found to be ~20% faster than Algorithm 5
892238384Sjkim			# below. Algorithm 9 was therefore chosen for
893238384Sjkim			# further optimization...
894238384Sjkim
895290207Sjkimsub reduction_alg9 {	# 17/11 times faster than Intel version
896238384Sjkimmy ($Xhi,$Xi) = @_;
897238384Sjkim
898238384Sjkim	# 1st phase
899290207Sjkim	&movdqa		($T2,$Xi);		#
900290207Sjkim	&movdqa		($T1,$Xi);
901290207Sjkim	&psllq		($Xi,5);
902290207Sjkim	&pxor		($T1,$Xi);		#
903238384Sjkim	&psllq		($Xi,1);
904238384Sjkim	&pxor		($Xi,$T1);		#
905238384Sjkim	&psllq		($Xi,57);		#
906290207Sjkim	&movdqa		($T1,$Xi);		#
907238384Sjkim	&pslldq		($Xi,8);
908290207Sjkim	&psrldq		($T1,8);		#
909290207Sjkim	&pxor		($Xi,$T2);
910290207Sjkim	&pxor		($Xhi,$T1);		#
911238384Sjkim
912238384Sjkim	# 2nd phase
913238384Sjkim	&movdqa		($T2,$Xi);
914290207Sjkim	&psrlq		($Xi,1);
915290207Sjkim	&pxor		($Xhi,$T2);		#
916290207Sjkim	&pxor		($T2,$Xi);
917238384Sjkim	&psrlq		($Xi,5);
918238384Sjkim	&pxor		($Xi,$T2);		#
919238384Sjkim	&psrlq		($Xi,1);		#
920290207Sjkim	&pxor		($Xi,$Xhi)		#
921238384Sjkim}
922238384Sjkim
923238384Sjkim&function_begin_B("gcm_init_clmul");
924238384Sjkim	&mov		($Htbl,&wparam(0));
925238384Sjkim	&mov		($Xip,&wparam(1));
926238384Sjkim
927238384Sjkim	&call		(&label("pic"));
928238384Sjkim&set_label("pic");
929238384Sjkim	&blindpop	($const);
930238384Sjkim	&lea		($const,&DWP(&label("bswap")."-".&label("pic"),$const));
931238384Sjkim
932238384Sjkim	&movdqu		($Hkey,&QWP(0,$Xip));
933238384Sjkim	&pshufd		($Hkey,$Hkey,0b01001110);# dword swap
934238384Sjkim
935238384Sjkim	# <<1 twist
936238384Sjkim	&pshufd		($T2,$Hkey,0b11111111);	# broadcast uppermost dword
937238384Sjkim	&movdqa		($T1,$Hkey);
938238384Sjkim	&psllq		($Hkey,1);
939238384Sjkim	&pxor		($T3,$T3);		#
940238384Sjkim	&psrlq		($T1,63);
941238384Sjkim	&pcmpgtd	($T3,$T2);		# broadcast carry bit
942238384Sjkim	&pslldq		($T1,8);
943238384Sjkim	&por		($Hkey,$T1);		# H<<=1
944238384Sjkim
945238384Sjkim	# magic reduction
946238384Sjkim	&pand		($T3,&QWP(16,$const));	# 0x1c2_polynomial
947238384Sjkim	&pxor		($Hkey,$T3);		# if(carry) H^=0x1c2_polynomial
948238384Sjkim
949238384Sjkim	# calculate H^2
950238384Sjkim	&movdqa		($Xi,$Hkey);
951238384Sjkim	&clmul64x64_T2	($Xhi,$Xi,$Hkey);
952238384Sjkim	&reduction_alg9	($Xhi,$Xi);
953238384Sjkim
954290207Sjkim	&pshufd		($T1,$Hkey,0b01001110);
955290207Sjkim	&pshufd		($T2,$Xi,0b01001110);
956290207Sjkim	&pxor		($T1,$Hkey);		# Karatsuba pre-processing
957238384Sjkim	&movdqu		(&QWP(0,$Htbl),$Hkey);	# save H
958290207Sjkim	&pxor		($T2,$Xi);		# Karatsuba pre-processing
959238384Sjkim	&movdqu		(&QWP(16,$Htbl),$Xi);	# save H^2
960290207Sjkim	&palignr	($T2,$T1,8);		# low part is H.lo^H.hi
961290207Sjkim	&movdqu		(&QWP(32,$Htbl),$T2);	# save Karatsuba "salt"
962238384Sjkim
963238384Sjkim	&ret		();
964238384Sjkim&function_end_B("gcm_init_clmul");
965238384Sjkim
966238384Sjkim&function_begin_B("gcm_gmult_clmul");
967238384Sjkim	&mov		($Xip,&wparam(0));
968238384Sjkim	&mov		($Htbl,&wparam(1));
969238384Sjkim
970238384Sjkim	&call		(&label("pic"));
971238384Sjkim&set_label("pic");
972238384Sjkim	&blindpop	($const);
973238384Sjkim	&lea		($const,&DWP(&label("bswap")."-".&label("pic"),$const));
974238384Sjkim
975238384Sjkim	&movdqu		($Xi,&QWP(0,$Xip));
976238384Sjkim	&movdqa		($T3,&QWP(0,$const));
977238384Sjkim	&movups		($Hkey,&QWP(0,$Htbl));
978238384Sjkim	&pshufb		($Xi,$T3);
979290207Sjkim	&movups		($T2,&QWP(32,$Htbl));
980238384Sjkim
981290207Sjkim	&clmul64x64_T2	($Xhi,$Xi,$Hkey,$T2);
982238384Sjkim	&reduction_alg9	($Xhi,$Xi);
983238384Sjkim
984238384Sjkim	&pshufb		($Xi,$T3);
985238384Sjkim	&movdqu		(&QWP(0,$Xip),$Xi);
986238384Sjkim
987238384Sjkim	&ret	();
988238384Sjkim&function_end_B("gcm_gmult_clmul");
989238384Sjkim
990238384Sjkim&function_begin("gcm_ghash_clmul");
991238384Sjkim	&mov		($Xip,&wparam(0));
992238384Sjkim	&mov		($Htbl,&wparam(1));
993238384Sjkim	&mov		($inp,&wparam(2));
994238384Sjkim	&mov		($len,&wparam(3));
995238384Sjkim
996238384Sjkim	&call		(&label("pic"));
997238384Sjkim&set_label("pic");
998238384Sjkim	&blindpop	($const);
999238384Sjkim	&lea		($const,&DWP(&label("bswap")."-".&label("pic"),$const));
1000238384Sjkim
1001238384Sjkim	&movdqu		($Xi,&QWP(0,$Xip));
1002238384Sjkim	&movdqa		($T3,&QWP(0,$const));
1003238384Sjkim	&movdqu		($Hkey,&QWP(0,$Htbl));
1004238384Sjkim	&pshufb		($Xi,$T3);
1005238384Sjkim
1006238384Sjkim	&sub		($len,0x10);
1007238384Sjkim	&jz		(&label("odd_tail"));
1008238384Sjkim
1009238384Sjkim	#######
1010238384Sjkim	# Xi+2 =[H*(Ii+1 + Xi+1)] mod P =
1011238384Sjkim	#	[(H*Ii+1) + (H*Xi+1)] mod P =
1012238384Sjkim	#	[(H*Ii+1) + H^2*(Ii+Xi)] mod P
1013238384Sjkim	#
1014238384Sjkim	&movdqu		($T1,&QWP(0,$inp));	# Ii
1015238384Sjkim	&movdqu		($Xn,&QWP(16,$inp));	# Ii+1
1016238384Sjkim	&pshufb		($T1,$T3);
1017238384Sjkim	&pshufb		($Xn,$T3);
1018290207Sjkim	&movdqu		($T3,&QWP(32,$Htbl));
1019238384Sjkim	&pxor		($Xi,$T1);		# Ii+Xi
1020238384Sjkim
1021290207Sjkim	&pshufd		($T1,$Xn,0b01001110);	# H*Ii+1
1022290207Sjkim	&movdqa		($Xhn,$Xn);
1023290207Sjkim	&pxor		($T1,$Xn);		#
1024290207Sjkim	&lea		($inp,&DWP(32,$inp));	# i+=2
1025290207Sjkim
1026290207Sjkim	&pclmulqdq	($Xn,$Hkey,0x00);	#######
1027290207Sjkim	&pclmulqdq	($Xhn,$Hkey,0x11);	#######
1028290207Sjkim	&pclmulqdq	($T1,$T3,0x00);		#######
1029238384Sjkim	&movups		($Hkey,&QWP(16,$Htbl));	# load H^2
1030290207Sjkim	&nop		();
1031238384Sjkim
1032238384Sjkim	&sub		($len,0x20);
1033238384Sjkim	&jbe		(&label("even_tail"));
1034290207Sjkim	&jmp		(&label("mod_loop"));
1035238384Sjkim
1036290207Sjkim&set_label("mod_loop",32);
1037290207Sjkim	&pshufd		($T2,$Xi,0b01001110);	# H^2*(Ii+Xi)
1038290207Sjkim	&movdqa		($Xhi,$Xi);
1039290207Sjkim	&pxor		($T2,$Xi);		#
1040290207Sjkim	&nop		();
1041290207Sjkim
1042290207Sjkim	&pclmulqdq	($Xi,$Hkey,0x00);	#######
1043290207Sjkim	&pclmulqdq	($Xhi,$Hkey,0x11);	#######
1044290207Sjkim	&pclmulqdq	($T2,$T3,0x10);		#######
1045238384Sjkim	&movups		($Hkey,&QWP(0,$Htbl));	# load H
1046238384Sjkim
1047290207Sjkim	&xorps		($Xi,$Xn);		# (H*Ii+1) + H^2*(Ii+Xi)
1048290207Sjkim	&movdqa		($T3,&QWP(0,$const));
1049290207Sjkim	&xorps		($Xhi,$Xhn);
1050290207Sjkim	 &movdqu	($Xhn,&QWP(0,$inp));	# Ii
1051290207Sjkim	&pxor		($T1,$Xi);		# aggregated Karatsuba post-processing
1052290207Sjkim	 &movdqu	($Xn,&QWP(16,$inp));	# Ii+1
1053290207Sjkim	&pxor		($T1,$Xhi);		#
1054238384Sjkim
1055290207Sjkim	 &pshufb	($Xhn,$T3);
1056290207Sjkim	&pxor		($T2,$T1);		#
1057238384Sjkim
1058290207Sjkim	&movdqa		($T1,$T2);		#
1059290207Sjkim	&psrldq		($T2,8);
1060290207Sjkim	&pslldq		($T1,8);		#
1061290207Sjkim	&pxor		($Xhi,$T2);
1062290207Sjkim	&pxor		($Xi,$T1);		#
1063290207Sjkim	 &pshufb	($Xn,$T3);
1064290207Sjkim	 &pxor		($Xhi,$Xhn);		# "Ii+Xi", consume early
1065238384Sjkim
1066290207Sjkim	&movdqa		($Xhn,$Xn);		#&clmul64x64_TX	($Xhn,$Xn,$Hkey); H*Ii+1
1067290207Sjkim	  &movdqa	($T2,$Xi);		#&reduction_alg9($Xhi,$Xi); 1st phase
1068290207Sjkim	  &movdqa	($T1,$Xi);
1069290207Sjkim	  &psllq	($Xi,5);
1070290207Sjkim	  &pxor		($T1,$Xi);		#
1071238384Sjkim	  &psllq	($Xi,1);
1072238384Sjkim	  &pxor		($Xi,$T1);		#
1073238384Sjkim	&pclmulqdq	($Xn,$Hkey,0x00);	#######
1074290207Sjkim	&movups		($T3,&QWP(32,$Htbl));
1075238384Sjkim	  &psllq	($Xi,57);		#
1076290207Sjkim	  &movdqa	($T1,$Xi);		#
1077238384Sjkim	  &pslldq	($Xi,8);
1078290207Sjkim	  &psrldq	($T1,8);		#
1079290207Sjkim	  &pxor		($Xi,$T2);
1080290207Sjkim	  &pxor		($Xhi,$T1);		#
1081290207Sjkim	&pshufd		($T1,$Xhn,0b01001110);
1082290207Sjkim	  &movdqa	($T2,$Xi);		# 2nd phase
1083290207Sjkim	  &psrlq	($Xi,1);
1084290207Sjkim	&pxor		($T1,$Xhn);
1085238384Sjkim	  &pxor		($Xhi,$T2);		#
1086238384Sjkim	&pclmulqdq	($Xhn,$Hkey,0x11);	#######
1087290207Sjkim	&movups		($Hkey,&QWP(16,$Htbl));	# load H^2
1088290207Sjkim	  &pxor		($T2,$Xi);
1089238384Sjkim	  &psrlq	($Xi,5);
1090238384Sjkim	  &pxor		($Xi,$T2);		#
1091238384Sjkim	  &psrlq	($Xi,1);		#
1092290207Sjkim	  &pxor		($Xi,$Xhi)		#
1093238384Sjkim	&pclmulqdq	($T1,$T3,0x00);		#######
1094238384Sjkim
1095238384Sjkim	&lea		($inp,&DWP(32,$inp));
1096238384Sjkim	&sub		($len,0x20);
1097238384Sjkim	&ja		(&label("mod_loop"));
1098238384Sjkim
1099238384Sjkim&set_label("even_tail");
1100290207Sjkim	&pshufd		($T2,$Xi,0b01001110);	# H^2*(Ii+Xi)
1101290207Sjkim	&movdqa		($Xhi,$Xi);
1102290207Sjkim	&pxor		($T2,$Xi);		#
1103238384Sjkim
1104290207Sjkim	&pclmulqdq	($Xi,$Hkey,0x00);	#######
1105290207Sjkim	&pclmulqdq	($Xhi,$Hkey,0x11);	#######
1106290207Sjkim	&pclmulqdq	($T2,$T3,0x10);		#######
1107290207Sjkim	&movdqa		($T3,&QWP(0,$const));
1108238384Sjkim
1109290207Sjkim	&xorps		($Xi,$Xn);		# (H*Ii+1) + H^2*(Ii+Xi)
1110290207Sjkim	&xorps		($Xhi,$Xhn);
1111290207Sjkim	&pxor		($T1,$Xi);		# aggregated Karatsuba post-processing
1112290207Sjkim	&pxor		($T1,$Xhi);		#
1113290207Sjkim
1114290207Sjkim	&pxor		($T2,$T1);		#
1115290207Sjkim
1116290207Sjkim	&movdqa		($T1,$T2);		#
1117290207Sjkim	&psrldq		($T2,8);
1118290207Sjkim	&pslldq		($T1,8);		#
1119290207Sjkim	&pxor		($Xhi,$T2);
1120290207Sjkim	&pxor		($Xi,$T1);		#
1121290207Sjkim
1122238384Sjkim	&reduction_alg9	($Xhi,$Xi);
1123238384Sjkim
1124238384Sjkim	&test		($len,$len);
1125238384Sjkim	&jnz		(&label("done"));
1126238384Sjkim
1127238384Sjkim	&movups		($Hkey,&QWP(0,$Htbl));	# load H
1128238384Sjkim&set_label("odd_tail");
1129238384Sjkim	&movdqu		($T1,&QWP(0,$inp));	# Ii
1130238384Sjkim	&pshufb		($T1,$T3);
1131238384Sjkim	&pxor		($Xi,$T1);		# Ii+Xi
1132238384Sjkim
1133238384Sjkim	&clmul64x64_T2	($Xhi,$Xi,$Hkey);	# H*(Ii+Xi)
1134238384Sjkim	&reduction_alg9	($Xhi,$Xi);
1135238384Sjkim
1136238384Sjkim&set_label("done");
1137238384Sjkim	&pshufb		($Xi,$T3);
1138238384Sjkim	&movdqu		(&QWP(0,$Xip),$Xi);
1139238384Sjkim&function_end("gcm_ghash_clmul");
1140238384Sjkim
1141238384Sjkim} else {		# Algorith 5. Kept for reference purposes.
1142238384Sjkim
1143238384Sjkimsub reduction_alg5 {	# 19/16 times faster than Intel version
1144238384Sjkimmy ($Xhi,$Xi)=@_;
1145238384Sjkim
1146238384Sjkim	# <<1
1147238384Sjkim	&movdqa		($T1,$Xi);		#
1148238384Sjkim	&movdqa		($T2,$Xhi);
1149238384Sjkim	&pslld		($Xi,1);
1150238384Sjkim	&pslld		($Xhi,1);		#
1151238384Sjkim	&psrld		($T1,31);
1152238384Sjkim	&psrld		($T2,31);		#
1153238384Sjkim	&movdqa		($T3,$T1);
1154238384Sjkim	&pslldq		($T1,4);
1155238384Sjkim	&psrldq		($T3,12);		#
1156238384Sjkim	&pslldq		($T2,4);
1157238384Sjkim	&por		($Xhi,$T3);		#
1158238384Sjkim	&por		($Xi,$T1);
1159238384Sjkim	&por		($Xhi,$T2);		#
1160238384Sjkim
1161238384Sjkim	# 1st phase
1162238384Sjkim	&movdqa		($T1,$Xi);
1163238384Sjkim	&movdqa		($T2,$Xi);
1164238384Sjkim	&movdqa		($T3,$Xi);		#
1165238384Sjkim	&pslld		($T1,31);
1166238384Sjkim	&pslld		($T2,30);
1167238384Sjkim	&pslld		($Xi,25);		#
1168238384Sjkim	&pxor		($T1,$T2);
1169238384Sjkim	&pxor		($T1,$Xi);		#
1170238384Sjkim	&movdqa		($T2,$T1);		#
1171238384Sjkim	&pslldq		($T1,12);
1172238384Sjkim	&psrldq		($T2,4);		#
1173238384Sjkim	&pxor		($T3,$T1);
1174238384Sjkim
1175238384Sjkim	# 2nd phase
1176238384Sjkim	&pxor		($Xhi,$T3);		#
1177238384Sjkim	&movdqa		($Xi,$T3);
1178238384Sjkim	&movdqa		($T1,$T3);
1179238384Sjkim	&psrld		($Xi,1);		#
1180238384Sjkim	&psrld		($T1,2);
1181238384Sjkim	&psrld		($T3,7);		#
1182238384Sjkim	&pxor		($Xi,$T1);
1183238384Sjkim	&pxor		($Xhi,$T2);
1184238384Sjkim	&pxor		($Xi,$T3);		#
1185238384Sjkim	&pxor		($Xi,$Xhi);		#
1186238384Sjkim}
1187238384Sjkim
1188238384Sjkim&function_begin_B("gcm_init_clmul");
1189238384Sjkim	&mov		($Htbl,&wparam(0));
1190238384Sjkim	&mov		($Xip,&wparam(1));
1191238384Sjkim
1192238384Sjkim	&call		(&label("pic"));
1193238384Sjkim&set_label("pic");
1194238384Sjkim	&blindpop	($const);
1195238384Sjkim	&lea		($const,&DWP(&label("bswap")."-".&label("pic"),$const));
1196238384Sjkim
1197238384Sjkim	&movdqu		($Hkey,&QWP(0,$Xip));
1198238384Sjkim	&pshufd		($Hkey,$Hkey,0b01001110);# dword swap
1199238384Sjkim
1200238384Sjkim	# calculate H^2
1201238384Sjkim	&movdqa		($Xi,$Hkey);
1202238384Sjkim	&clmul64x64_T3	($Xhi,$Xi,$Hkey);
1203238384Sjkim	&reduction_alg5	($Xhi,$Xi);
1204238384Sjkim
1205238384Sjkim	&movdqu		(&QWP(0,$Htbl),$Hkey);	# save H
1206238384Sjkim	&movdqu		(&QWP(16,$Htbl),$Xi);	# save H^2
1207238384Sjkim
1208238384Sjkim	&ret		();
1209238384Sjkim&function_end_B("gcm_init_clmul");
1210238384Sjkim
1211238384Sjkim&function_begin_B("gcm_gmult_clmul");
1212238384Sjkim	&mov		($Xip,&wparam(0));
1213238384Sjkim	&mov		($Htbl,&wparam(1));
1214238384Sjkim
1215238384Sjkim	&call		(&label("pic"));
1216238384Sjkim&set_label("pic");
1217238384Sjkim	&blindpop	($const);
1218238384Sjkim	&lea		($const,&DWP(&label("bswap")."-".&label("pic"),$const));
1219238384Sjkim
1220238384Sjkim	&movdqu		($Xi,&QWP(0,$Xip));
1221238384Sjkim	&movdqa		($Xn,&QWP(0,$const));
1222238384Sjkim	&movdqu		($Hkey,&QWP(0,$Htbl));
1223238384Sjkim	&pshufb		($Xi,$Xn);
1224238384Sjkim
1225238384Sjkim	&clmul64x64_T3	($Xhi,$Xi,$Hkey);
1226238384Sjkim	&reduction_alg5	($Xhi,$Xi);
1227238384Sjkim
1228238384Sjkim	&pshufb		($Xi,$Xn);
1229238384Sjkim	&movdqu		(&QWP(0,$Xip),$Xi);
1230238384Sjkim
1231238384Sjkim	&ret	();
1232238384Sjkim&function_end_B("gcm_gmult_clmul");
1233238384Sjkim
1234238384Sjkim&function_begin("gcm_ghash_clmul");
1235238384Sjkim	&mov		($Xip,&wparam(0));
1236238384Sjkim	&mov		($Htbl,&wparam(1));
1237238384Sjkim	&mov		($inp,&wparam(2));
1238238384Sjkim	&mov		($len,&wparam(3));
1239238384Sjkim
1240238384Sjkim	&call		(&label("pic"));
1241238384Sjkim&set_label("pic");
1242238384Sjkim	&blindpop	($const);
1243238384Sjkim	&lea		($const,&DWP(&label("bswap")."-".&label("pic"),$const));
1244238384Sjkim
1245238384Sjkim	&movdqu		($Xi,&QWP(0,$Xip));
1246238384Sjkim	&movdqa		($T3,&QWP(0,$const));
1247238384Sjkim	&movdqu		($Hkey,&QWP(0,$Htbl));
1248238384Sjkim	&pshufb		($Xi,$T3);
1249238384Sjkim
1250238384Sjkim	&sub		($len,0x10);
1251238384Sjkim	&jz		(&label("odd_tail"));
1252238384Sjkim
1253238384Sjkim	#######
1254238384Sjkim	# Xi+2 =[H*(Ii+1 + Xi+1)] mod P =
1255238384Sjkim	#	[(H*Ii+1) + (H*Xi+1)] mod P =
1256238384Sjkim	#	[(H*Ii+1) + H^2*(Ii+Xi)] mod P
1257238384Sjkim	#
1258238384Sjkim	&movdqu		($T1,&QWP(0,$inp));	# Ii
1259238384Sjkim	&movdqu		($Xn,&QWP(16,$inp));	# Ii+1
1260238384Sjkim	&pshufb		($T1,$T3);
1261238384Sjkim	&pshufb		($Xn,$T3);
1262238384Sjkim	&pxor		($Xi,$T1);		# Ii+Xi
1263238384Sjkim
1264238384Sjkim	&clmul64x64_T3	($Xhn,$Xn,$Hkey);	# H*Ii+1
1265238384Sjkim	&movdqu		($Hkey,&QWP(16,$Htbl));	# load H^2
1266238384Sjkim
1267238384Sjkim	&sub		($len,0x20);
1268238384Sjkim	&lea		($inp,&DWP(32,$inp));	# i+=2
1269238384Sjkim	&jbe		(&label("even_tail"));
1270238384Sjkim
1271238384Sjkim&set_label("mod_loop");
1272238384Sjkim	&clmul64x64_T3	($Xhi,$Xi,$Hkey);	# H^2*(Ii+Xi)
1273238384Sjkim	&movdqu		($Hkey,&QWP(0,$Htbl));	# load H
1274238384Sjkim
1275238384Sjkim	&pxor		($Xi,$Xn);		# (H*Ii+1) + H^2*(Ii+Xi)
1276238384Sjkim	&pxor		($Xhi,$Xhn);
1277238384Sjkim
1278238384Sjkim	&reduction_alg5	($Xhi,$Xi);
1279238384Sjkim
1280238384Sjkim	#######
1281238384Sjkim	&movdqa		($T3,&QWP(0,$const));
1282238384Sjkim	&movdqu		($T1,&QWP(0,$inp));	# Ii
1283238384Sjkim	&movdqu		($Xn,&QWP(16,$inp));	# Ii+1
1284238384Sjkim	&pshufb		($T1,$T3);
1285238384Sjkim	&pshufb		($Xn,$T3);
1286238384Sjkim	&pxor		($Xi,$T1);		# Ii+Xi
1287238384Sjkim
1288238384Sjkim	&clmul64x64_T3	($Xhn,$Xn,$Hkey);	# H*Ii+1
1289238384Sjkim	&movdqu		($Hkey,&QWP(16,$Htbl));	# load H^2
1290238384Sjkim
1291238384Sjkim	&sub		($len,0x20);
1292238384Sjkim	&lea		($inp,&DWP(32,$inp));
1293238384Sjkim	&ja		(&label("mod_loop"));
1294238384Sjkim
1295238384Sjkim&set_label("even_tail");
1296238384Sjkim	&clmul64x64_T3	($Xhi,$Xi,$Hkey);	# H^2*(Ii+Xi)
1297238384Sjkim
1298238384Sjkim	&pxor		($Xi,$Xn);		# (H*Ii+1) + H^2*(Ii+Xi)
1299238384Sjkim	&pxor		($Xhi,$Xhn);
1300238384Sjkim
1301238384Sjkim	&reduction_alg5	($Xhi,$Xi);
1302238384Sjkim
1303238384Sjkim	&movdqa		($T3,&QWP(0,$const));
1304238384Sjkim	&test		($len,$len);
1305238384Sjkim	&jnz		(&label("done"));
1306238384Sjkim
1307238384Sjkim	&movdqu		($Hkey,&QWP(0,$Htbl));	# load H
1308238384Sjkim&set_label("odd_tail");
1309238384Sjkim	&movdqu		($T1,&QWP(0,$inp));	# Ii
1310238384Sjkim	&pshufb		($T1,$T3);
1311238384Sjkim	&pxor		($Xi,$T1);		# Ii+Xi
1312238384Sjkim
1313238384Sjkim	&clmul64x64_T3	($Xhi,$Xi,$Hkey);	# H*(Ii+Xi)
1314238384Sjkim	&reduction_alg5	($Xhi,$Xi);
1315238384Sjkim
1316238384Sjkim	&movdqa		($T3,&QWP(0,$const));
1317238384Sjkim&set_label("done");
1318238384Sjkim	&pshufb		($Xi,$T3);
1319238384Sjkim	&movdqu		(&QWP(0,$Xip),$Xi);
1320238384Sjkim&function_end("gcm_ghash_clmul");
1321238384Sjkim
1322238384Sjkim}
1323238384Sjkim
1324238384Sjkim&set_label("bswap",64);
1325238384Sjkim	&data_byte(15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,0);
1326238384Sjkim	&data_byte(1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0xc2);	# 0x1c2_polynomial
1327238384Sjkim&set_label("rem_8bit",64);
1328238384Sjkim	&data_short(0x0000,0x01C2,0x0384,0x0246,0x0708,0x06CA,0x048C,0x054E);
1329238384Sjkim	&data_short(0x0E10,0x0FD2,0x0D94,0x0C56,0x0918,0x08DA,0x0A9C,0x0B5E);
1330238384Sjkim	&data_short(0x1C20,0x1DE2,0x1FA4,0x1E66,0x1B28,0x1AEA,0x18AC,0x196E);
1331238384Sjkim	&data_short(0x1230,0x13F2,0x11B4,0x1076,0x1538,0x14FA,0x16BC,0x177E);
1332238384Sjkim	&data_short(0x3840,0x3982,0x3BC4,0x3A06,0x3F48,0x3E8A,0x3CCC,0x3D0E);
1333238384Sjkim	&data_short(0x3650,0x3792,0x35D4,0x3416,0x3158,0x309A,0x32DC,0x331E);
1334238384Sjkim	&data_short(0x2460,0x25A2,0x27E4,0x2626,0x2368,0x22AA,0x20EC,0x212E);
1335238384Sjkim	&data_short(0x2A70,0x2BB2,0x29F4,0x2836,0x2D78,0x2CBA,0x2EFC,0x2F3E);
1336238384Sjkim	&data_short(0x7080,0x7142,0x7304,0x72C6,0x7788,0x764A,0x740C,0x75CE);
1337238384Sjkim	&data_short(0x7E90,0x7F52,0x7D14,0x7CD6,0x7998,0x785A,0x7A1C,0x7BDE);
1338238384Sjkim	&data_short(0x6CA0,0x6D62,0x6F24,0x6EE6,0x6BA8,0x6A6A,0x682C,0x69EE);
1339238384Sjkim	&data_short(0x62B0,0x6372,0x6134,0x60F6,0x65B8,0x647A,0x663C,0x67FE);
1340238384Sjkim	&data_short(0x48C0,0x4902,0x4B44,0x4A86,0x4FC8,0x4E0A,0x4C4C,0x4D8E);
1341238384Sjkim	&data_short(0x46D0,0x4712,0x4554,0x4496,0x41D8,0x401A,0x425C,0x439E);
1342238384Sjkim	&data_short(0x54E0,0x5522,0x5764,0x56A6,0x53E8,0x522A,0x506C,0x51AE);
1343238384Sjkim	&data_short(0x5AF0,0x5B32,0x5974,0x58B6,0x5DF8,0x5C3A,0x5E7C,0x5FBE);
1344238384Sjkim	&data_short(0xE100,0xE0C2,0xE284,0xE346,0xE608,0xE7CA,0xE58C,0xE44E);
1345238384Sjkim	&data_short(0xEF10,0xEED2,0xEC94,0xED56,0xE818,0xE9DA,0xEB9C,0xEA5E);
1346238384Sjkim	&data_short(0xFD20,0xFCE2,0xFEA4,0xFF66,0xFA28,0xFBEA,0xF9AC,0xF86E);
1347238384Sjkim	&data_short(0xF330,0xF2F2,0xF0B4,0xF176,0xF438,0xF5FA,0xF7BC,0xF67E);
1348238384Sjkim	&data_short(0xD940,0xD882,0xDAC4,0xDB06,0xDE48,0xDF8A,0xDDCC,0xDC0E);
1349238384Sjkim	&data_short(0xD750,0xD692,0xD4D4,0xD516,0xD058,0xD19A,0xD3DC,0xD21E);
1350238384Sjkim	&data_short(0xC560,0xC4A2,0xC6E4,0xC726,0xC268,0xC3AA,0xC1EC,0xC02E);
1351238384Sjkim	&data_short(0xCB70,0xCAB2,0xC8F4,0xC936,0xCC78,0xCDBA,0xCFFC,0xCE3E);
1352238384Sjkim	&data_short(0x9180,0x9042,0x9204,0x93C6,0x9688,0x974A,0x950C,0x94CE);
1353238384Sjkim	&data_short(0x9F90,0x9E52,0x9C14,0x9DD6,0x9898,0x995A,0x9B1C,0x9ADE);
1354238384Sjkim	&data_short(0x8DA0,0x8C62,0x8E24,0x8FE6,0x8AA8,0x8B6A,0x892C,0x88EE);
1355238384Sjkim	&data_short(0x83B0,0x8272,0x8034,0x81F6,0x84B8,0x857A,0x873C,0x86FE);
1356238384Sjkim	&data_short(0xA9C0,0xA802,0xAA44,0xAB86,0xAEC8,0xAF0A,0xAD4C,0xAC8E);
1357238384Sjkim	&data_short(0xA7D0,0xA612,0xA454,0xA596,0xA0D8,0xA11A,0xA35C,0xA29E);
1358238384Sjkim	&data_short(0xB5E0,0xB422,0xB664,0xB7A6,0xB2E8,0xB32A,0xB16C,0xB0AE);
1359238384Sjkim	&data_short(0xBBF0,0xBA32,0xB874,0xB9B6,0xBCF8,0xBD3A,0xBF7C,0xBEBE);
1360290207Sjkim}}	# $sse2
1361290207Sjkim
1362290207Sjkim&set_label("rem_4bit",64);
1363290207Sjkim	&data_word(0,0x0000<<$S,0,0x1C20<<$S,0,0x3840<<$S,0,0x2460<<$S);
1364290207Sjkim	&data_word(0,0x7080<<$S,0,0x6CA0<<$S,0,0x48C0<<$S,0,0x54E0<<$S);
1365290207Sjkim	&data_word(0,0xE100<<$S,0,0xFD20<<$S,0,0xD940<<$S,0,0xC560<<$S);
1366290207Sjkim	&data_word(0,0x9180<<$S,0,0x8DA0<<$S,0,0xA9C0<<$S,0,0xB5E0<<$S);
1367238384Sjkim}}}	# !$x86only
1368238384Sjkim
1369238384Sjkim&asciz("GHASH for x86, CRYPTOGAMS by <appro\@openssl.org>");
1370238384Sjkim&asm_finish();
1371238384Sjkim
1372238384Sjkim# A question was risen about choice of vanilla MMX. Or rather why wasn't
1373238384Sjkim# SSE2 chosen instead? In addition to the fact that MMX runs on legacy
1374238384Sjkim# CPUs such as PIII, "4-bit" MMX version was observed to provide better
1375238384Sjkim# performance than *corresponding* SSE2 one even on contemporary CPUs.
1376238384Sjkim# SSE2 results were provided by Peter-Michael Hager. He maintains SSE2
1377238384Sjkim# implementation featuring full range of lookup-table sizes, but with
1378238384Sjkim# per-invocation lookup table setup. Latter means that table size is
1379238384Sjkim# chosen depending on how much data is to be hashed in every given call,
1380238384Sjkim# more data - larger table. Best reported result for Core2 is ~4 cycles
1381238384Sjkim# per processed byte out of 64KB block. This number accounts even for
1382238384Sjkim# 64KB table setup overhead. As discussed in gcm128.c we choose to be
1383238384Sjkim# more conservative in respect to lookup table sizes, but how do the
1384238384Sjkim# results compare? Minimalistic "256B" MMX version delivers ~11 cycles
1385238384Sjkim# on same platform. As also discussed in gcm128.c, next in line "8-bit
1386238384Sjkim# Shoup's" or "4KB" method should deliver twice the performance of
1387238384Sjkim# "256B" one, in other words not worse than ~6 cycles per byte. It
1388238384Sjkim# should be also be noted that in SSE2 case improvement can be "super-
1389238384Sjkim# linear," i.e. more than twice, mostly because >>8 maps to single
1390238384Sjkim# instruction on SSE2 register. This is unlike "4-bit" case when >>4
1391238384Sjkim# maps to same amount of instructions in both MMX and SSE2 cases.
1392238384Sjkim# Bottom line is that switch to SSE2 is considered to be justifiable
1393238384Sjkim# only in case we choose to implement "8-bit" method...
1394