1<!doctype html public "-//W3C//DTD HTML 4.01 Transitional//EN" 2 "http://www.w3.org/TR/html4/loose.dtd"> 3 4<html> 5 6<head> 7 8<title>Postfix Bottleneck Analysis</title> 9 10<meta http-equiv="Content-Type" content="text/html; charset=us-ascii"> 11 12</head> 13 14<body> 15 16<h1><img src="postfix-logo.jpg" width="203" height="98" ALT="">Postfix Bottleneck Analysis</h1> 17 18<hr> 19 20<h2>Purpose of this document </h2> 21 22<p> This document is an introduction to Postfix queue congestion analysis. 23It explains how the qshape(1) program can help to track down the 24reason for queue congestion. qshape(1) is bundled with Postfix 252.1 and later source code, under the "auxiliary" directory. This 26document describes qshape(1) as bundled with Postfix 2.4. </p> 27 28<p> This document covers the following topics: </p> 29 30<ul> 31 32<li><a href="#qshape">Introducing the qshape tool</a> 33 34<li><a href="#trouble_shooting">Trouble shooting with qshape</a> 35 36<li><a href="#healthy">Example 1: Healthy queue</a> 37 38<li><a href="#dictionary_bounce">Example 2: Deferred queue full of 39dictionary attack bounces</a></li> 40 41<li><a href="#active_congestion">Example 3: Congestion in the active 42queue</a></li> 43 44<li><a href="#backlog">Example 4: High volume destination backlog</a> 45 46<li><a href="#queues">Postfix queue directories</a> 47 48<ul> 49 50<li> <a href="#maildrop_queue"> The "maildrop" queue </a> 51 52<li> <a href="#hold_queue"> The "hold" queue </a> 53 54<li> <a href="#incoming_queue"> The "incoming" queue </a> 55 56<li> <a href="#active_queue"> The "active" queue </a> 57 58<li> <a href="#deferred_queue"> The "deferred" queue </a> 59 60</ul> 61 62<li><a href="#credits">Credits</a> 63 64</ul> 65 66<h2><a name="qshape">Introducing the qshape tool</a></h2> 67 68<p> When mail is draining slowly or the queue is unexpectedly large, 69run qshape(1) as the super-user (root) to help zero in on the problem. 70The qshape(1) program displays a tabular view of the Postfix queue 71contents. </p> 72 73<ul> 74 75<li> <p> On the horizontal axis, it displays the queue age with 76fine granularity for recent messages and (geometrically) less fine 77granularity for older messages. </p> 78 79<li> <p> The vertical axis displays the destination (or with the 80"-s" switch the sender) domain. Domains with the most messages are 81listed first. </p> 82 83</ul> 84 85<p> For example, in the output below we see the top 10 lines of 86the (mostly forged) sender domain distribution for captured spam 87in the "hold" queue: </p> 88 89<blockquote> 90<pre> 91$ qshape -s hold | head 92 T 5 10 20 40 80 160 320 640 1280 1280+ 93 TOTAL 486 0 0 1 0 0 2 4 20 40 419 94 yahoo.com 14 0 0 1 0 0 0 0 1 0 12 95 extremepricecuts.net 13 0 0 0 0 0 0 0 2 0 11 96 ms35.hinet.net 12 0 0 0 0 0 0 0 0 1 11 97 winnersdaily.net 12 0 0 0 0 0 0 0 2 0 10 98 hotmail.com 11 0 0 0 0 0 0 0 0 1 10 99 worldnet.fr 6 0 0 0 0 0 0 0 0 0 6 100 ms41.hinet.net 6 0 0 0 0 0 0 0 0 0 6 101 osn.de 5 0 0 0 0 0 1 0 0 0 4 102</pre> 103</blockquote> 104 105<ul> 106 107<li> <p> The "T" column shows the total (in this case sender) count 108for each domain. The columns with numbers above them, show counts 109for messages aged fewer than that many minutes, but not younger 110than the age limit for the previous column. The row labeled "TOTAL" 111shows the total count for all domains. </p> 112 113<li> <p> In this example, there are 14 messages allegedly from 114yahoo.com, 1 between 10 and 20 minutes old, 1 between 320 and 640 115minutes old and 12 older than 1280 minutes (1440 minutes in a day). 116</p> 117 118</ul> 119 120<p> When the output is a terminal intermediate results showing the top 20 121domains (-n option) are displayed after every 1000 messages (-N option) 122and the final output also shows only the top 20 domains. This makes 123qshape useful even when the deferred queue is very large and it may 124otherwise take prohibitively long to read the entire deferred queue. </p> 125 126<p> By default, qshape shows statistics for the union of both the 127incoming and active queues which are the most relevant queues to 128look at when analyzing performance. </p> 129 130<p> One can request an alternate list of queues: </p> 131 132<blockquote> 133<pre> 134$ qshape deferred 135$ qshape incoming active deferred 136</pre> 137</blockquote> 138 139<p> this will show the age distribution of the deferred queue or 140the union of the incoming active and deferred queues. </p> 141 142<p> Command line options control the number of display "buckets", 143the age limit for the smallest bucket, display of parent domain 144counts and so on. The "-h" option outputs a summary of the available 145switches. </p> 146 147<h2><a name="trouble_shooting">Trouble shooting with qshape</a> 148</h2> 149 150<p> Large numbers in the qshape output represent a large number of 151messages that are destined to (or alleged to come from) a particular 152domain. It should be possible to tell at a glance which domains 153dominate the queue sender or recipient counts, approximately when 154a burst of mail started, and when it stopped. </p> 155 156<p> The problem destinations or sender domains appear near the top 157left corner of the output table. Remember that the active queue 158can accommodate up to 20000 ($qmgr_message_active_limit) messages. 159To check whether this limit has been reached, use: </p> 160 161<blockquote> 162<pre> 163$ qshape -s active <i>(show sender statistics)</i> 164</pre> 165</blockquote> 166 167<p> If the total sender count is below 20000 the active queue is 168not yet saturated, any high volume sender domains show near the 169top of the output. 170 171<p> With oqmgr(8) the active queue is also limited to at most 20000 172recipient addresses ($qmgr_message_recipient_limit). To check for 173exhaustion of this limit use: </p> 174 175<blockquote> 176<pre> 177$ qshape active <i>(show recipient statistics)</i> 178</pre> 179</blockquote> 180 181<p> Having found the high volume domains, it is often useful to 182search the logs for recent messages pertaining to the domains in 183question. </p> 184 185<blockquote> 186<pre> 187# Find deliveries to example.com 188# 189$ tail -10000 /var/log/maillog | 190 egrep -i ': to=<.*@example\.com>,' | 191 less 192 193# Find messages from example.com 194# 195$ tail -10000 /var/log/maillog | 196 egrep -i ': from=<.*@example\.com>,' | 197 less 198</pre> 199</blockquote> 200 201<p> You may want to drill in on some specific queue ids: </p> 202 203<blockquote> 204<pre> 205# Find all messages for a specific queue id. 206# 207$ tail -10000 /var/log/maillog | egrep ': 2B2173FF68: ' 208</pre> 209</blockquote> 210 211<p> Also look for queue manager warning messages in the log. These 212warnings can suggest strategies to reduce congestion. </p> 213 214<blockquote> 215<pre> 216$ egrep 'qmgr.*(panic|fatal|error|warning):' /var/log/maillog 217</pre> 218</blockquote> 219 220<p> When all else fails try the Postfix mailing list for help, but 221please don't forget to include the top 10 or 20 lines of qshape(1) 222output. </p> 223 224<h2><a name="healthy">Example 1: Healthy queue</a></h2> 225 226<p> When looking at just the incoming and active queues, under 227normal conditions (no congestion) the incoming and active queues 228are nearly empty. Mail leaves the system almost as quickly as it 229comes in or is deferred without congestion in the active queue. 230</p> 231 232<blockquote> 233<pre> 234$ qshape <i>(show incoming and active queue status)</i> 235 236 T 5 10 20 40 80 160 320 640 1280 1280+ 237 TOTAL 5 0 0 0 1 0 0 0 1 1 2 238 meri.uwasa.fi 5 0 0 0 1 0 0 0 1 1 2 239</pre> 240</blockquote> 241 242<p> If one looks at the two queues separately, the incoming queue 243is empty or perhaps briefly has one or two messages, while the 244active queue holds more messages and for a somewhat longer time: 245</p> 246 247<blockquote> 248<pre> 249$ qshape incoming 250 251 T 5 10 20 40 80 160 320 640 1280 1280+ 252 TOTAL 0 0 0 0 0 0 0 0 0 0 0 253 254$ qshape active 255 256 T 5 10 20 40 80 160 320 640 1280 1280+ 257 TOTAL 5 0 0 0 1 0 0 0 1 1 2 258 meri.uwasa.fi 5 0 0 0 1 0 0 0 1 1 2 259</pre> 260</blockquote> 261 262<h2><a name="dictionary_bounce">Example 2: Deferred queue full of 263dictionary attack bounces</a></h2> 264 265<p> This is from a server where recipient validation is not yet 266available for some of the hosted domains. Dictionary attacks on 267the unvalidated domains result in bounce backscatter. The bounces 268dominate the queue, but with proper tuning they do not saturate the 269incoming or active queues. The high volume of deferred mail is not 270a direct cause for alarm. </p> 271 272<blockquote> 273<pre> 274$ qshape deferred | head 275 276 T 5 10 20 40 80 160 320 640 1280 1280+ 277 TOTAL 2234 4 2 5 9 31 57 108 201 464 1353 278 heyhihellothere.com 207 0 0 1 1 6 6 8 25 68 92 279 pleazerzoneprod.com 105 0 0 0 0 0 0 0 5 44 56 280 groups.msn.com 63 2 1 2 4 4 14 14 14 8 0 281 orion.toppoint.de 49 0 0 0 1 0 2 4 3 16 23 282 kali.com.cn 46 0 0 0 0 1 0 2 6 12 25 283 meri.uwasa.fi 44 0 0 0 0 1 0 2 8 11 22 284 gjr.paknet.com.pk 43 1 0 0 1 1 3 3 6 12 16 285 aristotle.algonet.se 41 0 0 0 0 0 1 2 11 12 15 286</pre> 287</blockquote> 288 289<p> The domains shown are mostly bulk-mailers and all the volume 290is the tail end of the time distribution, showing that short term 291arrival rates are moderate. Larger numbers and lower message ages 292are more indicative of current trouble. Old mail still going nowhere 293is largely harmless so long as the active and incoming queues are 294short. We can also see that the groups.msn.com undeliverables are 295low rate steady stream rather than a concentrated dictionary attack 296that is now over. </p> 297 298<blockquote> 299<pre> 300$ qshape -s deferred | head 301 302 T 5 10 20 40 80 160 320 640 1280 1280+ 303 TOTAL 2193 4 4 5 8 33 56 104 205 465 1309 304 MAILER-DAEMON 1709 4 4 5 8 33 55 101 198 452 849 305 example.com 263 0 0 0 0 0 0 0 0 2 261 306 example.org 209 0 0 0 0 0 1 3 6 11 188 307 example.net 6 0 0 0 0 0 0 0 0 0 6 308 example.edu 3 0 0 0 0 0 0 0 0 0 3 309 example.gov 2 0 0 0 0 0 0 0 1 0 1 310 example.mil 1 0 0 0 0 0 0 0 0 0 1 311</pre> 312</blockquote> 313 314<p> Looking at the sender distribution, we see that as expected 315most of the messages are bounces. </p> 316 317<h2><a name="active_congestion">Example 3: Congestion in the active 318queue</a></h2> 319 320<p> This example is taken from a Feb 2004 discussion on the Postfix 321Users list. Congestion was reported with the active and incoming 322queues large and not shrinking despite very large delivery agent 323process limits. The thread is archived at: 324http://groups.google.com/groups?threadm=c0b7js$2r65$1@FreeBSD.csie.NCTU.edu.tw 325and 326http://archives.neohapsis.com/archives/postfix/2004-02/thread.html#1371 327</p> 328 329<p> Using an older version of qshape(1) it was quickly determined 330that all the messages were for just a few destinations: </p> 331 332<blockquote> 333<pre> 334$ qshape <i>(show incoming and active queue status)</i> 335 336 T A 5 10 20 40 80 160 320 320+ 337 TOTAL 11775 9996 0 0 1 1 42 94 221 1420 338 user.sourceforge.net 7678 7678 0 0 0 0 0 0 0 0 339 lists.sourceforge.net 2313 2313 0 0 0 0 0 0 0 0 340 gzd.gotdns.com 102 0 0 0 0 0 0 0 2 100 341</pre> 342</blockquote> 343 344<p> The "A" column showed the count of messages in the active queue, 345and the numbered columns showed totals for the deferred queue. At 34610000 messages (Postfix 1.x active queue size limit) the active 347queue is full. The incoming was growing rapidly. </p> 348 349<p> With the trouble destinations clearly identified, the administrator 350quickly found and fixed the problem. It is substantially harder to 351glean the same information from the logs. While a careful reading 352of mailq(1) output should yield similar results, it is much harder 353to gauge the magnitude of the problem by looking at the queue 354one message at a time. </p> 355 356<h2><a name="backlog">Example 4: High volume destination backlog</a></h2> 357 358<p> When a site you send a lot of email to is down or slow, mail 359messages will rapidly build up in the deferred queue, or worse, in 360the active queue. The qshape output will show large numbers for 361the destination domain in all age buckets that overlap the starting 362time of the problem: </p> 363 364<blockquote> 365<pre> 366$ qshape deferred | head 367 368 T 5 10 20 40 80 160 320 640 1280 1280+ 369 TOTAL 5000 200 200 400 800 1600 1000 200 200 200 200 370 highvolume.com 4000 160 160 320 640 1280 1440 0 0 0 0 371 ... 372</pre> 373</blockquote> 374 375<p> Here the "highvolume.com" destination is continuing to accumulate 376deferred mail. The incoming and active queues are fine, but the 377deferred queue started growing some time between 1 and 2 hours ago 378and continues to grow. </p> 379 380<p> If the high volume destination is not down, but is instead 381slow, one might see similar congestion in the active queue. Active 382queue congestion is a greater cause for alarm; one might need to 383take measures to ensure that the mail is deferred instead or even 384add an access(5) rule asking the sender to try again later. </p> 385 386<p> If a high volume destination exhibits frequent bursts of consecutive 387connections refused by all MX hosts or "421 Server busy errors", it 388is possible for the queue manager to mark the destination as "dead" 389despite the transient nature of the errors. The destination will be 390retried again after the expiration of a $minimal_backoff_time timer. 391If the error bursts are frequent enough it may be that only a small 392quantity of email is delivered before the destination is again marked 393"dead". In some cases enabling static (not on demand) connection 394caching by listing the appropriate nexthop domain in a table included in 395"smtp_connection_cache_destinations" may help to reduce the error rate, 396because most messages will re-use existing connections. </p> 397 398<p> The MTA that has been observed most frequently to exhibit such 399bursts of errors is Microsoft Exchange, which refuses connections 400under load. Some proxy virus scanners in front of the Exchange 401server propagate the refused connection to the client as a "421" 402error. </p> 403 404<p> Note that it is now possible to configure Postfix to exhibit similarly 405erratic behavior by misconfiguring the anvil(8) service. Do not use 406anvil(8) for steady-state rate limiting, its purpose is (unintentional) 407DoS prevention and the rate limits set should be very generous! </p> 408 409<p> If one finds oneself needing to deliver a high volume of mail to a 410destination that exhibits frequent brief bursts of errors and connection 411caching does not solve the problem, there is a subtle workaround. </p> 412 413<ul> 414 415<li> <p> Postfix version 2.5 and later: </p> 416 417<ul> 418 419<li> <p> In master.cf set up a dedicated clone of the "smtp" transport 420for the destination in question. In the example below we will call 421it "fragile". </p> 422 423<li> <p> In master.cf configure a reasonable process limit for the 424cloned smtp transport (a number in the 10-20 range is typical). </p> 425 426<li> <p> IMPORTANT!!! In main.cf configure a large per-destination 427pseudo-cohort failure limit for the cloned smtp transport. </p> 428 429<pre> 430/etc/postfix/main.cf: 431 transport_maps = hash:/etc/postfix/transport 432 fragile_destination_concurrency_failed_cohort_limit = 100 433 fragile_destination_concurrency_limit = 20 434 435/etc/postfix/transport: 436 example.com fragile: 437 438/etc/postfix/master.cf: 439 # service type private unpriv chroot wakeup maxproc command 440 fragile unix - - n - 20 smtp 441</pre> 442 443<p> See also the documentation for 444default_destination_concurrency_failed_cohort_limit and 445default_destination_concurrency_limit. </p> 446 447</ul> 448 449<li> <p> Earlier Postfix versions: </p> 450 451<ul> 452 453<li> <p> In master.cf set up a dedicated clone of the "smtp" 454transport for the destination in question. In the example below 455we will call it "fragile". </p> 456 457<li> <p> In master.cf configure a reasonable process limit for the 458transport (a number in the 10-20 range is typical). </p> 459 460<li> <p> IMPORTANT!!! In main.cf configure a very large initial 461and destination concurrency limit for this transport (say 2000). </p> 462 463<pre> 464/etc/postfix/main.cf: 465 transport_maps = hash:/etc/postfix/transport 466 initial_destination_concurrency = 2000 467 fragile_destination_concurrency_limit = 2000 468 469/etc/postfix/transport: 470 example.com fragile: 471 472/etc/postfix/master.cf: 473 # service type private unpriv chroot wakeup maxproc command 474 fragile unix - - n - 20 smtp 475</pre> 476 477<p> See also the documentation for default_destination_concurrency_limit. 478</p> 479 480</ul> 481 482</ul> 483 484<p> The effect of this configuration is that up to 2000 485consecutive errors are tolerated without marking the destination 486dead, while the total concurrency remains reasonable (10-20 487processes). This trick is only for a very specialized situation: 488high volume delivery into a channel with multi-error bursts 489that is capable of high throughput, but is repeatedly throttled by 490the bursts of errors. </p> 491 492<p> When a destination is unable to handle the load even after the 493Postfix process limit is reduced to 1, a desperate measure is to 494insert brief delays between delivery attempts. </p> 495 496<ul> 497 498<li> <p> Postfix version 2.5 and later: </p> 499 500<ul> 501 502<li> <p> In master.cf set up a dedicated clone of the "smtp" transport 503for the problem destination. In the example below we call it "slow". 504</p> 505 506<li> <p> In main.cf configure a short delay between deliveries to 507the same destination. </p> 508 509<pre> 510/etc/postfix/main.cf: 511 transport_maps = hash:/etc/postfix/transport 512 slow_destination_rate_delay = 1 513 slow_destination_concurrency_failed_cohort_limit = 100 514 515/etc/postfix/transport: 516 example.com slow: 517 518/etc/postfix/master.cf: 519 # service type private unpriv chroot wakeup maxproc command 520 slow unix - - n - - smtp 521</pre> 522 523</ul> 524 525<p> See also the documentation for default_destination_rate_delay. </p> 526 527<p> This solution forces the Postfix smtp(8) client to wait for 528$slow_destination_rate_delay seconds between deliveries to the same 529destination. </p> 530 531<p> IMPORTANT!! The large slow_destination_concurrency_failed_cohort_limit 532value is needed. This prevents Postfix from deferring all mail for 533the same destination after only one connection or handshake error 534(the reason for this is that non-zero slow_destination_rate_delay 535forces a per-destination concurrency of 1). </p> 536 537<li> <p> Earlier Postfix versions: </p> 538 539<ul> 540 541<li> <p> In the transport map entry for the problem destination, 542specify a dead host as the primary nexthop. </p> 543 544<li> <p> In the master.cf entry for the transport specify the 545problem destination as the fallback_relay and specify a small 546smtp_connect_timeout value. </p> 547 548<pre> 549/etc/postfix/main.cf: 550 transport_maps = hash:/etc/postfix/transport 551 552/etc/postfix/transport: 553 example.com slow:[dead.host] 554 555/etc/postfix/master.cf: 556 # service type private unpriv chroot wakeup maxproc command 557 slow unix - - n - 1 smtp 558 -o fallback_relay=problem.example.com 559 -o smtp_connect_timeout=1 560 -o smtp_connection_cache_on_demand=no 561</pre> 562 563</ul> 564 565<p> This solution forces the Postfix smtp(8) client to wait for 566$smtp_connect_timeout seconds between deliveries. The connection 567caching feature is disabled to prevent the client from skipping 568over the dead host. </p> 569 570</ul> 571 572<h2><a name="queues">Postfix queue directories</a></h2> 573 574<p> The following sections describe Postfix queues: their purpose, 575what normal behavior looks like, and how to diagnose abnormal 576behavior. </p> 577 578<h3> <a name="maildrop_queue"> The "maildrop" queue </a> </h3> 579 580<p> Messages that have been submitted via the Postfix sendmail(1) 581command, but not yet brought into the main Postfix queue by the 582pickup(8) service, await processing in the "maildrop" queue. Messages 583can be added to the "maildrop" queue even when the Postfix system 584is not running. They will begin to be processed once Postfix is 585started. </p> 586 587<p> The "maildrop" queue is drained by the single threaded pickup(8) 588service scanning the queue directory periodically or when notified 589of new message arrival by the postdrop(1) program. The postdrop(1) 590program is a setgid helper that allows the unprivileged Postfix 591sendmail(1) program to inject mail into the "maildrop" queue and 592to notify the pickup(8) service of its arrival. </p> 593 594<p> All mail that enters the main Postfix queue does so via the 595cleanup(8) service. The cleanup service is responsible for envelope 596and header rewriting, header and body regular expression checks, 597automatic bcc recipient processing, milter content processing, and 598reliable insertion of the message into the Postfix "incoming" queue. </p> 599 600<p> In the absence of excessive CPU consumption in cleanup(8) header 601or body regular expression checks or other software consuming all 602available CPU resources, Postfix performance is disk I/O bound. 603The rate at which the pickup(8) service can inject messages into 604the queue is largely determined by disk access times, since the 605cleanup(8) service must commit the message to stable storage before 606returning success. The same is true of the postdrop(1) program 607writing the message to the "maildrop" directory. </p> 608 609<p> As the pickup service is single threaded, it can only deliver 610one message at a time at a rate that does not exceed the reciprocal 611disk I/O latency (+ CPU if not negligible) of the cleanup service. 612</p> 613 614<p> Congestion in this queue is indicative of an excessive local message 615submission rate or perhaps excessive CPU consumption in the cleanup(8) 616service due to excessive body_checks, or (Postfix ≥ 2.3) high latency 617milters. </p> 618 619<p> Note, that once the active queue is full, the cleanup service 620will attempt to slow down message injection by pausing $in_flow_delay 621for each message. In this case "maildrop" queue congestion may be 622a consequence of congestion downstream, rather than a problem in 623its own right. </p> 624 625<p> Note, you should not attempt to deliver large volumes of mail via 626the pickup(8) service. High volume sites should avoid using "simple" 627content filters that re-inject scanned mail via Postfix sendmail(1) 628and postdrop(1). </p> 629 630<p> A high arrival rate of locally submitted mail may be an indication 631of an uncaught forwarding loop, or a run-away notification program. 632Try to keep the volume of local mail injection to a moderate level. 633</p> 634 635<p> The "postsuper -r" command can place selected messages into 636the "maildrop" queue for reprocessing. This is most useful for 637resetting any stale content_filter settings. Requeuing a large number 638of messages using "postsuper -r" can clearly cause a spike in the 639size of the "maildrop" queue. </p> 640 641<h3> <a name="hold_queue"> The "hold" queue </a> </h3> 642 643<p> The administrator can define "smtpd" access(5) policies, or 644cleanup(8) header/body checks that cause messages to be automatically 645diverted from normal processing and placed indefinitely in the 646"hold" queue. Messages placed in the "hold" queue stay there until 647the administrator intervenes. No periodic delivery attempts are 648made for messages in the "hold" queue. The postsuper(1) command 649can be used to manually release messages into the "deferred" queue. 650</p> 651 652<p> Messages can potentially stay in the "hold" queue longer than 653$maximal_queue_lifetime. If such "old" messages need to be released from 654the "hold" queue, they should typically be moved into the "maildrop" 655queue using "postsuper -r", so that the message gets a new timestamp and 656is given more than one opportunity to be delivered. Messages that are 657"young" can be moved directly into the "deferred" queue using 658"postsuper -H". </p> 659 660<p> The "hold" queue plays little role in Postfix performance, and 661monitoring of the "hold" queue is typically more closely motivated 662by tracking spam and malware, than by performance issues. </p> 663 664<h3> <a name="incoming_queue"> The "incoming" queue </a> </h3> 665 666<p> All new mail entering the Postfix queue is written by the 667cleanup(8) service into the "incoming" queue. New queue files are 668created owned by the "postfix" user with an access bitmask (or 669mode) of 0600. Once a queue file is ready for further processing 670the cleanup(8) service changes the queue file mode to 0700 and 671notifies the queue manager of new mail arrival. The queue manager 672ignores incomplete queue files whose mode is 0600, as these are 673still being written by cleanup. </p> 674 675<p> The queue manager scans the incoming queue bringing any new 676mail into the "active" queue if the active queue resource limits 677have not been exceeded. By default, the active queue accommodates 678at most 20000 messages. Once the active queue message limit is 679reached, the queue manager stops scanning the incoming (and deferred, 680see below) queue. </p> 681 682<p> Under normal conditions the incoming queue is nearly empty (has 683only mode 0600 files), with the queue manager able to import new 684messages into the active queue as soon as they become available. 685</p> 686 687<p> The incoming queue grows when the message input rate spikes 688above the rate at which the queue manager can import messages into 689the active queue. The main factors slowing down the queue manager 690are disk I/O and lookup queries to the trivial-rewrite service. If the queue 691manager is routinely not keeping up, consider not using "slow" 692lookup services (MySQL, LDAP, ...) for transport lookups or speeding 693up the hosts that provide the lookup service. If the problem is I/O 694starvation, consider striping the queue over more disks, faster controllers 695with a battery write cache, or other hardware improvements. At the very 696least, make sure that the queue directory is mounted with the "noatime" 697option if applicable to the underlying filesystem. </p> 698 699<p> The in_flow_delay parameter is used to clamp the input rate 700when the queue manager starts to fall behind. The cleanup(8) service 701will pause for $in_flow_delay seconds before creating a new queue 702file if it cannot obtain a "token" from the queue manager. </p> 703 704<p> Since the number of cleanup(8) processes is limited in most 705cases by the SMTP server concurrency, the input rate can exceed 706the output rate by at most "SMTP connection count" / $in_flow_delay 707messages per second. </p> 708 709<p> With a default process limit of 100, and an in_flow_delay of 7101s, the coupling is strong enough to limit a single run-away injector 711to 1 message per second, but is not strong enough to deflect an 712excessive input rate from many sources at the same time. </p> 713 714<p> If a server is being hammered from multiple directions, consider 715raising the in_flow_delay to 10 seconds, but only if the incoming 716queue is growing even while the active queue is not full and the 717trivial-rewrite service is using a fast transport lookup mechanism. 718</p> 719 720<h3> <a name="active_queue"> The "active" queue </a> </h3> 721 722<p> The queue manager is a delivery agent scheduler; it works to 723ensure fast and fair delivery of mail to all destinations within 724designated resource limits. </p> 725 726<p> The active queue is somewhat analogous to an operating system's 727process run queue. Messages in the active queue are ready to be 728sent (runnable), but are not necessarily in the process of being 729sent (running). </p> 730 731<p> While most Postfix administrators think of the "active" queue 732as a directory on disk, the real "active" queue is a set of data 733structures in the memory of the queue manager process. </p> 734 735<p> Messages in the "maildrop", "hold", "incoming" and "deferred" 736queues (see below) do not occupy memory; they are safely stored on 737disk waiting for their turn to be processed. The envelope information 738for messages in the "active" queue is managed in memory, allowing 739the queue manager to do global scheduling, allocating available 740delivery agent processes to an appropriate message in the active 741queue. </p> 742 743<p> Within the active queue, (multi-recipient) messages are broken 744up into groups of recipients that share the same transport/nexthop 745combination; the group size is capped by the transport's recipient 746concurrency limit. </p> 747 748<p> Multiple recipient groups (from one or more messages) are queued 749for delivery grouped by transport/nexthop combination. The 750<b>destination</b> concurrency limit for the transports caps the number 751of simultaneous delivery attempts for each nexthop. Transports with 752a <b>recipient</b> concurrency limit of 1 are special: these are grouped 753by the actual recipient address rather than the nexthop, yielding 754per-recipient concurrency limits rather than per-domain 755concurrency limits. Per-recipient limits are appropriate when 756performing final delivery to mailboxes rather than when relaying 757to a remote server. </p> 758 759<p> Congestion occurs in the active queue when one or more destinations 760drain slower than the corresponding message input rate. </p> 761 762<p> Input into the active queue comes both from new mail in the "incoming" 763queue, and retries of mail in the "deferred" queue. Should the "deferred" 764queue get really large, retries of old mail can dominate the arrival 765rate of new mail. Systems with more CPU, faster disks and more network 766bandwidth can deal with larger deferred queues, but as a rule of thumb 767the deferred queue scales to somewhere between 100,000 and 1,000,000 768messages with good performance unlikely above that "limit". Systems with 769queues this large should typically stop accepting new mail, or put the 770backlog "on hold" until the underlying issue is fixed (provided that 771there is enough capacity to handle just the new mail). </p> 772 773<p> When a destination is down for some time, the queue manager will 774mark it dead, and immediately defer all mail for the destination without 775trying to assign it to a delivery agent. In this case the messages 776will quickly leave the active queue and end up in the deferred queue 777(with Postfix < 2.4, this is done directly by the queue manager, 778with Postfix ≥ 2.4 this is done via the "retry" delivery agent). </p> 779 780<p> When the destination is instead simply slow, or there is a problem 781causing an excessive arrival rate the active queue will grow and will 782become dominated by mail to the congested destination. </p> 783 784<p> The only way to reduce congestion is to either reduce the input 785rate or increase the throughput. Increasing the throughput requires 786either increasing the concurrency or reducing the latency of 787deliveries. </p> 788 789<p> For high volume sites a key tuning parameter is the number of 790"smtp" delivery agents allocated to the "smtp" and "relay" transports. 791High volume sites tend to send to many different destinations, many 792of which may be down or slow, so a good fraction of the available 793delivery agents will be blocked waiting for slow sites. Also mail 794destined across the globe will incur large SMTP command-response 795latencies, so high message throughput can only be achieved with 796more concurrent delivery agents. </p> 797 798<p> The default "smtp" process limit of 100 is good enough for most 799sites, and may even need to be lowered for sites with low bandwidth 800connections (no use increasing concurrency once the network pipe 801is full). When one finds that the queue is growing on an "idle" 802system (CPU, disk I/O and network not exhausted) the remaining 803reason for congestion is insufficient concurrency in the face of 804a high average latency. If the number of outbound SMTP connections 805(either ESTABLISHED or SYN_SENT) reaches the process limit, mail 806is draining slowly and the system and network are not loaded, raise 807the "smtp" and/or "relay" process limits! </p> 808 809<p> When a high volume destination is served by multiple MX hosts with 810typically low delivery latency, performance can suffer dramatically when 811one of the MX hosts is unresponsive and SMTP connections to that host 812timeout. For example, if there are 2 equal weight MX hosts, the SMTP 813connection timeout is 30 seconds and one of the MX hosts is down, the 814average SMTP connection will take approximately 15 seconds to complete. 815With a default per-destination concurrency limit of 20 connections, 816throughput falls to just over 1 message per second. </p> 817 818<p> The best way to avoid bottlenecks when one or more MX hosts is 819non-responsive is to use connection caching. Connection caching was 820introduced with Postfix 2.2 and is by default enabled on demand for 821destinations with a backlog of mail in the active queue. When connection 822caching is in effect for a particular destination, established connections 823are re-used to send additional messages, this reduces the number of 824connections made per message delivery and maintains good throughput even 825in the face of partial unavailability of the destination's MX hosts. </p> 826 827<p> If connection caching is not available (Postfix < 2.2) or does 828not provide a sufficient latency reduction, especially for the "relay" 829transport used to forward mail to "your own" domains, consider setting 830lower than default SMTP connection timeouts (1-5 seconds) and higher 831than default destination concurrency limits. This will further reduce 832latency and provide more concurrency to maintain throughput should 833latency rise. </p> 834 835<p> Setting high concurrency limits to domains that are not your own may 836be viewed as hostile by the receiving system, and steps may be taken 837to prevent you from monopolizing the destination system's resources. 838The defensive measures may substantially reduce your throughput or block 839access entirely. Do not set aggressive concurrency limits to remote 840domains without coordinating with the administrators of the target 841domain. </p> 842 843<p> If necessary, dedicate and tune custom transports for selected high 844volume destinations. The "relay" transport is provided for forwarding mail 845to domains for which your server is a primary or backup MX host. These can 846make up a substantial fraction of your email traffic. Use the "relay" and 847not the "smtp" transport to send email to these domains. Using the "relay" 848transport allocates a separate delivery agent pool to these destinations 849and allows separate tuning of timeouts and concurrency limits. </p> 850 851<p> Another common cause of congestion is unwarranted flushing of the 852entire deferred queue. The deferred queue holds messages that are likely 853to fail to be delivered and are also likely to be slow to fail delivery 854(time out). As a result the most common reaction to a large deferred queue 855(flush it!) is more than likely counter-productive, and typically makes 856the congestion worse. Do not flush the deferred queue unless you expect 857that most of its content has recently become deliverable (e.g. relayhost 858back up after an outage)! </p> 859 860<p> Note that whenever the queue manager is restarted, there may 861already be messages in the active queue directory, but the "real" 862active queue in memory is empty. In order to recover the in-memory 863state, the queue manager moves all the active queue messages 864back into the incoming queue, and then uses its normal incoming 865queue scan to refill the active queue. The process of moving all 866the messages back and forth, redoing transport table (trivial-rewrite(8) 867resolve service) lookups, and re-importing the messages back into 868memory is expensive. At all costs, avoid frequent restarts of the 869queue manager (e.g. via frequent execution of "postfix reload"). </p> 870 871<h3> <a name="deferred_queue"> The "deferred" queue </a> </h3> 872 873<p> When all the deliverable recipients for a message are delivered, 874and for some recipients delivery failed for a transient reason (it 875might succeed later), the message is placed in the deferred queue. 876</p> 877 878<p> The queue manager scans the deferred queue periodically. The scan 879interval is controlled by the queue_run_delay parameter. While a deferred 880queue scan is in progress, if an incoming queue scan is also in progress 881(ideally these are brief since the incoming queue should be short), the 882queue manager alternates between looking for messages in the "incoming" 883queue and in the "deferred" queue. This "round-robin" strategy prevents 884starvation of either the incoming or the deferred queues. </p> 885 886<p> Each deferred queue scan only brings a fraction of the deferred 887queue back into the active queue for a retry. This is because each 888message in the deferred queue is assigned a "cool-off" time when 889it is deferred. This is done by time-warping the modification 890time of the queue file into the future. The queue file is not 891eligible for a retry if its modification time is not yet reached. 892</p> 893 894<p> The "cool-off" time is at least $minimal_backoff_time and at 895most $maximal_backoff_time. The next retry time is set by doubling 896the message's age in the queue, and adjusting up or down to lie 897within the limits. This means that young messages are initially 898retried more often than old messages. </p> 899 900<p> If a high volume site routinely has large deferred queues, it 901may be useful to adjust the queue_run_delay, minimal_backoff_time and 902maximal_backoff_time to provide short enough delays on first failure 903(Postfix ≥ 2.4 has a sensibly low minimal backoff time by default), 904with perhaps longer delays after multiple failures, to reduce the 905retransmission rate of old messages and thereby reduce the quantity 906of previously deferred mail in the active queue. If you want a really 907low minimal_backoff_time, you may also want to lower queue_run_delay, 908but understand that more frequent scans will increase the demand for 909disk I/O. </p> 910 911<p> One common cause of large deferred queues is failure to validate 912recipients at the SMTP input stage. Since spammers routinely launch 913dictionary attacks from unrepliable sender addresses, the bounces 914for invalid recipient addresses clog the deferred queue (and at high 915volumes proportionally clog the active queue). Recipient validation 916is strongly recommended through use of the local_recipient_maps and 917relay_recipient_maps parameters. Even when bounces drain quickly they 918inundate innocent victims of forgery with unwanted email. To avoid 919this, do not accept mail for invalid recipients. </p> 920 921<p> When a host with lots of deferred mail is down for some time, 922it is possible for the entire deferred queue to reach its retry 923time simultaneously. This can lead to a very full active queue once 924the host comes back up. The phenomenon can repeat approximately 925every maximal_backoff_time seconds if the messages are again deferred 926after a brief burst of congestion. Perhaps, a future Postfix release 927will add a random offset to the retry time (or use a combination 928of strategies) to reduce the odds of repeated complete deferred 929queue flushes. </p> 930 931<h2><a name="credits">Credits</a></h2> 932 933<p> The qshape(1) program was developed by Victor Duchovni of Morgan 934Stanley, who also wrote the initial version of this document. </p> 935 936</body> 937 938</html> 939