Following are change highlights associated with official releases. Important
bug fixes are all mentioned, but some internal enhancements are omitted here for
brevity. Much more detail can be found in the git revision history:

    https://github.com/jemalloc/jemalloc

* 4.0.2 (September 21, 2015)

  This bugfix release addresses a few bugs specific to heap profiling.

  Bug fixes:
  - Fix ixallocx_prof_sample() to never modify nor create sampled small
    allocations. xallocx() is in general incapable of moving small allocations,
    so this fix removes buggy code without loss of generality.
  - Fix irallocx_prof_sample() to always allocate large regions, even when
    alignment is non-zero.
  - Fix prof_alloc_rollback() to read tdata from thread-specific data rather
    than dereferencing a potentially invalid tctx.

* 4.0.1 (September 15, 2015)

  This is a bugfix release that is somewhat high risk due to the amount of
  refactoring required to address deep xallocx() problems. As a side effect of
  these fixes, xallocx() now tries harder to partially fulfill requests for
  optional extra space. Note that a couple of minor heap profiling
  optimizations are included, but these are better thought of as performance
  fixes that were integral to discovering most of the other bugs.

  Optimizations:
  - Avoid a chunk metadata read in arena_prof_tctx_set(), since it is in the
    fast path when heap profiling is enabled. Additionally, split a special
    case out into arena_prof_tctx_reset(), which also avoids chunk metadata
    reads.
  - Optimize irallocx_prof() to optimistically update the sampler state. The
    prior implementation appears to have been a holdover from when
    rallocx()/xallocx() functionality was combined as rallocm().

  Bug fixes:
  - Fix TLS configuration such that it is enabled by default for platforms on
    which it works correctly.
  - Fix arenas_cache_cleanup() and arena_get_hard() to handle
    allocation/deallocation within the application's thread-specific data
    cleanup functions even after arenas_cache is torn down.
  - Fix xallocx() bugs related to size+extra exceeding HUGE_MAXCLASS.
  - Fix chunk purge hook calls for in-place huge shrinking reallocation to
    specify the old chunk size rather than the new chunk size. This bug caused
    no correctness issues for the default chunk purge function, but was
    visible to custom functions set via the "arena.<i>.chunk_hooks" mallctl.
  - Fix heap profiling bugs:
    + Fix heap profiling to distinguish among otherwise identical sample sites
      with interposed resets (triggered via the "prof.reset" mallctl). This bug
      could cause data structure corruption that would most likely result in a
      segfault.
    + Fix irealloc_prof() to call prof_alloc_rollback() on OOM.
    + Make one call to prof_active_get_unlocked() per allocation event, and use
      the result throughout the relevant functions that handle an allocation
      event. Also add a missing check in prof_realloc(). These fixes protect
      allocation events against concurrent prof_active changes.
    + Fix ixallocx_prof() to pass usize_max and zero to ixallocx_prof_sample()
      in the correct order.
    + Fix prof_realloc() to call prof_free_sampled_object() after calling
      prof_malloc_sample_object(). Prior to this fix, if tctx and old_tctx were
      the same, the tctx could have been prematurely destroyed.
  - Fix portability bugs:
    + Don't bitshift by negative amounts when encoding/decoding run sizes in
      chunk header maps. This affected systems with page sizes greater than 8
      KiB.
    + Rename index_t to szind_t to avoid an existing type on Solaris.
    + Add JEMALLOC_CXX_THROW to the memalign() function prototype, in order to
      match glibc and avoid compilation errors when including both
      jemalloc/jemalloc.h and malloc.h in C++ code.
    + Don't assume that /bin/sh is appropriate when running size_classes.sh
      during configuration.
    + Consider __sparcv9 a synonym for __sparc64__ when defining LG_QUANTUM.
    + Link tests to librt if it contains clock_gettime(2).

* 4.0.0 (August 17, 2015)

  This version contains many speed and space optimizations, both minor and
  major. The major themes are generalization, unification, and simplification.
  Although many of these optimizations cause no visible behavior change, their
  cumulative effect is substantial.

  New features:
  - Normalize size class spacing to be consistent across the complete size
    range. By default there are four size classes per size doubling, but this
    is now configurable via the --with-lg-size-class-group option. Also add the
    --with-lg-page, --with-lg-page-sizes, --with-lg-quantum, and
    --with-lg-tiny-min options, which can be used to tweak page and size class
    settings. Impacts:
    + Worst case performance for incrementally growing/shrinking reallocation
      is improved because there are far fewer size classes, and therefore
      copying happens less often.
    + Internal fragmentation is limited to 20% for all but the smallest size
      classes (those less than four times the quantum). For example, (1 B +
      4 KiB) and (1 B + 4 MiB) allocations previously suffered nearly 50%
      internal fragmentation.
    + Chunk fragmentation tends to be lower because there are fewer distinct run
      sizes to pack.
  - Add support for explicit tcaches (see the sketch after this list). The
    "tcache.create", "tcache.flush", and "tcache.destroy" mallctls control
    tcache lifetime and flushing, and the MALLOCX_TCACHE(tc) and
    MALLOCX_TCACHE_NONE flags to the *allocx() API control which tcache is
    used for each operation.
  - Implement per thread heap profiling, as well as the ability to
    enable/disable heap profiling on a per thread basis. Add the "prof.reset",
    "prof.lg_sample", "thread.prof.name", "thread.prof.active",
    "opt.prof_thread_active_init", "prof.thread_active_init", and
    "thread.prof.active" mallctls.
  - Add support for per arena application-specified chunk allocators, configured
    via the "arena.<i>.chunk_hooks" mallctl.
  - Refactor huge allocation to be managed by arenas, so that arenas now
    function as general purpose independent allocators. This is important in
    the context of user-specified chunk allocators, aside from the scalability
    benefits. Related new statistics:
    + The "stats.arenas.<i>.huge.allocated", "stats.arenas.<i>.huge.nmalloc",
      "stats.arenas.<i>.huge.ndalloc", and "stats.arenas.<i>.huge.nrequests"
      mallctls provide high level per arena huge allocation statistics.
    + The "arenas.nhchunks", "arenas.hchunk.<i>.size",
      "stats.arenas.<i>.hchunks.<j>.nmalloc",
      "stats.arenas.<i>.hchunks.<j>.ndalloc",
      "stats.arenas.<i>.hchunks.<j>.nrequests", and
      "stats.arenas.<i>.hchunks.<j>.curhchunks" mallctls provide per size class
      statistics.
  - Add the 'util' column to malloc_stats_print() output, which reports the
    proportion of available regions that are currently in use for each small
    size class.
  - Add "alloc" and "free" modes for junk filling (see the "opt.junk"
    mallctl), so that it is possible to separately enable junk filling for
    allocation versus deallocation.
  - Add the jemalloc-config script, which provides information about how
    jemalloc was configured, and how to integrate it into application builds.
  - Add metadata statistics, which are accessible via the "stats.metadata",
    "stats.arenas.<i>.metadata.mapped", and
    "stats.arenas.<i>.metadata.allocated" mallctls.
  - Add the "stats.resident" mallctl, which reports the upper limit of
    physically resident memory mapped by the allocator.
  - Add per arena control over unused dirty page purging, via the
    "arenas.lg_dirty_mult", "arena.<i>.lg_dirty_mult", and
    "stats.arenas.<i>.lg_dirty_mult" mallctls.
  - Add the "prof.gdump" mallctl, which makes it possible to toggle the gdump
    feature on/off during program execution.
  - Add sdallocx(), which implements sized deallocation (also shown in the
    sketch after this list). The primary optimization over dallocx() is the
    removal of a metadata read, which often suffers an L1 cache miss.
  - Add missing header includes in jemalloc/jemalloc.h, so that applications
    only have to #include <jemalloc/jemalloc.h>.
  - Add support for additional platforms:
    + Bitrig
    + Cygwin
    + DragonFlyBSD
    + iOS
    + OpenBSD
    + OpenRISC/or1k

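  As an illustrative sketch (not part of the original release notes), the
  explicit tcache and sized deallocation interfaces might be combined roughly
  as follows; error handling is omitted, and the snippet assumes jemalloc
  4.0.0 or later with the standard <jemalloc/jemalloc.h> header:

      #include <jemalloc/jemalloc.h>

      static void tcache_example(void) {
          unsigned tc;
          size_t sz = sizeof(tc);
          /* Create an explicit tcache; its index is returned via oldp. */
          mallctl("tcache.create", &tc, &sz, NULL, 0);
          /* Allocate and deallocate through the explicit tcache. */
          void *p = mallocx(4096, MALLOCX_TCACHE(tc));
          sdallocx(p, 4096, MALLOCX_TCACHE(tc)); /* sized deallocation */
          /* Destroy the tcache once it is no longer needed. */
          mallctl("tcache.destroy", NULL, NULL, &tc, sizeof(tc));
      }
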
  Optimizations:
  - Maintain dirty runs in per arena LRUs rather than in per arena trees of
    dirty-run-containing chunks. In practice this change significantly reduces
    dirty page purging volume.
  - Integrate whole chunks into the unused dirty page purging machinery. This
    reduces the cost of repeated huge allocation/deallocation, because it
    effectively introduces a cache of chunks.
  - Split the arena chunk map into two separate arrays, in order to increase
    cache locality for the frequently accessed bits.
  - Move small run metadata out of runs, into arena chunk headers. This reduces
    run fragmentation; smaller runs reduce external fragmentation for small
    size classes; and the packed (less uniformly aligned) metadata layout
    improves CPU cache set distribution.
  - Randomly distribute large allocation base pointer alignment relative to page
    boundaries in order to more uniformly utilize CPU cache sets. This can be
    disabled via the --disable-cache-oblivious configure option, and queried via
    the "config.cache_oblivious" mallctl.
  - Micro-optimize the fast paths for the public API functions.
  - Refactor thread-specific data to reside in a single structure. This assures
    that only a single TLS read is necessary per call into the public API.
  - Implement in-place huge allocation growing and shrinking.
  - Refactor rtree (radix tree for chunk lookups) to be lock-free, and make
    additional optimizations that reduce maximum lookup depth to one or two
    levels. This resolves what was a concurrency bottleneck for per arena huge
    allocation, because a global data structure is critical for determining
    which arenas own which huge allocations.

  Incompatible changes:
  - Replace --enable-cc-silence with --disable-cc-silence to suppress spurious
    warnings by default.
  - Assure that the constness of malloc_usable_size()'s return type matches that
    of the system implementation.
  - Change the heap profile dump format to support per thread heap profiling,
    rename pprof to jeprof, and enhance it with the --thread=<n> option. As a
    result, the bundled jeprof must now be used rather than the upstream
    (gperftools) pprof.
  - Disable "opt.prof_final" by default, in order to avoid atexit(3), which can
    internally deadlock on some platforms.
  - Change the "arenas.nlruns" mallctl type from size_t to unsigned.
  - Replace the "stats.arenas.<i>.bins.<j>.allocated" mallctl with
    "stats.arenas.<i>.bins.<j>.curregs".
  - Ignore MALLOC_CONF in set{uid,gid,cap} binaries.
  - Ignore MALLOCX_ARENA(a) in dallocx(), in favor of using the
    MALLOCX_TCACHE(tc) and MALLOCX_TCACHE_NONE flags to control tcache usage.

  Removed features:
  - Remove the *allocm() API, which is superseded by the *allocx() API.
  - Remove the --enable-dss options, and make dss non-optional on all platforms
    which support sbrk(2).
  - Remove the "arenas.purge" mallctl, which was obsoleted by the
    "arena.<i>.purge" mallctl in 3.1.0.
  - Remove the unnecessary "opt.valgrind" mallctl; jemalloc automatically
    detects whether it is running inside Valgrind.
  - Remove the "stats.huge.allocated", "stats.huge.nmalloc", and
    "stats.huge.ndalloc" mallctls.
  - Remove the --enable-mremap option.
  - Remove the "stats.chunks.current", "stats.chunks.total", and
    "stats.chunks.high" mallctls.

  Bug fixes:
  - Fix the cactive statistic to decrease (rather than increase) when active
    memory decreases. This regression was first released in 3.5.0.
  - Fix OOM handling in memalign() and valloc(). A variant of this bug existed
    in all releases since 2.0.0, which introduced these functions.
  - Fix an OOM-related regression in arena_tcache_fill_small(), which could
    cause cache corruption on OOM. This regression was present in all releases
    from 2.2.0 through 3.6.0.
  - Fix size class overflow handling for malloc(), posix_memalign(), memalign(),
    calloc(), and realloc() when profiling is enabled.
  - Fix the "arena.<i>.dss" mallctl to return an error if "primary" or
    "secondary" precedence is specified, but sbrk(2) is not supported.
  - Fix fallback lg_floor() implementations to handle extremely large inputs.
  - Ensure the default purgeable zone is after the default zone on OS X.
  - Fix latent bugs in atomic_*().
  - Fix the "arena.<i>.dss" mallctl to handle read-only calls.
  - Fix tls_model configuration to enable the initial-exec model when possible.
  - Mark malloc_conf as a weak symbol so that the application can override it.
  - Correctly detect glibc's adaptive pthread mutexes.
  - Fix the --without-export configure option.

* 3.6.0 (March 31, 2014)

  This version contains a critical bug fix for a regression present in 3.5.0 and
  3.5.1.

  Bug fixes:
  - Fix a regression in arena_chunk_alloc() that caused crashes during
    small/large allocation if chunk allocation failed. In the absence of this
    bug, chunk allocation failure would result in allocation failure, e.g. NULL
    return from malloc(). This regression was introduced in 3.5.0.
  - Fix gcc intrinsics-based backtracing by specifying -fno-omit-frame-pointer
    to gcc. Note that the application (and all the libraries it links to) must
    also be compiled with this option for backtracing to be reliable.
  - Use dss allocation precedence for huge allocations as well as small/large
    allocations.
  - Fix test assertion failure message formatting. This bug did not manifest on
    x86_64 systems because of implementation subtleties in va_list.
  - Fix inconsequential test failures for hash and SFMT code.

  New features:
  - Support heap profiling on FreeBSD. This feature depends on the proc
    filesystem being mounted during heap profile dumping.

* 3.5.1 (February 25, 2014)

  This version primarily addresses minor bugs in test code.

  Bug fixes:
  - Configure Solaris/Illumos to use MADV_FREE.
  - Fix junk filling for mremap(2)-based huge reallocation. This is only
    relevant if configuring with the --enable-mremap option specified.
  - Avoid compilation failure if the 'restrict' C99 keyword is not supported by
    the compiler.
  - Add a configure test for SSE2 rather than assuming it is usable on i686
    systems. This fixes test compilation errors, especially on 32-bit Linux
    systems.
  - Fix mallctl argument size mismatches (size_t vs. uint64_t) in the stats unit
    test.
  - Fix/remove flawed alignment-related overflow tests.
  - Prevent compiler optimizations that could change backtraces in the
    prof_accum unit test.

* 3.5.0 (January 22, 2014)

  This version focuses on refactoring and automated testing, though it also
  includes some non-trivial heap profiling optimizations not mentioned below.

  New features:
  - Add the *allocx() API, which is a successor to the experimental *allocm()
    API (see the sketch after this list). The *allocx() functions are slightly
    simpler to use because they have fewer parameters, they directly return
    the results of primary interest, and mallocx()/rallocx() avoid the strict
    aliasing pitfall that allocm()/rallocm() share with posix_memalign(). Note
    that *allocm() is slated for removal in the next non-bugfix release.
  - Add support for LinuxThreads.

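  As a non-normative sketch (not part of the original release notes, and
  assuming jemalloc 3.5.0 or later with <jemalloc/jemalloc.h>), a typical
  *allocx() call sequence might look like:

      #include <jemalloc/jemalloc.h>

      static void allocx_example(void) {
          void *p = mallocx(1000, MALLOCX_ZERO); /* zeroed allocation */
          size_t usable = sallocx(p, 0);         /* query usable size */
          p = rallocx(p, 2 * usable, 0);         /* reallocate, possibly moving */
          xallocx(p, 4096, 4096, 0);             /* try to grow in place only */
          dallocx(p, 0);                         /* deallocate */
      }
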
  Bug fixes:
  - Unless heap profiling is enabled, disable floating point code and don't link
    with libm. This, in combination with e.g. EXTRA_CFLAGS=-mno-sse on x64
    systems, makes it possible to completely disable floating point register
    use. Some versions of glibc neglect to save/restore caller-saved floating
    point registers during dynamic lazy symbol loading, and the symbol loading
    code uses whatever malloc the application happens to have linked/loaded
    with; the result is potential floating point register corruption.
  - Report ENOMEM rather than EINVAL if an OOM occurs during heap profiling
    backtrace creation in imemalign(). This bug impacted posix_memalign() and
    aligned_alloc().
  - Fix a file descriptor leak in a prof_dump_maps() error path.
  - Fix prof_dump() to close the dump file descriptor for all relevant error
    paths.
  - Fix rallocm() to use the arena specified by the ALLOCM_ARENA(s) flag for
    allocation, not just deallocation.
  - Fix a data race for large allocation stats counters.
  - Fix a potential infinite loop during thread exit. This bug occurred on
    Solaris, and could affect other platforms with similar pthreads TSD
    implementations.
  - Don't junk-fill reallocations unless usable size changes. This fixes a
    violation of the *allocx()/*allocm() semantics.
  - Fix growing large reallocation to junk fill new space.
  - Fix huge deallocation to junk fill when munmap is disabled.
  - Change the default private namespace prefix from empty to je_, and change
    --with-private-namespace-prefix so that it prepends an additional prefix
    rather than replacing je_. This reduces the likelihood of applications
    which statically link jemalloc experiencing symbol name collisions.
  - Add missing private namespace mangling (relevant when
    --with-private-namespace is specified).
  - Add and use JEMALLOC_INLINE_C so that static inline functions are marked as
    static even for debug builds.
  - Add a missing mutex unlock in a malloc_init_hard() error path. In practice
    this error path is never executed.
  - Fix numerous bugs in malloc_strtoumax() error handling/reporting. These
    bugs had no impact except for malformed inputs.
  - Fix numerous bugs in malloc_snprintf(). These bugs were not exercised by
    existing calls, so they had no impact.

* 3.4.1 (October 20, 2013)

  Bug fixes:
  - Fix a race in the "arenas.extend" mallctl that could cause memory corruption
    of internal data structures and subsequent crashes.
  - Fix Valgrind integration flaws that caused Valgrind warnings about reads of
    uninitialized memory in:
    + arena chunk headers
    + internal zero-initialized data structures (relevant to tcache and prof
      code)
  - Preserve errno during the first allocation. A readlink(2) call during
    initialization fails unless /etc/malloc.conf exists, so errno was typically
    set during the first allocation prior to this fix.
  - Fix compilation warnings reported by gcc 4.8.1.

* 3.4.0 (June 2, 2013)

  This version is essentially a small bugfix release, but the addition of
  aarch64 support requires that the minor version be incremented.

  Bug fixes:
  - Fix race-triggered deadlocks in chunk_record(). These deadlocks were
    typically triggered by multiple threads concurrently deallocating huge
    objects.

  New features:
  - Add support for the aarch64 architecture.

* 3.3.1 (March 6, 2013)

  This version fixes bugs that are typically encountered only when utilizing
  custom run-time options.

  Bug fixes:
  - Fix a locking order bug that could cause deadlock during fork if heap
    profiling were enabled.
  - Fix a chunk recycling bug that could cause the allocator to lose track of
    whether a chunk was zeroed. On FreeBSD, NetBSD, and OS X, it could cause
    corruption if allocating via sbrk(2) (unlikely unless running with the
    "dss:primary" option specified). This was completely harmless on Linux
    unless using mlockall(2) (and unlikely even then, unless the
    --disable-munmap configure option or the "dss:primary" option was
    specified). This regression was introduced in 3.1.0 by the
    mlockall(2)/madvise(2) interaction fix.
  - Fix TLS-related memory corruption that could occur during thread exit if the
    thread never allocated memory. Only the quarantine and prof facilities were
    susceptible.
  - Fix two quarantine bugs:
    + Internal reallocation of the quarantined object array leaked the old
      array.
    + Reallocation failure for internal reallocation of the quarantined object
      array (very unlikely) resulted in memory corruption.
  - Fix Valgrind integration to annotate all internally allocated memory in a
    way that keeps Valgrind happy about internal data structure access.
  - Fix building for s390 systems.

* 3.3.0 (January 23, 2013)

  This version includes a few minor performance improvements in addition to the
  listed new features and bug fixes.

  New features:
  - Add clipping support to lg_chunk option processing.
  - Add the --enable-ivsalloc option.
  - Add the --without-export option.
  - Add the --disable-zone-allocator option.

  Bug fixes:
  - Fix "arenas.extend" mallctl to output the number of arenas.
  - Fix chunk_recycle() to unconditionally inform Valgrind that returned memory
    is undefined.
  - Fix build break on FreeBSD related to alloca.h.

* 3.2.0 (November 9, 2012)

  In addition to a couple of bug fixes, this version modifies page run
  allocation and dirty page purging algorithms in order to better control
  page-level virtual memory fragmentation.

  Incompatible changes:
  - Change the "opt.lg_dirty_mult" default from 5 to 3 (32:1 to 8:1).

  Bug fixes:
  - Fix dss/mmap allocation precedence code to use recyclable mmap memory only
    after primary dss allocation fails.
  - Fix deadlock in the "arenas.purge" mallctl. This regression was introduced
    in 3.1.0 by the addition of the "arena.<i>.purge" mallctl.

* 3.1.0 (October 16, 2012)

  New features:
  - Auto-detect whether running inside Valgrind, thus removing the need to
    manually specify MALLOC_CONF=valgrind:true.
  - Add the "arenas.extend" mallctl, which allows applications to create
    manually managed arenas (see the sketch after this list).
  - Add the ALLOCM_ARENA() flag for {,r,d}allocm().
  - Add the "opt.dss", "arena.<i>.dss", and "stats.arenas.<i>.dss" mallctls,
    which provide control over dss/mmap precedence.
  - Add the "arena.<i>.purge" mallctl, which obsoletes "arenas.purge".
  - Define LG_QUANTUM for hppa.

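  As a rough illustration (not part of the original release notes, and assuming
  the experimental *allocm() API is enabled, which is the default in these
  releases), a manually managed arena might be created and used like this:

      #include <jemalloc/jemalloc.h>

      static void arena_example(void) {
          unsigned arena_ind;
          size_t sz = sizeof(arena_ind);
          /* Create a new arena; its index is returned via the oldp argument. */
          mallctl("arenas.extend", &arena_ind, &sz, NULL, 0);
          /* Allocate from that arena via the experimental allocm() API. */
          void *p;
          allocm(&p, NULL, 4096, ALLOCM_ARENA(arena_ind));
          dallocm(p, 0);
      }
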
  Incompatible changes:
  - Disable tcache by default if running inside Valgrind, in order to avoid
    making unallocated objects appear reachable to Valgrind.
  - Drop const from malloc_usable_size() argument on Linux.

  Bug fixes:
  - Fix heap profiling crash if sampled object is freed via realloc(p, 0).
  - Remove const from __*_hook variable declarations, so that glibc can modify
    them during process forking.
  - Fix mlockall(2)/madvise(2) interaction.
  - Fix fork(2)-related deadlocks.
  - Fix error return value for "thread.tcache.enabled" mallctl.

* 3.0.0 (May 11, 2012)

  Although this version adds some major new features, the primary focus is on
  internal code cleanup that facilitates maintainability and portability, most
  of which is not reflected in the ChangeLog. This is the first release to
  incorporate substantial contributions from numerous other developers, and the
  result is a more broadly useful allocator (see the git revision history for
  contribution details). Note that the license has been unified, thanks to
  Facebook granting a license under the same terms as the other copyright
  holders (see COPYING).

  New features:
  - Implement Valgrind support, redzones, and quarantine.
  - Add support for additional platforms:
    + FreeBSD
    + Mac OS X Lion
    + MinGW
    + Windows (no support yet for replacing the system malloc)
  - Add support for additional architectures:
    + MIPS
    + SH4
    + Tilera
  - Add support for cross compiling.
  - Add nallocm(), which rounds a request size up to the nearest size class
    without actually allocating (see the sketch after this list).
  - Implement aligned_alloc() (blame C11).
  - Add the "thread.tcache.enabled" mallctl.
  - Add the "opt.prof_final" mallctl.
  - Update pprof (from gperftools 2.0).
  - Add the --with-mangling option.
  - Add the --disable-experimental option.
  - Add the --disable-munmap option, and make it the default on Linux.
  - Add the --enable-mremap option, which disables use of mremap(2) by default.

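  For illustration (not part of the original release notes), nallocm() answers
  how much a request would actually consume, without allocating:

      #include <jemalloc/jemalloc.h>

      static void nallocm_example(void) {
          size_t usable;
          /* Round a 100-byte request up to jemalloc's actual size class. */
          if (nallocm(&usable, 100, 0) == ALLOCM_SUCCESS) {
              /* usable now holds the rounded size, e.g. for sizing buffers. */
          }
      }
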
  Incompatible changes:
  - Enable stats by default.
  - Enable fill by default.
  - Disable lazy locking by default.
  - Rename the "tcache.flush" mallctl to "thread.tcache.flush".
  - Rename the "arenas.pagesize" mallctl to "arenas.page".
  - Change the "opt.lg_prof_sample" default from 0 to 19 (1 B to 512 KiB).
  - Change the "opt.prof_accum" default from true to false.

  Removed features:
  - Remove the swap feature, including the "config.swap", "swap.avail",
    "swap.prezeroed", "swap.nfds", and "swap.fds" mallctls.
  - Remove highruns statistics, including the
    "stats.arenas.<i>.bins.<j>.highruns" and
    "stats.arenas.<i>.lruns.<j>.highruns" mallctls.
  - As part of small size class refactoring, remove the "opt.lg_[qc]space_max",
    "arenas.cacheline", "arenas.subpage", "arenas.[tqcs]space_{min,max}", and
    "arenas.[tqcs]bins" mallctls.
  - Remove the "arenas.chunksize" mallctl.
  - Remove the "opt.lg_prof_tcmax" option.
  - Remove the "opt.lg_prof_bt_max" option.
  - Remove the "opt.lg_tcache_gc_sweep" option.
  - Remove the --disable-tiny option, including the "config.tiny" mallctl.
  - Remove the --enable-dynamic-page-shift configure option.
  - Remove the --enable-sysv configure option.

  Bug fixes:
  - Fix a statistics-related bug in the "thread.arena" mallctl that could cause
    invalid statistics and crashes.
  - Work around TLS deallocation via free() on Linux. This bug could cause
    write-after-free memory corruption.
  - Fix a potential deadlock that could occur during interval- and
    growth-triggered heap profile dumps.
  - Fix large calloc() zeroing bugs due to dropping chunk map unzeroed flags.
  - Fix chunk_alloc_dss() to stop claiming memory is zeroed. This bug could
    cause memory corruption and crashes with --enable-dss specified.
  - Fix fork-related bugs that could cause deadlock in children between fork
    and exec.
  - Fix malloc_stats_print() to honor 'b' and 'l' in the opts parameter.
  - Fix realloc(p, 0) to act like free(p).
  - Do not enforce minimum alignment in memalign().
  - Check for NULL pointer in malloc_usable_size().
  - Fix an off-by-one heap profile statistics bug that could be observed in
    interval- and growth-triggered heap profiles.
  - Fix the "epoch" mallctl to update cached stats even if the passed in epoch
    is 0.
  - Fix bin->runcur management to fix a layout policy bug. This bug did not
    affect correctness.
  - Fix a bug in choose_arena_hard() that potentially caused more arenas to be
    initialized than necessary.
  - Add missing "opt.lg_tcache_max" mallctl implementation.
  - Use glibc allocator hooks to make mixed allocator usage less likely.
  - Fix build issues for --disable-tcache.
  - Don't mangle pthread_create() when --with-private-namespace is specified.

* 2.2.5 (November 14, 2011)

  Bug fixes:
  - Fix huge_ralloc() race when using mremap(2). This is a serious bug that
    could cause memory corruption and/or crashes.
  - Fix huge_ralloc() to maintain chunk statistics.
  - Fix malloc_stats_print(..., "a") output.

* 2.2.4 (November 5, 2011)

  Bug fixes:
  - Initialize arenas_tsd before using it. This bug existed for 2.2.[0-3], as
    well as for --disable-tls builds in earlier releases.
  - Do not assume a 4 KiB page size in test/rallocm.c.

* 2.2.3 (August 31, 2011)

  This version fixes numerous bugs related to heap profiling.

  Bug fixes:
  - Fix a prof-related race condition. This bug could cause memory corruption,
    but only occurred in non-default configurations (prof_accum:false).
  - Fix off-by-one backtracing issues (make sure that prof_alloc_prep() is
    excluded from backtraces).
  - Fix a prof-related bug in realloc() (only triggered by OOM errors).
  - Fix prof-related bugs in allocm() and rallocm().
  - Fix prof_tdata_cleanup() for --disable-tls builds.
  - Fix a relative include path, to fix objdir builds.

* 2.2.2 (July 30, 2011)

  Bug fixes:
  - Fix a build error for --disable-tcache.
  - Fix assertions in arena_purge() (for real this time).
  - Add the --with-private-namespace option. This is a workaround for symbol
    conflicts that can inadvertently arise when using static libraries.

* 2.2.1 (March 30, 2011)

  Bug fixes:
  - Implement atomic operations for x86/x64. This fixes compilation failures
    for versions of gcc that are still in wide use.
  - Fix an assertion in arena_purge().

* 2.2.0 (March 22, 2011)

  This version incorporates several improvements to algorithms and data
  structures that tend to reduce fragmentation and increase speed.

  New features:
  - Add the "stats.cactive" mallctl.
  - Update pprof (from google-perftools 1.7).
  - Improve backtracing-related configuration logic, and add the
    --disable-prof-libgcc option.

  Bug fixes:
  - Change default symbol visibility from "internal" to "hidden", which
    decreases the overhead of library-internal function calls.
  - Fix symbol visibility so that it is also set on OS X.
  - Fix a build dependency regression caused by the introduction of the .pic.o
    suffix for PIC object files.
  - Add missing checks for mutex initialization failures.
  - Don't use libgcc-based backtracing except on x64, where it is known to work.
  - Fix deadlocks on OS X that were due to memory allocation in
    pthread_mutex_lock().
  - Heap profiling-specific fixes:
    + Fix memory corruption due to integer overflow in small region index
      computation, when using a small enough sample interval that profiling
      context pointers are stored in small run headers.
    + Fix a bootstrap ordering bug that only occurred with TLS disabled.
    + Fix a rallocm() rsize bug.
    + Fix error detection bugs for aligned memory allocation.

* 2.1.3 (March 14, 2011)

  Bug fixes:
  - Fix a cpp logic regression (due to the "thread.{de,}allocatedp" mallctl fix
    for OS X in 2.1.2).
  - Fix a "thread.arena" mallctl bug.
  - Fix a thread cache stats merging bug.

* 2.1.2 (March 2, 2011)

  Bug fixes:
  - Fix "thread.{de,}allocatedp" mallctl for OS X.
  - Add missing jemalloc.a to build system.

* 2.1.1 (January 31, 2011)

  Bug fixes:
  - Fix aligned huge reallocation (affected allocm()).
  - Fix the ALLOCM_LG_ALIGN macro definition.
  - Fix a heap dumping deadlock.
  - Fix a "thread.arena" mallctl bug.

* 2.1.0 (December 3, 2010)

  This version incorporates some optimizations that can't quite be considered
  bug fixes.

  New features:
  - Use Linux's mremap(2) for huge object reallocation when possible.
  - Avoid locking in mallctl*() when possible.
  - Add the "thread.[de]allocatedp" mallctls (see the sketch after this list).
  - Convert the manual page source from roff to DocBook, and generate both roff
    and HTML manuals.

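  For illustration only (not part of the original release notes), the pointer
  variants let an application fetch the counter locations once and then poll
  them without further mallctl() calls:

      #include <stdint.h>
      #include <jemalloc/jemalloc.h>

      static void counters_example(void) {
          uint64_t *allocatedp, *deallocatedp;
          size_t sz = sizeof(allocatedp);
          /* Fetch pointers to this thread's cumulative counters once... */
          mallctl("thread.allocatedp", &allocatedp, &sz, NULL, 0);
          mallctl("thread.deallocatedp", &deallocatedp, &sz, NULL, 0);
          /* ...then read them directly whenever needed. */
          uint64_t net_bytes = *allocatedp - *deallocatedp;
          (void)net_bytes;
      }
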
  Bug fixes:
  - Fix a crash due to incorrect bootstrap ordering. This only impacted
    --enable-debug --enable-dss configurations.
  - Fix a minor statistics bug for mallctl("swap.avail", ...).

* 2.0.1 (October 29, 2010)

  Bug fixes:
  - Fix a race condition in heap profiling that could cause undefined behavior
    if "opt.prof_accum" were disabled.
  - Add missing mutex unlocks for some OOM error paths in the heap profiling
    code.
  - Fix a compilation error for non-C99 builds.

* 2.0.0 (October 24, 2010)

  This version focuses on the experimental *allocm() API, and on improved
  run-time configuration/introspection. Nonetheless, numerous performance
  improvements are also included.

  New features:
  - Implement the experimental {,r,s,d}allocm() API, which provides a superset
    of the functionality available via malloc(), calloc(), posix_memalign(),
    realloc(), malloc_usable_size(), and free(). These functions can be used to
    allocate/reallocate aligned zeroed memory, ask for optional extra memory
    during reallocation, prevent object movement during reallocation, etc. (see
    the sketch after this list).
  - Replace JEMALLOC_OPTIONS/JEMALLOC_PROF_PREFIX with MALLOC_CONF, which is
    more human-readable, and more flexible. For example:
      JEMALLOC_OPTIONS=AJP
    is now:
      MALLOC_CONF=abort:true,fill:true,stats_print:true
  - Port to Apple OS X. Sponsored by Mozilla.
  - Make it possible for the application to control thread-->arena mappings via
    the "thread.arena" mallctl.
  - Add compile-time support for all TLS-related functionality via pthreads TSD.
    This is mainly of interest for OS X, which does not support TLS, but has a
    TSD implementation with similar performance.
  - Override memalign() and valloc() if they are provided by the system.
  - Add the "arenas.purge" mallctl, which can be used to synchronously purge all
    dirty unused pages.
  - Make cumulative heap profiling data optional, so that it is possible to
    limit the amount of memory consumed by heap profiling data structures.
  - Add per thread allocation counters that can be accessed via the
    "thread.allocated" and "thread.deallocated" mallctls.

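  To illustrate (this sketch is not part of the original release notes, and
  error handling is elided), the experimental API might be exercised as
  follows:

      #include <jemalloc/jemalloc.h>

      static void allocm_example(void) {
          void *p;
          size_t rsize;
          /* 4 KiB-aligned, zeroed allocation; rsize gets the usable size. */
          allocm(&p, &rsize, 1000, ALLOCM_ALIGN(4096) | ALLOCM_ZERO);
          /* Resize to at least 2000 bytes (ideally 2000 + 1000 extra), but
             fail rather than move the object. */
          rallocm(&p, &rsize, 2000, 1000, ALLOCM_NO_MOVE);
          dallocm(p, 0);
      }
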
  Incompatible changes:
  - Remove JEMALLOC_OPTIONS and malloc_options (see MALLOC_CONF above).
  - Increase default backtrace depth from 4 to 128 for heap profiling.
  - Disable interval-based profile dumps by default.

  Bug fixes:
  - Remove bad assertions in fork handler functions. These assertions could
    cause aborts for some combinations of configure settings.
  - Fix strerror_r() usage to deal with non-standard semantics in GNU libc.
  - Fix leak context reporting. This bug tended to cause the number of contexts
    to be underreported (though the reported number of objects and bytes were
    correct).
  - Fix a realloc() bug for large in-place growing reallocation. This bug could
    cause memory corruption, but it was hard to trigger.
  - Fix an allocation bug for small allocations that could be triggered if
    multiple threads raced to create a new run of backing pages.
  - Enhance the heap profiler to trigger samples based on usable size, rather
    than request size.
  - Fix a heap profiling bug due to sometimes losing track of requested object
    size for sampled objects.

* 1.0.3 (August 12, 2010)

  Bug fixes:
  - Fix the libunwind-based implementation of stack backtracing (used for heap
    profiling). This bug could cause zero-length backtraces to be reported.
  - Add a missing mutex unlock in library initialization code. If multiple
    threads raced to initialize malloc, some of them could end up permanently
    blocked.

* 1.0.2 (May 11, 2010)

  Bug fixes:
  - Fix junk filling of large objects, which could cause memory corruption.
  - Add MAP_NORESERVE support for chunk mapping, because otherwise virtual
    memory limits could cause swap file configuration to fail. Contributed by
    Jordan DeLong.

* 1.0.1 (April 14, 2010)

  Bug fixes:
  - Fix compilation when --enable-fill is specified.
  - Fix threads-related profiling bugs that affected accuracy and caused memory
    to be leaked during thread exit.
  - Fix dirty page purging race conditions that could cause crashes.
  - Fix crash in tcache flushing code during thread destruction.

* 1.0.0 (April 11, 2010)

  This release focuses on speed and run-time introspection. Numerous
  algorithmic improvements make this release substantially faster than its
  predecessors.

  New features:
  - Implement autoconf-based configuration system.
  - Add mallctl*(), for the purposes of introspection and run-time
    configuration.
  - Make it possible for the application to manually flush a thread's cache, via
    the "tcache.flush" mallctl.
  - Base maximum dirty page count on proportion of active memory.
  - Compute various additional run-time statistics, including per size class
    statistics for large objects.
  - Expose malloc_stats_print(), which can be called repeatedly by the
    application (see the sketch after this list).
  - Simplify the malloc_message() signature to only take one string argument,
    and incorporate an opaque data pointer argument for use by the application
    in combination with malloc_stats_print().
  - Add support for allocation backed by one or more swap files, and allow the
    application to disable over-commit if swap files are in use.
  - Implement allocation profiling and leak checking.

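  For example (an illustrative sketch, not part of the original release notes;
  the callback name and FILE-based sink are arbitrary), statistics can be
  written either to the default output or through an application-supplied
  callback:

      #include <stdio.h>
      #include <jemalloc/jemalloc.h>

      /* Hypothetical callback forwarding allocator output to a FILE. */
      static void write_cb(void *opaque, const char *s) {
          fputs(s, (FILE *)opaque);
      }

      static void stats_example(FILE *log) {
          malloc_stats_print(NULL, NULL, NULL);     /* default output sink */
          malloc_stats_print(write_cb, log, NULL);  /* custom output sink */
      }
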
  Removed features:
  - Remove the dynamic arena rebalancing code, since thread-specific caching
    reduces its utility.

  Bug fixes:
  - Modify chunk allocation to work when address space layout randomization
    (ASLR) is in use.
  - Fix thread cleanup bugs related to TLS destruction.
  - Handle 0-size allocation requests in posix_memalign().
  - Fix a chunk leak. The leaked chunks were never touched, so this impacted
    virtual memory usage, but not physical memory usage.

* linux_2008082[78]a (August 27/28, 2008)

  These snapshot releases are the simple result of incorporating Linux-specific
  support into the FreeBSD malloc sources.

--------------------------------------------------------------------------------
vim:filetype=text:textwidth=80
77* 4.0.0 (August 17, 2015)
78
79 This version contains many speed and space optimizations, both minor and
80 major. The major themes are generalization, unification, and simplification.
81 Although many of these optimizations cause no visible behavior change, their
82 cumulative effect is substantial.
83
84 New features:
85 - Normalize size class spacing to be consistent across the complete size
86 range. By default there are four size classes per size doubling, but this
87 is now configurable via the --with-lg-size-class-group option. Also add the
88 --with-lg-page, --with-lg-page-sizes, --with-lg-quantum, and
89 --with-lg-tiny-min options, which can be used to tweak page and size class
90 settings. Impacts:
91 + Worst case performance for incrementally growing/shrinking reallocation
92 is improved because there are far fewer size classes, and therefore
93 copying happens less often.
94 + Internal fragmentation is limited to 20% for all but the smallest size
95 classes (those less than four times the quantum). (1B + 4 KiB)
96 and (1B + 4 MiB) previously suffered nearly 50% internal fragmentation.
97 + Chunk fragmentation tends to be lower because there are fewer distinct run
98 sizes to pack.
99 - Add support for explicit tcaches. The "tcache.create", "tcache.flush", and
100 "tcache.destroy" mallctls control tcache lifetime and flushing, and the
101 MALLOCX_TCACHE(tc) and MALLOCX_TCACHE_NONE flags to the *allocx() API
102 control which tcache is used for each operation.
103 - Implement per thread heap profiling, as well as the ability to
104 enable/disable heap profiling on a per thread basis. Add the "prof.reset",
105 "prof.lg_sample", "thread.prof.name", "thread.prof.active",
106 "opt.prof_thread_active_init", "prof.thread_active_init", and
107 "thread.prof.active" mallctls.
108 - Add support for per arena application-specified chunk allocators, configured
109 via the "arena.<i>.chunk_hooks" mallctl.
110 - Refactor huge allocation to be managed by arenas, so that arenas now
111 function as general purpose independent allocators. This is important in
112 the context of user-specified chunk allocators, aside from the scalability
113 benefits. Related new statistics:
114 + The "stats.arenas.<i>.huge.allocated", "stats.arenas.<i>.huge.nmalloc",
115 "stats.arenas.<i>.huge.ndalloc", and "stats.arenas.<i>.huge.nrequests"
116 mallctls provide high level per arena huge allocation statistics.
117 + The "arenas.nhchunks", "arenas.hchunk.<i>.size",
118 "stats.arenas.<i>.hchunks.<j>.nmalloc",
119 "stats.arenas.<i>.hchunks.<j>.ndalloc",
120 "stats.arenas.<i>.hchunks.<j>.nrequests", and
121 "stats.arenas.<i>.hchunks.<j>.curhchunks" mallctls provide per size class
122 statistics.
123 - Add the 'util' column to malloc_stats_print() output, which reports the
124 proportion of available regions that are currently in use for each small
125 size class.
126 - Add "alloc" and "free" modes for for junk filling (see the "opt.junk"
127 mallctl), so that it is possible to separately enable junk filling for
128 allocation versus deallocation.
129 - Add the jemalloc-config script, which provides information about how
130 jemalloc was configured, and how to integrate it into application builds.
131 - Add metadata statistics, which are accessible via the "stats.metadata",
132 "stats.arenas.<i>.metadata.mapped", and
133 "stats.arenas.<i>.metadata.allocated" mallctls.
134 - Add the "stats.resident" mallctl, which reports the upper limit of
135 physically resident memory mapped by the allocator.
136 - Add per arena control over unused dirty page purging, via the
137 "arenas.lg_dirty_mult", "arena.<i>.lg_dirty_mult", and
138 "stats.arenas.<i>.lg_dirty_mult" mallctls.
139 - Add the "prof.gdump" mallctl, which makes it possible to toggle the gdump
140 feature on/off during program execution.
141 - Add sdallocx(), which implements sized deallocation. The primary
142 optimization over dallocx() is the removal of a metadata read, which often
143 suffers an L1 cache miss.
144 - Add missing header includes in jemalloc/jemalloc.h, so that applications
145 only have to #include <jemalloc/jemalloc.h>.
146 - Add support for additional platforms:
147 + Bitrig
148 + Cygwin
149 + DragonFlyBSD
150 + iOS
151 + OpenBSD
152 + OpenRISC/or1k
153
154 Optimizations:
155 - Maintain dirty runs in per arena LRUs rather than in per arena trees of
156 dirty-run-containing chunks. In practice this change significantly reduces
157 dirty page purging volume.
158 - Integrate whole chunks into the unused dirty page purging machinery. This
159 reduces the cost of repeated huge allocation/deallocation, because it
160 effectively introduces a cache of chunks.
161 - Split the arena chunk map into two separate arrays, in order to increase
162 cache locality for the frequently accessed bits.
163 - Move small run metadata out of runs, into arena chunk headers. This reduces
164 run fragmentation, smaller runs reduce external fragmentation for small size
165 classes, and packed (less uniformly aligned) metadata layout improves CPU
166 cache set distribution.
167 - Randomly distribute large allocation base pointer alignment relative to page
168 boundaries in order to more uniformly utilize CPU cache sets. This can be
169 disabled via the --disable-cache-oblivious configure option, and queried via
170 the "config.cache_oblivious" mallctl.
171 - Micro-optimize the fast paths for the public API functions.
172 - Refactor thread-specific data to reside in a single structure. This assures
173 that only a single TLS read is necessary per call into the public API.
174 - Implement in-place huge allocation growing and shrinking.
175 - Refactor rtree (radix tree for chunk lookups) to be lock-free, and make
176 additional optimizations that reduce maximum lookup depth to one or two
177 levels. This resolves what was a concurrency bottleneck for per arena huge
178 allocation, because a global data structure is critical for determining
179 which arenas own which huge allocations.
180
181 Incompatible changes:
182 - Replace --enable-cc-silence with --disable-cc-silence to suppress spurious
183 warnings by default.
184 - Assure that the constness of malloc_usable_size()'s return type matches that
185 of the system implementation.
186 - Change the heap profile dump format to support per thread heap profiling,
187 rename pprof to jeprof, and enhance it with the --thread=<n> option. As a
188 result, the bundled jeprof must now be used rather than the upstream
189 (gperftools) pprof.
190 - Disable "opt.prof_final" by default, in order to avoid atexit(3), which can
191 internally deadlock on some platforms.
192 - Change the "arenas.nlruns" mallctl type from size_t to unsigned.
193 - Replace the "stats.arenas.<i>.bins.<j>.allocated" mallctl with
194 "stats.arenas.<i>.bins.<j>.curregs".
195 - Ignore MALLOC_CONF in set{uid,gid,cap} binaries.
196 - Ignore MALLOCX_ARENA(a) in dallocx(), in favor of using the
197 MALLOCX_TCACHE(tc) and MALLOCX_TCACHE_NONE flags to control tcache usage.
198
199 Removed features:
200 - Remove the *allocm() API, which is superseded by the *allocx() API.
201 - Remove the --enable-dss options, and make dss non-optional on all platforms
202 which support sbrk(2).
203 - Remove the "arenas.purge" mallctl, which was obsoleted by the
204 "arena.<i>.purge" mallctl in 3.1.0.
205 - Remove the unnecessary "opt.valgrind" mallctl; jemalloc automatically
206 detects whether it is running inside Valgrind.
207 - Remove the "stats.huge.allocated", "stats.huge.nmalloc", and
208 "stats.huge.ndalloc" mallctls.
209 - Remove the --enable-mremap option.
210 - Remove the "stats.chunks.current", "stats.chunks.total", and
211 "stats.chunks.high" mallctls.
212
213 Bug fixes:
214 - Fix the cactive statistic to decrease (rather than increase) when active
215 memory decreases. This regression was first released in 3.5.0.
216 - Fix OOM handling in memalign() and valloc(). A variant of this bug existed
217 in all releases since 2.0.0, which introduced these functions.
218 - Fix an OOM-related regression in arena_tcache_fill_small(), which could
219 cause cache corruption on OOM. This regression was present in all releases
220 from 2.2.0 through 3.6.0.
221 - Fix size class overflow handling for malloc(), posix_memalign(), memalign(),
222 calloc(), and realloc() when profiling is enabled.
223 - Fix the "arena.<i>.dss" mallctl to return an error if "primary" or
224 "secondary" precedence is specified, but sbrk(2) is not supported.
225 - Fix fallback lg_floor() implementations to handle extremely large inputs.
226 - Ensure the default purgeable zone is after the default zone on OS X.
227 - Fix latent bugs in atomic_*().
228 - Fix the "arena.<i>.dss" mallctl to handle read-only calls.
229 - Fix tls_model configuration to enable the initial-exec model when possible.
230 - Mark malloc_conf as a weak symbol so that the application can override it.
231 - Correctly detect glibc's adaptive pthread mutexes.
232 - Fix the --without-export configure option.
233
234* 3.6.0 (March 31, 2014)
235
236 This version contains a critical bug fix for a regression present in 3.5.0 and
237 3.5.1.
238
239 Bug fixes:
240 - Fix a regression in arena_chunk_alloc() that caused crashes during
241 small/large allocation if chunk allocation failed. In the absence of this
242 bug, chunk allocation failure would result in allocation failure, e.g. NULL
243 return from malloc(). This regression was introduced in 3.5.0.
244 - Fix backtracing for gcc intrinsics-based backtracing by specifying
245 -fno-omit-frame-pointer to gcc. Note that the application (and all the
246 libraries it links to) must also be compiled with this option for
247 backtracing to be reliable.
248 - Use dss allocation precedence for huge allocations as well as small/large
249 allocations.
250 - Fix test assertion failure message formatting. This bug did not manifest on
251 x86_64 systems because of implementation subtleties in va_list.
252 - Fix inconsequential test failures for hash and SFMT code.
253
254 New features:
255 - Support heap profiling on FreeBSD. This feature depends on the proc
256 filesystem being mounted during heap profile dumping.
257
258* 3.5.1 (February 25, 2014)
259
260 This version primarily addresses minor bugs in test code.
261
262 Bug fixes:
263 - Configure Solaris/Illumos to use MADV_FREE.
264 - Fix junk filling for mremap(2)-based huge reallocation. This is only
265 relevant if configuring with the --enable-mremap option specified.
266 - Avoid compilation failure if 'restrict' C99 keyword is not supported by the
267 compiler.
268 - Add a configure test for SSE2 rather than assuming it is usable on i686
269 systems. This fixes test compilation errors, especially on 32-bit Linux
270 systems.
271 - Fix mallctl argument size mismatches (size_t vs. uint64_t) in the stats unit
272 test.
273 - Fix/remove flawed alignment-related overflow tests.
274 - Prevent compiler optimizations that could change backtraces in the
275 prof_accum unit test.
276
277* 3.5.0 (January 22, 2014)
278
279 This version focuses on refactoring and automated testing, though it also
280 includes some non-trivial heap profiling optimizations not mentioned below.
281
282 New features:
283 - Add the *allocx() API, which is a successor to the experimental *allocm()
284 API. The *allocx() functions are slightly simpler to use because they have
285 fewer parameters, they directly return the results of primary interest, and
286 mallocx()/rallocx() avoid the strict aliasing pitfall that
287 allocm()/rallocm() share with posix_memalign(). Note that *allocm() is
288 slated for removal in the next non-bugfix release.
289 - Add support for LinuxThreads.
290
291 Bug fixes:
292 - Unless heap profiling is enabled, disable floating point code and don't link
293 with libm. This, in combination with e.g. EXTRA_CFLAGS=-mno-sse on x64
294 systems, makes it possible to completely disable floating point register
295 use. Some versions of glibc neglect to save/restore caller-saved floating
296 point registers during dynamic lazy symbol loading, and the symbol loading
297 code uses whatever malloc the application happens to have linked/loaded
298 with, the result being potential floating point register corruption.
299 - Report ENOMEM rather than EINVAL if an OOM occurs during heap profiling
300 backtrace creation in imemalign(). This bug impacted posix_memalign() and
301 aligned_alloc().
302 - Fix a file descriptor leak in a prof_dump_maps() error path.
303 - Fix prof_dump() to close the dump file descriptor for all relevant error
304 paths.
305 - Fix rallocm() to use the arena specified by the ALLOCM_ARENA(s) flag for
306 allocation, not just deallocation.
307 - Fix a data race for large allocation stats counters.
308 - Fix a potential infinite loop during thread exit. This bug occurred on
309 Solaris, and could affect other platforms with similar pthreads TSD
310 implementations.
311 - Don't junk-fill reallocations unless usable size changes. This fixes a
312 violation of the *allocx()/*allocm() semantics.
313 - Fix growing large reallocation to junk fill new space.
314 - Fix huge deallocation to junk fill when munmap is disabled.
315 - Change the default private namespace prefix from empty to je_, and change
316 --with-private-namespace-prefix so that it prepends an additional prefix
317 rather than replacing je_. This reduces the likelihood of applications
318 which statically link jemalloc experiencing symbol name collisions.
319 - Add missing private namespace mangling (relevant when
320 --with-private-namespace is specified).
321 - Add and use JEMALLOC_INLINE_C so that static inline functions are marked as
322 static even for debug builds.
323 - Add a missing mutex unlock in a malloc_init_hard() error path. In practice
324 this error path is never executed.
325 - Fix numerous bugs in malloc_strotumax() error handling/reporting. These
326 bugs had no impact except for malformed inputs.
327 - Fix numerous bugs in malloc_snprintf(). These bugs were not exercised by
328 existing calls, so they had no impact.
329
330* 3.4.1 (October 20, 2013)
331
332 Bug fixes:
333 - Fix a race in the "arenas.extend" mallctl that could cause memory corruption
334 of internal data structures and subsequent crashes.
335 - Fix Valgrind integration flaws that caused Valgrind warnings about reads of
336 uninitialized memory in:
337 + arena chunk headers
338 + internal zero-initialized data structures (relevant to tcache and prof
339 code)
340 - Preserve errno during the first allocation. A readlink(2) call during
341 initialization fails unless /etc/malloc.conf exists, so errno was typically
342 set during the first allocation prior to this fix.
343 - Fix compilation warnings reported by gcc 4.8.1.
344
345* 3.4.0 (June 2, 2013)
346
347 This version is essentially a small bugfix release, but the addition of
348 aarch64 support requires that the minor version be incremented.
349
350 Bug fixes:
351 - Fix race-triggered deadlocks in chunk_record(). These deadlocks were
352 typically triggered by multiple threads concurrently deallocating huge
353 objects.
354
355 New features:
356 - Add support for the aarch64 architecture.
357
358* 3.3.1 (March 6, 2013)
359
360 This version fixes bugs that are typically encountered only when utilizing
361 custom run-time options.
362
363 Bug fixes:
364 - Fix a locking order bug that could cause deadlock during fork if heap
365 profiling were enabled.
366 - Fix a chunk recycling bug that could cause the allocator to lose track of
367 whether a chunk was zeroed. On FreeBSD, NetBSD, and OS X, it could cause
368 corruption if allocating via sbrk(2) (unlikely unless running with the
369 "dss:primary" option specified). This was completely harmless on Linux
370 unless using mlockall(2) (and unlikely even then, unless the
371 --disable-munmap configure option or the "dss:primary" option was
372 specified). This regression was introduced in 3.1.0 by the
373 mlockall(2)/madvise(2) interaction fix.
374 - Fix TLS-related memory corruption that could occur during thread exit if the
375 thread never allocated memory. Only the quarantine and prof facilities were
376 susceptible.
377 - Fix two quarantine bugs:
378 + Internal reallocation of the quarantined object array leaked the old
379 array.
380 + Reallocation failure for internal reallocation of the quarantined object
381 array (very unlikely) resulted in memory corruption.
382 - Fix Valgrind integration to annotate all internally allocated memory in a
383 way that keeps Valgrind happy about internal data structure access.
384 - Fix building for s390 systems.
385
386* 3.3.0 (January 23, 2013)
387
388 This version includes a few minor performance improvements in addition to the
389 listed new features and bug fixes.
390
391 New features:
392 - Add clipping support to lg_chunk option processing.
393 - Add the --enable-ivsalloc option.
394 - Add the --without-export option.
395 - Add the --disable-zone-allocator option.
396
397 Bug fixes:
398 - Fix "arenas.extend" mallctl to output the number of arenas.
399 - Fix chunk_recycle() to unconditionally inform Valgrind that returned memory
400 is undefined.
401 - Fix build break on FreeBSD related to alloca.h.
402
403* 3.2.0 (November 9, 2012)
404
405 In addition to a couple of bug fixes, this version modifies page run
406 allocation and dirty page purging algorithms in order to better control
407 page-level virtual memory fragmentation.
408
409 Incompatible changes:
410 - Change the "opt.lg_dirty_mult" default from 5 to 3 (32:1 to 8:1).
411
412 Bug fixes:
413 - Fix dss/mmap allocation precedence code to use recyclable mmap memory only
414 after primary dss allocation fails.
415 - Fix deadlock in the "arenas.purge" mallctl. This regression was introduced
416 in 3.1.0 by the addition of the "arena.<i>.purge" mallctl.
417
418* 3.1.0 (October 16, 2012)
419
420 New features:
421 - Auto-detect whether running inside Valgrind, thus removing the need to
422 manually specify MALLOC_CONF=valgrind:true.
423 - Add the "arenas.extend" mallctl, which allows applications to create
424 manually managed arenas.
425 - Add the ALLOCM_ARENA() flag for {,r,d}allocm().
426 - Add the "opt.dss", "arena.<i>.dss", and "stats.arenas.<i>.dss" mallctls,
427 which provide control over dss/mmap precedence.
428 - Add the "arena.<i>.purge" mallctl, which obsoletes "arenas.purge".
429 - Define LG_QUANTUM for hppa.

  Incompatible changes:
  - Disable tcache by default if running inside Valgrind, in order to avoid
    making unallocated objects appear reachable to Valgrind.
  - Drop const from malloc_usable_size() argument on Linux.

  Bug fixes:
  - Fix heap profiling crash if sampled object is freed via realloc(p, 0).
  - Remove const from __*_hook variable declarations, so that glibc can modify
    them during process forking.
  - Fix mlockall(2)/madvise(2) interaction.
  - Fix fork(2)-related deadlocks.
  - Fix error return value for "thread.tcache.enabled" mallctl.

* 3.0.0 (May 11, 2012)

  Although this version adds some major new features, the primary focus is on
  internal code cleanup that facilitates maintainability and portability, most
  of which is not reflected in the ChangeLog. This is the first release to
  incorporate substantial contributions from numerous other developers, and the
  result is a more broadly useful allocator (see the git revision history for
  contribution details). Note that the license has been unified, thanks to
  Facebook granting a license under the same terms as the other copyright
  holders (see COPYING).

  New features:
  - Implement Valgrind support, redzones, and quarantine.
  - Add support for additional platforms:
    + FreeBSD
    + Mac OS X Lion
    + MinGW
    + Windows (no support yet for replacing the system malloc)
  - Add support for additional architectures:
    + MIPS
    + SH4
    + Tilera
  - Add support for cross compiling.
  - Add nallocm(), which rounds a request size up to the nearest size class
    without actually allocating (see the sketch after this list).
  - Implement aligned_alloc() (blame C11).
  - Add the "thread.tcache.enabled" mallctl.
  - Add the "opt.prof_final" mallctl.
  - Update pprof (from gperftools 2.0).
  - Add the --with-mangling option.
  - Add the --disable-experimental option.
  - Add the --disable-munmap option, and make it the default on Linux.
  - Add the --enable-mremap option, which disables use of mremap(2) by default.
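
  As a sketch of the new nallocm() entry point (assuming the experimental API
  is enabled, which was the default), the rounded-up size for a request can be
  queried without allocating anything:

    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    int main(void) {
        size_t rsize;

        /* Report which size class a 100-byte request would occupy, without
         * performing any allocation. */
        if (nallocm(&rsize, 100, 0) == ALLOCM_SUCCESS)
            printf("a 100-byte request rounds up to %zu bytes\n", rsize);
        return 0;
    }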

  Incompatible changes:
  - Enable stats by default.
  - Enable fill by default.
  - Disable lazy locking by default.
  - Rename the "tcache.flush" mallctl to "thread.tcache.flush".
  - Rename the "arenas.pagesize" mallctl to "arenas.page".
  - Change the "opt.lg_prof_sample" default from 0 to 19 (1 B to 512 KiB).
  - Change the "opt.prof_accum" default from true to false.

  Removed features:
  - Remove the swap feature, including the "config.swap", "swap.avail",
    "swap.prezeroed", "swap.nfds", and "swap.fds" mallctls.
  - Remove highruns statistics, including the
    "stats.arenas.<i>.bins.<j>.highruns" and
    "stats.arenas.<i>.lruns.<j>.highruns" mallctls.
  - As part of small size class refactoring, remove the "opt.lg_[qc]space_max",
    "arenas.cacheline", "arenas.subpage", "arenas.[tqcs]space_{min,max}", and
    "arenas.[tqcs]bins" mallctls.
  - Remove the "arenas.chunksize" mallctl.
  - Remove the "opt.lg_prof_tcmax" option.
  - Remove the "opt.lg_prof_bt_max" option.
  - Remove the "opt.lg_tcache_gc_sweep" option.
  - Remove the --disable-tiny option, including the "config.tiny" mallctl.
  - Remove the --enable-dynamic-page-shift configure option.
  - Remove the --enable-sysv configure option.

  Bug fixes:
  - Fix a statistics-related bug in the "thread.arena" mallctl that could cause
    invalid statistics and crashes.
  - Work around TLS deallocation via free() on Linux. This bug could cause
    write-after-free memory corruption.
  - Fix a potential deadlock that could occur during interval- and
    growth-triggered heap profile dumps.
  - Fix large calloc() zeroing bugs due to dropping chunk map unzeroed flags.
  - Fix chunk_alloc_dss() to stop claiming memory is zeroed. This bug could
    cause memory corruption and crashes with --enable-dss specified.
  - Fix fork-related bugs that could cause deadlock in children between fork
    and exec.
  - Fix malloc_stats_print() to honor 'b' and 'l' in the opts parameter.
  - Fix realloc(p, 0) to act like free(p).
  - Do not enforce minimum alignment in memalign().
  - Check for NULL pointer in malloc_usable_size().
  - Fix an off-by-one heap profile statistics bug that could be observed in
    interval- and growth-triggered heap profiles.
  - Fix the "epoch" mallctl to update cached stats even if the passed in epoch
    is 0.
  - Fix bin->runcur management to fix a layout policy bug. This bug did not
    affect correctness.
  - Fix a bug in choose_arena_hard() that potentially caused more arenas to be
    initialized than necessary.
  - Add missing "opt.lg_tcache_max" mallctl implementation.
  - Use glibc allocator hooks to make mixed allocator usage less likely.
  - Fix build issues for --disable-tcache.
  - Don't mangle pthread_create() when --with-private-namespace is specified.

* 2.2.5 (November 14, 2011)

  Bug fixes:
  - Fix huge_ralloc() race when using mremap(2). This is a serious bug that
    could cause memory corruption and/or crashes.
  - Fix huge_ralloc() to maintain chunk statistics.
  - Fix malloc_stats_print(..., "a") output.

* 2.2.4 (November 5, 2011)

  Bug fixes:
  - Initialize arenas_tsd before using it. This bug existed for 2.2.[0-3], as
    well as for --disable-tls builds in earlier releases.
  - Do not assume a 4 KiB page size in test/rallocm.c.

* 2.2.3 (August 31, 2011)

  This version fixes numerous bugs related to heap profiling.

  Bug fixes:
  - Fix a prof-related race condition. This bug could cause memory corruption,
    but only occurred in non-default configurations (prof_accum:false).
  - Fix off-by-one backtracing issues (make sure that prof_alloc_prep() is
    excluded from backtraces).
  - Fix a prof-related bug in realloc() (only triggered by OOM errors).
  - Fix prof-related bugs in allocm() and rallocm().
  - Fix prof_tdata_cleanup() for --disable-tls builds.
  - Fix a relative include path, to fix objdir builds.

* 2.2.2 (July 30, 2011)

  Bug fixes:
  - Fix a build error for --disable-tcache.
  - Fix assertions in arena_purge() (for real this time).
  - Add the --with-private-namespace option. This is a workaround for symbol
    conflicts that can inadvertently arise when using static libraries.

* 2.2.1 (March 30, 2011)

  Bug fixes:
  - Implement atomic operations for x86/x64. This fixes compilation failures
    for versions of gcc that are still in wide use.
  - Fix an assertion in arena_purge().

* 2.2.0 (March 22, 2011)

  This version incorporates several improvements to algorithms and data
  structures that tend to reduce fragmentation and increase speed.

  New features:
  - Add the "stats.cactive" mallctl.
  - Update pprof (from google-perftools 1.7).
  - Improve backtracing-related configuration logic, and add the
    --disable-prof-libgcc option.

  Bug fixes:
  - Change default symbol visibility from "internal" to "hidden", which
    decreases the overhead of library-internal function calls.
  - Fix symbol visibility so that it is also set on OS X.
  - Fix a build dependency regression caused by the introduction of the .pic.o
    suffix for PIC object files.
  - Add missing checks for mutex initialization failures.
  - Don't use libgcc-based backtracing except on x64, where it is known to
    work.
  - Fix deadlocks on OS X that were due to memory allocation in
    pthread_mutex_lock().
  - Heap profiling-specific fixes:
    + Fix memory corruption due to integer overflow in small region index
      computation, when using a small enough sample interval that profiling
      context pointers are stored in small run headers.
    + Fix a bootstrap ordering bug that only occurred with TLS disabled.
    + Fix a rallocm() rsize bug.
    + Fix error detection bugs for aligned memory allocation.

* 2.1.3 (March 14, 2011)

  Bug fixes:
  - Fix a cpp logic regression (due to the "thread.{de,}allocatedp" mallctl fix
    for OS X in 2.1.2).
  - Fix a "thread.arena" mallctl bug.
  - Fix a thread cache stats merging bug.

* 2.1.2 (March 2, 2011)

  Bug fixes:
  - Fix "thread.{de,}allocatedp" mallctl for OS X.
  - Add missing jemalloc.a to build system.

* 2.1.1 (January 31, 2011)

  Bug fixes:
  - Fix aligned huge reallocation (affected allocm()).
  - Fix the ALLOCM_LG_ALIGN macro definition.
  - Fix a heap dumping deadlock.
  - Fix a "thread.arena" mallctl bug.

* 2.1.0 (December 3, 2010)

  This version incorporates some optimizations that can't quite be considered
  bug fixes.

  New features:
  - Use Linux's mremap(2) for huge object reallocation when possible.
  - Avoid locking in mallctl*() when possible.
  - Add the "thread.[de]allocatedp" mallctls.
  - Convert the manual page source from roff to DocBook, and generate both roff
    and HTML manuals.

  Bug fixes:
  - Fix a crash due to incorrect bootstrap ordering. This only impacted
    --enable-debug --enable-dss configurations.
  - Fix a minor statistics bug for mallctl("swap.avail", ...).

* 2.0.1 (October 29, 2010)

  Bug fixes:
  - Fix a race condition in heap profiling that could cause undefined behavior
    if "opt.prof_accum" were disabled.
  - Add missing mutex unlocks for some OOM error paths in the heap profiling
    code.
  - Fix a compilation error for non-C99 builds.

* 2.0.0 (October 24, 2010)

  This version focuses on the experimental *allocm() API, and on improved
  run-time configuration/introspection. Nonetheless, numerous performance
  improvements are also included.

  New features:
  - Implement the experimental {,r,s,d}allocm() API, which provides a superset
    of the functionality available via malloc(), calloc(), posix_memalign(),
    realloc(), malloc_usable_size(), and free(). These functions can be used to
    allocate/reallocate aligned zeroed memory, ask for optional extra memory
    during reallocation, prevent object movement during reallocation, etc. (see
    the sketch after this list).
  - Replace JEMALLOC_OPTIONS/JEMALLOC_PROF_PREFIX with MALLOC_CONF, which is
    more human-readable and more flexible. For example:
      JEMALLOC_OPTIONS=AJP
    is now:
      MALLOC_CONF=abort:true,fill:true,stats_print:true
  - Port to Apple OS X. Sponsored by Mozilla.
  - Make it possible for the application to control thread-->arena mappings via
    the "thread.arena" mallctl.
  - Add compile-time support for all TLS-related functionality via pthreads
    TSD. This is mainly of interest for OS X, which does not support TLS, but
    has a TSD implementation with similar performance.
  - Override memalign() and valloc() if they are provided by the system.
  - Add the "arenas.purge" mallctl, which can be used to synchronously purge
    all dirty unused pages.
  - Make cumulative heap profiling data optional, so that it is possible to
    limit the amount of memory consumed by heap profiling data structures.
  - Add per thread allocation counters that can be accessed via the
    "thread.allocated" and "thread.deallocated" mallctls.

  Incompatible changes:
  - Remove JEMALLOC_OPTIONS and malloc_options (see MALLOC_CONF above).
  - Increase default backtrace depth from 4 to 128 for heap profiling.
  - Disable interval-based profile dumps by default.

  Bug fixes:
  - Remove bad assertions in fork handler functions. These assertions could
    cause aborts for some combinations of configure settings.
  - Fix strerror_r() usage to deal with non-standard semantics in GNU libc.
  - Fix leak context reporting. This bug tended to cause the number of contexts
    to be underreported (though the reported number of objects and bytes were
    correct).
  - Fix a realloc() bug for large in-place growing reallocation. This bug could
    cause memory corruption, but it was hard to trigger.
  - Fix an allocation bug for small allocations that could be triggered if
    multiple threads raced to create a new run of backing pages.
  - Enhance the heap profiler to trigger samples based on usable size, rather
    than request size.
  - Fix a heap profiling bug due to sometimes losing track of requested object
    size for sampled objects.

* 1.0.3 (August 12, 2010)

  Bug fixes:
  - Fix the libunwind-based implementation of stack backtracing (used for heap
    profiling). This bug could cause zero-length backtraces to be reported.
  - Add a missing mutex unlock in library initialization code. If multiple
    threads raced to initialize malloc, some of them could end up permanently
    blocked.

* 1.0.2 (May 11, 2010)

  Bug fixes:
  - Fix junk filling of large objects, which could cause memory corruption.
  - Add MAP_NORESERVE support for chunk mapping, because otherwise virtual
    memory limits could cause swap file configuration to fail. Contributed by
    Jordan DeLong.

* 1.0.1 (April 14, 2010)

  Bug fixes:
  - Fix compilation when --enable-fill is specified.
  - Fix threads-related profiling bugs that affected accuracy and caused memory
    to be leaked during thread exit.
  - Fix dirty page purging race conditions that could cause crashes.
  - Fix crash in tcache flushing code during thread destruction.

* 1.0.0 (April 11, 2010)

  This release focuses on speed and run-time introspection. Numerous
  algorithmic improvements make this release substantially faster than its
  predecessors.

  New features:
  - Implement autoconf-based configuration system.
  - Add mallctl*(), for the purposes of introspection and run-time
    configuration.
  - Make it possible for the application to manually flush a thread's cache,
    via the "tcache.flush" mallctl.
  - Base maximum dirty page count on proportion of active memory.
  - Compute various additional run-time statistics, including per size class
    statistics for large objects.
  - Expose malloc_stats_print(), which can be called repeatedly by the
    application (see the sketch after this list).
  - Simplify the malloc_message() signature to only take one string argument,
    and incorporate an opaque data pointer argument for use by the application
    in combination with malloc_stats_print().
  - Add support for allocation backed by one or more swap files, and allow the
    application to disable over-commit if swap files are in use.
  - Implement allocation profiling and leak checking.
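
  As a sketch of these introspection interfaces (the callback signature and
  header path shown here are assumptions based on the descriptions above and
  on later releases; an unprefixed build is also assumed):

    #include <string.h>
    #include <jemalloc/jemalloc.h>

    /* Accumulate statistics output into a caller-supplied buffer rather than
     * printing it directly. */
    static void buf_write(void *opaque, const char *s) {
        char *buf = (char *)opaque;
        size_t used = strlen(buf);
        strncat(buf, s, 4095 - used);
    }

    int main(void) {
        char buf[4096] = "";

        /* Flush this thread's cache via the introspection interface. */
        mallctl("tcache.flush", NULL, NULL, NULL, 0);

        /* Collect statistics; 'b' and 'l' in opts omit per-bin and
         * per-large-size detail. */
        malloc_stats_print(buf_write, buf, "bl");
        return 0;
    }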

  Removed features:
  - Remove the dynamic arena rebalancing code, since thread-specific caching
    reduces its utility.

  Bug fixes:
  - Modify chunk allocation to work when address space layout randomization
    (ASLR) is in use.
  - Fix thread cleanup bugs related to TLS destruction.
  - Handle 0-size allocation requests in posix_memalign().
  - Fix a chunk leak. The leaked chunks were never touched, so this impacted
    virtual memory usage, but not physical memory usage.

* linux_2008082[78]a (August 27/28, 2008)

  These snapshot releases are the simple result of incorporating Linux-specific
  support into the FreeBSD malloc sources.

--------------------------------------------------------------------------------
vim:filetype=text:textwidth=80