--- 1.t	(revision 21673)
+++ 1.t	(revision 22818)
1.\" Copyright (c) 1986 The Regents of the University of California.
2.\" All rights reserved.
3.\"
4.\" Redistribution and use in source and binary forms, with or without
5.\" modification, are permitted provided that the following conditions
6.\" are met:
7.\" 1. Redistributions of source code must retain the above copyright
8.\" notice, this list of conditions and the following disclaimer.

--- 16 unchanged lines hidden ---

25.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
26.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
27.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
28.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
29.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
30.\" SUCH DAMAGE.
31.\"
32.\" @(#)1.t 5.1 (Berkeley) 4/16/91
1.\" Copyright (c) 1986 The Regents of the University of California.
2.\" All rights reserved.
3.\"
4.\" Redistribution and use in source and binary forms, with or without
5.\" modification, are permitted provided that the following conditions
6.\" are met:
7.\" 1. Redistributions of source code must retain the above copyright
8.\" notice, this list of conditions and the following disclaimer.

--- 16 unchanged lines hidden (view full) ---

25.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
26.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
27.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
28.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
29.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
30.\" SUCH DAMAGE.
31.\"
32.\" @(#)1.t 5.1 (Berkeley) 4/16/91
33.\" $FreeBSD: head/share/doc/papers/newvm/1.t 21673 1997-01-14 07:20:47Z jkh $
33.\" $FreeBSD: head/share/doc/papers/newvm/1.t 22818 1997-02-17 00:07:54Z wosch $
34.\"
35.NH
36Motivations for a New Virtual Memory System
37.PP
38The virtual memory system distributed with Berkeley UNIX has served
39its design goals admirably well over the ten years of its existence.
40However the relentless advance of technology has begun to render it
41obsolete.
42This section of the paper describes the current design,
43points out the current technological trends,
44and attempts to define the new design considerations that should
45be taken into account in a new virtual memory design.
34.\"
35.NH
36Motivations for a New Virtual Memory System
37.PP
38The virtual memory system distributed with Berkeley UNIX has served
39its design goals admirably well over the ten years of its existence.
40However the relentless advance of technology has begun to render it
41obsolete.
42This section of the paper describes the current design,
43points out the current technological trends,
44and attempts to define the new design considerations that should
45be taken into account in a new virtual memory design.
-.SH
+.NH 2
Implementation of 4.3BSD virtual memory
.PP
All Berkeley Software Distributions through 4.3BSD
have used the same virtual memory design.
All processes, whether active or sleeping, have some amount of
virtual address space associated with them.
This virtual address space
is the combination of the amount of address space with which they initially

--- 11 unchanged lines hidden ---

a new page is allocated and filled either with initialized data or
zeros (for new stack and break pages).
As the supply of free pages begins to run out, dirty pages are
pushed to the previously allocated swap space so that they can be reused
to contain newly faulted pages.
If a previously accessed page that has been pushed to swap is once
again used, a free page is reallocated and filled from the swap area
[Babaoglu79], [Someren84].
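
The zero-fill-on-demand path described above is still easy to observe
from user space in this design's descendants. A minimal sketch, assuming
a modern mmap(2) with the MAP_ANONYMOUS flag (which postdates this
paper): the first touch of a page faults in a zero-filled frame, and
dirtied pages become pageout candidates when free memory runs low.

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        size_t len = 1UL << 20;        /* 1 MB of address space; no frames yet */
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }
        /* First touch faults in a zero-filled page: the "new stack
         * and break pages" case described above. */
        printf("first byte before any write: %d\n", p[0]);
        /* Writing dirties the pages; under memory pressure the pager
         * pushes them to swap and refills them on the next fault. */
        memset(p, 0xff, len);
        munmap(p, len);
        return 0;
    }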
-.SH
+.NH 2
Design assumptions for 4.3BSD virtual memory
.PP
The design criteria for the current virtual memory implementation
were set in 1979.
At that time the cost of memory was about a thousand times greater per
byte than that of magnetic disk.
Most machines were used as centralized time-sharing machines.
These machines had far more disk storage than they had memory

--- 22 unchanged lines hidden ---

directly connected.
Thus the speed and latency with which file systems could be accessed
were roughly equivalent to the speed and latency with which swap
space could be accessed.
Given the high cost of memory, there was little incentive to have
the kernel keep track of the contents of the swap area once a process
exited, since it could almost as easily and quickly be reread from the
file system.
-.SH
+.NH 2
New influences
.PP
In the ten years since the current virtual memory system was designed,
many technological advances have occurred.
One effect of the technological revolution is that the
microprocessor has become powerful enough to allow users to have their
own personal workstations.
Thus the computing environment is moving away from a purely centralized
--- 63 unchanged lines hidden ---

file server in a timely fashion, thus eliminating the need to dump
the local disk or push the files manually.
.NH
User Interface
.PP
This section outlines our new virtual memory interface as it is
currently envisioned.
The details of the system call interface are contained in Appendix A.
-.SH
+.NH 2
Regions
.PP
The virtual memory interface is designed to support both large,
sparse address spaces and small, densely used address spaces.
In this context, a ``small'' address space is roughly the
size of the physical memory on the machine,
while a ``large'' one may extend up to the maximum addressability of the machine.
A process may divide its address space into a number of regions.
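
To make ``large and sparse'' concrete, the hedged sketch below scatters
two small committed regions across a 64 GB reservation, paying nothing
for the gap between them. It uses the modern mmap/mprotect interface
rather than the region primitives of Appendix A, and assumes a 64-bit
machine whose kernel does not charge PROT_NONE reservations against
memory.

    #include <stdio.h>
    #include <sys/mman.h>

    int main(void) {
        size_t span = 64UL << 30;      /* reserve 64 GB of address space */
        char *base = mmap(NULL, span, PROT_NONE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (base == MAP_FAILED) { perror("mmap"); return 1; }
        /* Commit two widely separated 4 KB regions inside the span. */
        mprotect(base, 4096, PROT_READ | PROT_WRITE);
        mprotect(base + span - 4096, 4096, PROT_READ | PROT_WRITE);
        base[0] = 1;
        base[span - 4096] = 2;
        printf("regions at %p and %p\n",
               (void *)base, (void *)(base + span - 4096));
        munmap(base, span);
        return 0;
    }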

--- 52 unchanged lines hidden ---

nor be willing to pay the overhead associated with them.
For anonymous memory they must use some other rendezvous point.
Our current interface allows processes to associate a
descriptor with a region, which they may then pass to other
processes that wish to attach to the region.
Such a descriptor may be bound into the UNIX file system
name space so that other processes can find it just as
they would with a mapped file.
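
This descriptor rendezvous survives nearly unchanged in POSIX shared
memory. A sketch under that assumption, using shm_open/mmap rather than
the Appendix A calls; the name ``/newvm-demo'' is purely illustrative,
and a second process attaching the same name would see the store.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        /* Create an anonymous-memory region findable by name, much as
         * a mapped file is found through the file system name space. */
        int fd = shm_open("/newvm-demo", O_CREAT | O_RDWR, 0600);
        if (fd < 0) { perror("shm_open"); return 1; }
        if (ftruncate(fd, 4096) < 0) { perror("ftruncate"); return 1; }
        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }
        p[0] = 'x';                    /* visible to every attached process */
        munmap(p, 4096);
        close(fd);
        shm_unlink("/newvm-demo");     /* unbind the name when done */
        return 0;
    }

(On older systems this needs -lrt at link time.)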
-.SH
+.NH 2
Shared memory as high-speed interprocess communication
.PP
The primary use envisioned for shared memory is to
provide a high-speed interprocess communication (IPC) mechanism
between cooperating processes.
Existing IPC mechanisms (\fIe.g.,\fP pipes, sockets, or streams)
require a system call to hand off a set
of data destined for another process, and another system call
--- 108 unchanged lines hidden ---
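
The per-transfer cost being criticized here can be made concrete. An
illustrative sketch counting the system calls in a single pipe
transfer; the shared-region alternative mentioned in the comment is
hypothetical shorthand, not a call from this paper's interface.

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        int fds[2];
        char buf[8];
        if (pipe(fds) < 0) { perror("pipe"); return 1; }
        /* Every transfer through a pipe costs two trips into the kernel: */
        write(fds[1], "hello", 6);     /* one system call to hand off...  */
        read(fds[0], buf, sizeof buf); /* ...and another to receive       */
        printf("received: %s\n", buf);
        /* A store into an already-mapped shared region, by contrast,
         * e.g. strcpy(shared, "hello"), crosses into the kernel not
         * at all once the mapping is established. */
        close(fds[0]);
        close(fds[1]);
        return 0;
    }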