Searched hist:430 (Results 1 - 15 of 15) sorted by relevance

/freebsd-10.3-release/lib/libarchive/
config_freebsd.h    diff 189429 Fri Mar 06 04:39:48 MST 2009 kientzle Merge r399,401,402,405,415,430,440,452,453,458,506,533,536,538,544,590
from libarchive.googlecode.com: Add a new "archive_read_disk" API
that provides the important service of reading metadata from the
disk. In particular, this will make it possible to remove all
knowledge of extended attributes, ACLs, etc., from clients such
as bsdtar and bsdcpio.

Closely related, this API also provides pluggable uid->uname
and gid->gname lookup and caching services similar to
the uname->uid and gname->gid services provided by archive_write_disk.
Remember that this is also required for correct ACL management.

Documentation is still pending...
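
As a rough illustration of the lookup and caching service this commit describes, here is a minimal userland sketch using the archive_read_disk API as it appears in modern libarchive (function names follow current libarchive documentation; the 2009 revision being merged may differ in detail):

    #include <archive.h>
    #include <archive_entry.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        struct archive *a = archive_read_disk_new();
        struct archive_entry *entry = archive_entry_new();
        int fd = open("example.txt", O_RDONLY);

        /* Enable the built-in uid->uname / gid->gname lookup with
         * caching, the pluggable service the commit describes. */
        archive_read_disk_set_standard_lookup(a);

        archive_entry_set_pathname(entry, "example.txt");
        if (fd >= 0 &&
            archive_read_disk_entry_from_file(a, entry, fd, NULL) == ARCHIVE_OK)
            printf("owner: %s, group: %s\n",
                archive_entry_uname(entry), archive_entry_gname(entry));

        if (fd >= 0)
            close(fd);
        archive_entry_free(entry);
        archive_read_free(a);
        return 0;
    }

The separation the commit aims for is visible here: the client never sees how names are resolved or cached, which is exactly what lets bsdtar and bsdcpio shed that knowledge.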
Makefile    diff 189429 Fri Mar 06 04:39:48 MST 2009 kientzle (same commit message as config_freebsd.h above)
/freebsd-10.3-release/sys/ddb/
db_output.c    diff 430 Thu Sep 09 23:03:24 MDT 1993 rgrimes Moved db_end_line after db_printf to eliminate forward reference and
shut up the compiler about prototype mismatch.
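
A minimal standalone illustration of the fix described above (hypothetical names, not the actual ddb code): defining the callee before the caller means the compiler already knows the callee's prototype at the call site, so no forward declaration is needed and no implicit declaration can clash with the real definition.

    #include <stdio.h>

    /* Callee defined first: its prototype is known below. */
    static void db_printf_demo(const char *s)
    {
        fputs(s, stdout);
    }

    /* Caller moved after the callee, as in the commit: no forward
     * reference, so nothing for the compiler to guess wrong. */
    static void db_end_line_demo(void)
    {
        db_printf_demo("\n");
    }

    int main(void)
    {
        db_end_line_demo();
        return 0;
    }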
/freebsd-10.3-release/lib/libarchive/tests/
Makefile    diff 189429 Fri Mar 06 04:39:48 MST 2009 kientzle (same commit message as config_freebsd.h above)
/freebsd-10.3-release/sys/sys/
eventhandler.h    diff 243631 Tue Nov 27 21:36:13 MST 2012 andre Base the mbuf related limits on the available physical memory or
kernel memory, whichever is lower. The overall mbuf related memory
limit must be set so that mbufs (and clusters of various sizes)
can't exhaust physical RAM or KVM.

The limit is set to half of the physical RAM or KVM (whichever is
lower) as the baseline. In any normal scenario we want to leave
at least half of the physmem/kvm for other kernel functions and
userspace to prevent it from swapping too easily. Via a tunable
kern.maxmbufmem the limit can be upped to at most 3/4 of physmem/kvm.

At the same time divorce maxfiles from maxusers and set maxfiles to
physpages / 8 with a floor based on maxusers. This way busy servers
can make use of the significantly increased mbuf limits with a much
larger number of open sockets.

Tidy up ordering in init_param2() and check up on some users of
those values calculated here.

Out of the overall mbuf memory limit, 2K clusters and 4K (page size)
clusters get 1/4 each because these are the most heavily used mbuf
sizes. 2K clusters are used for MTU 1500 ethernet inbound packets.
4K clusters are used whenever possible for sends on sockets and thus
outbound packets. The larger cluster sizes of 9K and 16K are limited
to 1/6 of the overall mbuf memory limit. When jumbo MTUs are used
these large clusters will end up only on the inbound path. They are
not used on outbound, there it's still 4K. Yes, that will stay that
way because otherwise we run into lots of complications in the
stack. And it really isn't a problem, so don't make a scene.

Normal mbufs (256B) weren't limited at all previously. This was
problematic as there are certain places in the kernel that on
allocation failure of clusters try to piece together their packet
from smaller mbufs.

The mbuf limit is the number of all other mbuf sizes together plus
some more to allow for standalone mbufs (ACK for example) and to
send off a copy of a cluster. Unfortunately there isn't a way to
set an overall limit for all mbuf memory together as UMA doesn't
support such a limit.

NB: Every cluster also has an mbuf associated with it.

Two examples of the revised mbuf sizing limits:

1GB KVM:
512MB limit for mbufs
419,430 mbufs
65,536 2K mbuf clusters
32,768 4K mbuf clusters
9,709 9K mbuf clusters
5,461 16K mbuf clusters

16GB RAM:
8GB limit for mbufs
33,554,432 mbufs
1,048,576 2K mbuf clusters
524,288 4K mbuf clusters
155,344 9K mbuf clusters
87,381 16K mbuf clusters

These defaults should be sufficient for even the most demanding
network loads.

MFC after: 1 month
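
The 1GB table above can be reproduced with a little arithmetic. This is a userland sketch, not the actual init_param2()/kern_mbuf.c code; the 1/4 and 1/6 carve-outs come from the message itself, while the divide-by-5 for plain mbufs is inferred from the 419,430 figure (512 MB / 256 B / 5), so treat it as an assumption:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t kvm = 1ULL << 30;        /* 1 GB of KVM */
        uint64_t maxmbufmem = kvm / 2;    /* baseline: half of physmem/KVM */

        printf("limit for mbufs: %ju MB\n", (uintmax_t)(maxmbufmem >> 20));
        printf("mbufs:           %ju\n", (uintmax_t)(maxmbufmem / 256 / 5));
        printf("2K clusters:     %ju\n", (uintmax_t)(maxmbufmem / 4 / 2048));
        printf("4K clusters:     %ju\n", (uintmax_t)(maxmbufmem / 4 / 4096));
        printf("9K clusters:     %ju\n", (uintmax_t)(maxmbufmem / 6 / 9216));
        printf("16K clusters:    %ju\n", (uintmax_t)(maxmbufmem / 6 / 16384));
        return 0;
    }

Run against 1 GB this prints 512, 419430, 65536, 32768, 9709 and 5461, matching the first table above.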
mbuf.h    diff 243631 Tue Nov 27 21:36:13 MST 2012 andre (same commit message as eventhandler.h above)
/freebsd-10.3-release/share/misc/
bsd-family-tree    diff 59156 Tue Apr 11 21:02:24 MDT 2000 wosch The latest patchlevel of 2.11BSD is 430

Submitted by: Victor.Langeveld@mbfys.kun.nl
/freebsd-10.3-release/sys/kern/
kern_mbuf.c    diff 243631 Tue Nov 27 21:36:13 MST 2012 andre (same commit message as eventhandler.h above)
subr_param.c    diff 243631 Tue Nov 27 21:36:13 MST 2012 andre (same commit message as eventhandler.h above)
uipc_socket.c    diff 243631 Tue Nov 27 21:36:13 MST 2012 andre (same commit message as eventhandler.h above)
/freebsd-10.3-release/sys/dev/sound/pci/hda/
hdac.c    diff 170944 Mon Jun 18 22:39:27 MDT 2007 ariff Fix headphone jack sensing support for Olivetti Olibook 610-430 XPSE.

Tested by: Gonzalo Lionel Rodriguez
/freebsd-10.3-release/sys/dev/bktr/
bktr_core.c    diff 47491 Tue May 25 12:43:40 MDT 1999 roger Add support for the Bt878/Bt879's Intel 430 FX and
SiS/VIA/OPTi chipset PCI bus workarounds.

These make the Bt878/879 chips more stable on certain
older and non-Intel motherboards.

Use options BKTR_430_FX_MODE
or options BKTR_SIS_VIA_MODE
to enable these modes.

Also rename 849 to 849A
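
For illustration, kernel options like these typically select a workaround at compile time via the preprocessor. The following is a hypothetical sketch (the macro BKTR_PCI_WORKAROUND is invented here), not the actual bktr_core.c logic:

    /* Hypothetical sketch of compile-time gating by kernel options. */
    #if defined(BKTR_430_FX_MODE)
    #define BKTR_PCI_WORKAROUND 1   /* Intel 430 FX bus quirks */
    #elif defined(BKTR_SIS_VIA_MODE)
    #define BKTR_PCI_WORKAROUND 2   /* SiS/VIA/OPTi bus quirks */
    #else
    #define BKTR_PCI_WORKAROUND 0   /* no workaround needed */
    #endif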
/freebsd-10.3-release/sys/conf/
files.i386    diff 27756 Tue Jul 29 12:57:25 MDT 1997 sos Add support for busmaster DMA on some PCI IDE chipsets.

I changed a few bits here and there, mainly renaming wd82371.c
to ide_pci.c now that it's supposed to handle different chipsets.

It runs on my P6 natoma board with two Maxtor drives, and also
on a Fujitsu machine I have at work with an Opti chipset and
a Quantum drive.

Submitted by: cgull@smoke.marlboro.vt.us <John Hood>

Original readme:

*** WARNING ***

This code has so far been tested on exactly one motherboard with two
identical drives known for their good DMA support.

This code, in the right circumstances, could corrupt data subtly,
silently, and invisibly, in much the same way that older PCI IDE
controllers do. It's ALPHA-quality code; there's one or two major
gaps in my understanding of PCI IDE still. Don't use this code on any
system with data that you care about; it's only good for hack boxes.
Expect that any data may be silently and randomly corrupted at any
moment. It's a disk driver. It has bugs. Disk drivers with bugs
munch data. It's a fact of life.

I also *STRONGLY* recommend getting a copy of your chipset's manual
and the ATA-2 or ATA-3 spec and making sure that timing modes on your
disk drives and IDE controller are being set up correctly by the BIOS--
because the driver makes only the lamest of attempts to do this just
now.

*** END WARNING ***

that said, i happen to think the code is working pretty well...

WHAT IT DOES:

this code adds support to the wd driver for bus mastering PCI IDE
controllers that follow the SFF-8038 standard. (all the bus mastering
PCI IDE controllers i've seen so far do follow this standard.) it
should provide busmastering on nearly any current P5 or P6 chipset,
specifically including any Intel chipset using one of the PIIX south
bridges-- this includes the '430FX, '430VX, '430HX, '430TX, '440LX,
and (i think) the Orion '450GX chipsets. specific support is also
included for the VIA Apollo VP-1 chipset, as it appears in the
relabeled "HXPro" incarnation seen on cheap US$70 taiwanese
motherboards (that's what's in my development machine). it works out
of the box on controllers that do DMA mode2; if my understanding is
correct, it'll probably work on Ultra-DMA33 controllers as well.
it'll probably work on busmastering IDE controllers in PCI slots, too,
but this is an area i am less sure about.
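
for concreteness, a sketch of the SFF-8038 scatter/gather format these controllers consume: a table of physical region descriptors (PRDs), each giving a physical buffer address and byte count, with bit 15 of the final word marking the end of the table (field names invented for illustration, not the driver's):

    #include <stdint.h>

    /* one PRD entry; a DMA transfer is described by a physically
     * contiguous array of these, handed to the controller. */
    struct prd_entry {
        uint32_t prd_base;   /* physical buffer address, word aligned */
        uint16_t prd_count;  /* byte count; 0 encodes 64KB */
        uint16_t prd_flags;  /* bit 15 set on the last entry (EOT) */
    };
    #define PRD_EOT 0x8000u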

it cuts CPU usage considerably and improves drive performance
slightly. usable numbers are difficult to come by with existing
benchmark tools, but experimentation on my K5-P90 system, with VIA
VP-1 chipset and Quantum Fireball 1080 drives, shows that disk i/o on
raw partitions imposes perhaps 5% cpu load. cpu load during
filesystem i/o drops a lot, from near 100% to anywhere between 30% and
70%. (the improvement may not be as large on an Intel chipset; from
what i can tell, the VIA VP-1 may not be very efficient with PCI I/O.)
disk performance improves by 5% or 10% with these drives.

real, visible, end-user performance improvement on a single user
machine is about nil. :) a kernel compile was sped up by a whole three
seconds. it *does* feel a bit better-behaved when the system is
swapping heavily, but a better disk driver is not the fix for *that*
problem.

THE CODE:

this code is a patch to wd.c and wd82371.c, and associated header
files. it should be considered alpha code; more work needs to be
done.

wd.c has fairly clean patches to add calls to busmaster code, as
implemented in wd82371.c and potentially elsewhere (one could imagine,
say, a Mac having a different DMA controller).

wd82371.c has been considerably reworked: the wddma interface that it
presents has been changed (expect more changes), many bugs have been
fixed, a new internal interface has been added for supporting
different chipsets, and the PCI probe has been considerably extended.

the interface between wd82371.c and wd.c is still fairly clean, but
i'm not sure it's in the right place. there's a mess of issues around
ATA/ATAPI that need to be sorted out, including ATAPI support, CD-ROM
support, tape support, LS-120/Zip support, SFF-8038i DMA, UltraDMA,
PCI IDE controllers, bus probes, buggy controllers, controller timing
setup, drive timing setup, world peace and kitchen sinks. whatever
happens with all this and however it gets partitioned, it is fairly
clear that wd.c needs some significant rework-- probably a complete
rewrite.

timing setup on disk controllers is something i've entirely punted on.
on my development machine, it appears that the BIOS does at least some
of the necessary timing setup. i chose to restrict operation to
drives that are already configured for Mode4 PIO and Mode2 multiword
DMA, since the timing is essentially the same and many if not most
chipsets use the same control registers for DMA and PIO timing.

does anybody *know* whether BIOSes are required to do timing setup for
DMA modes on drives under their care?

error recovery is probably weak. early on in development, i was
getting drive errors induced by bugs in the driver; i used these to
flush out the worst of the bugs in the driver's error handling, but
problems may remain. i haven't got a drive with bad sectors i can
watch the driver flail on.

complaints about how wd82371.c has been reindented will be ignored
until the FreeBSD project has a real style policy, there is a
mechanism for individual authors to match it (indent flags or an emacs
c-mode or whatever), and it is enforced. if i'm going to use a source
style i don't like, it would help if i could figure out what it *is*
(style(9) is about half of a policy), and a way to reasonably
duplicate it. i ended up wasting a while trying to figure out what
the right thing to do was before deciding reformatting the whole thing
was the worst possible thing to do, except for all the other
possibilities.

i have maintained wd.c's indentation; that was not too hard,
fortunately.

TO INSTALL:

my dev box is freebsd 2.2.2 release. fortunately, wd.c is a living
fossil, and has diverged very little recently. included in this
tarball is a patch file, 'otherdiffs', for all files except wd82371.c,
my edited wd82371.c, a patch file, 'wd82371.c-diff-exact', against the
2.2.2 dist of 82371.c, and another patch file,
'wd82371.c-diff-whitespace', generated with diff -b (ignore
whitespace). most of you not using 2.2.2 will probably have to use
this last patchfile with 'patch --ignore-whitespace'. apply from the
kernel source tree root. as far as i can tell, this should apply
cleanly on anything from -current back to 2.2.2 and probably back to
2.2.0. you, the kernel hacker, can figure out what to do from here.
if you need more specific directions, you probably should not be
experimenting with this code yet.

to enable DMA support, set flag 0x2000 for that drive in your config
file or in userconfig, as you would the 32-bit-PIO flag. the driver
will then turn on DMA support if your drive and controller pass its
tests. it's a bit picky, probably. on discovering DMA mode failures
or disk errors or transfers that the DMA controller can't deal with,
the driver will fall back to PIO, so it is wise to set up the flags as
if PIO were still important.

'controller wdc0 at isa? port "IO_WD1" bio irq 14 flags 0xa0ffa0ff
vector wdintr' should work with nearly any PCI IDE controller.
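
purely as illustration of the flag mechanism just described (hypothetical names, not the actual wd.c code): the per-drive bit only expresses the user's intent, and the probe-time tests still decide whether DMA is actually used:

    #define WD_FLAG_USE_DMA 0x2000   /* hypothetical name for the flag bit */

    static int
    wd_wants_dma(int unit_flags, int drive_ok, int ctlr_ok)
    {
        /* the user's flag enables the attempt; the drive and
         * controller tests decide whether it actually happens.
         * on any later DMA trouble the driver reverts to PIO. */
        return (unit_flags & WD_FLAG_USE_DMA) && drive_ok && ctlr_ok;
    }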

i would *strongly* suggest booting single-user at first, and thrashing
the drive a bit while it's still mounted read-only. this should be
fairly safe, even if the driver goes completely out to lunch. it
might save you a reinstall.

one way to tell whether the driver is really using DMA is to check the
interrupt count during disk i/o with vmstat; DMA mode will add an
extremely low number of interrupts, as compared to even multi-sector
PIO.

boot -v will give you a copious register dump of timing-related info
on Intel and VIAtech chipsets, as well as PIO/DMA mode information on
all hard drives. refer to your ATA and chipset documentation to
interpret these.

WHAT I'D LIKE FROM YOU and THINGS TO TEST:

reports. success reports, failure reports, any kind of reports. :)
send them to cgull+ide@smoke.marlboro.vt.us.

i'd also like to see the kernel messages from various BIOSes (boot -v;
dmesg), along with info on the motherboard and BIOS on that machine.

i'm especially interested in reports on how this code works on the
various Intel chipsets, and whether the register dump works
correctly. i'm also interested in hearing about other chipsets.

i'm especially interested in hearing success/failure reports for PCI
IDE controllers on cards, such as CMD's or Promise's new busmastering
IDE controllers.

UltraDMA-33 reports.

interoperation with ATAPI peripherals-- FreeBSD doesn't work with my
old Hitachi IDE CDROM, so i can't tell if I've broken anything. :)

i'd especially like to hear how the drive copes in DMA operation on
drives with bad sectors. i haven't been able to find any such yet.

success/failure reports on older IDE drives with early support for DMA
modes-- those introduced between 1.5 and 3 years ago, typically
ranging from perhaps 400MB to 1.6GB.

failure reports on operation with more than one drive would be
appreciated. the driver was developed with two drives on one
controller, the worst-case situation, and has been tested with one
drive on each controller, but you never know...

any reports of messages from the driver during normal operation,
especially "reverting to PIO mode", or "dmaverify odd vaddr or length"
(the DMA controller is strongly halfword oriented, and i'm curious to
know if any FreeBSD usage actually needs misaligned transfers).

performance reports. beware that bonnie's CPU usage reporting is
useless for IDE drives; the best test i've found has been to run a
program that runs a spin loop at an idle priority and reports how many
iterations it manages, and even that sometimes produces numbers i
don't believe. performance reports of multi-drive operation are
especially interesting; my system cannot sustain full throughput on
two drives on separate controllers, but that may just be a lame
motherboard.
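
a sketch of the kind of idle-priority iteration counter described above, assuming FreeBSD's rtprio(2) and an arbitrary ten-second alarm window (a reconstruction for illustration, not the original test program); the fewer iterations it reports during a disk test, the more CPU the driver is burning:

    #include <sys/types.h>
    #include <sys/rtprio.h>
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static volatile sig_atomic_t done = 0;
    static void alarm_handler(int sig) { (void)sig; done = 1; }

    int main(void)
    {
        struct rtprio rtp = { RTP_PRIO_IDLE, RTP_PRIO_MAX };
        unsigned long iters = 0;

        if (rtprio(RTP_SET, 0, &rtp) == -1) {   /* drop to idle priority */
            perror("rtprio");
            return 1;
        }
        signal(SIGALRM, alarm_handler);
        alarm(10);                              /* sample for ten seconds */
        while (!done)
            iters++;                            /* spins only on idle CPU */
        printf("%lu iterations in 10s\n", iters);
        return 0;
    }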

THINGS I'M STILL MISSING CLUE ON:

* who's responsible for configuring DMA timing modes on IDE drives?
the BIOS or the driver?

* is there a spec for dealing with Ultra-DMA extensions?

* are there any chipsets with bugs relating to DMA transfer that
should be blacklisted?

* are there any ATA interfaces that use some other kind of DMA
controller in conjunction with standard ATA protocol?

FINAL NOTE:

after having looked at the ATA-3 spec, all i can say is, "it's ugly".
*especially* electrically. the IDE bus is best modeled as an
unterminated transmission line, these days.

for maximum reliability, keep your IDE cables as short as possible and
as few as possible. from what i can tell, most current chipsets have
both IDE ports wired into a single buss, to a greater or lesser
degree. using two cables means you double the length of this bus.

SCSI may have its warts, but at least the basic analog design of the
bus is still somewhat reasonable. IDE passed beyond the veil two
years ago.

--John Hood, cgull@smoke.marlboro.vt.us
options    diff 47522 Wed May 26 13:24:35 MDT 1999 roger Updated options for the Bt848/Bt878 driver
This includes the BKTR_430_FX_MODE and BKTR_SIS_VIA_MODE options which make
Bt878/879 cards work better on 430FX and old SiS/VIA/OPTi boards
/freebsd-10.3-release/sys/netinet6/
udp6_usrreq.c    diff 100132 Mon Jul 15 19:25:46 MDT 2002 ume - fixed a bug where we couldn't send a packet to an ipv4-mapped ipv6 address
using a udp6 socket without bind(2)ing.
- fbsd4/430, reported from the FreeBSD team.
- this fix is different from the fix reported in the above PR. i think
this one is better, but we need some testing.

Obtained from: KAME
MFC after: 3 weeks
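
The failing case this fixes can be shown with a few lines of userland C: an unbound AF_INET6 UDP socket sending to an IPv4-mapped address, which forces the kernel to pick a source address on the implicit-bind path. A sketch for illustration (note that on modern systems the IPV6_V6ONLY default must also permit mapped addresses):

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        int s = socket(AF_INET6, SOCK_DGRAM, 0);
        struct sockaddr_in6 dst;

        memset(&dst, 0, sizeof(dst));
        dst.sin6_family = AF_INET6;
        dst.sin6_port = htons(7);   /* echo port, for illustration */
        inet_pton(AF_INET6, "::ffff:192.0.2.1", &dst.sin6_addr);

        /* no bind(2): the kernel must pick the source address itself;
         * the bug was in this implicit-bind path for mapped addresses. */
        if (sendto(s, "x", 1, 0, (struct sockaddr *)&dst, sizeof(dst)) == -1)
            perror("sendto");
        return 0;
    }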
