#
299223 |
|
07-May-2016 |
rmacklem |
MFC: r298523 Allow the NFSv4 server to reply NFSERR_WRONGSEC for the SetClientID operation.
It was reported via email that a Linux client couldn't do a Kerberized NFS mount when only "sec=krb5" was specified for the exports. The Linux client attempted a mount via krb5i and the server replied NFSERR_SERVERFAULT. Although NFSERR_WRONGSEC isn't listed as an error for SetClientID, I think it is the correct reply, so this patch enables that. I do not know if this fixes the mount attempt, but adding "krb5i" to the list of allowed security flavours does allow the mount to work.
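A minimal C sketch of the behaviour change, using illustrative names only (the structure, fields, and helper below are assumptions, not the committed code): the operation compares the request's security flavour against the export's allowed list and, on a mismatch, now replies NFSERR_WRONGSEC instead of NFSERR_SERVERFAULT.

    #define NFSERR_WRONGSEC    10016    /* NFS4ERR_WRONGSEC per RFC 7530 */

    struct export_sec {
        int numsecflavor;       /* how many flavours "sec=" allows */
        int secflavors[4];      /* e.g. AUTH_SYS, krb5, krb5i, krb5p */
    };

    /* Hypothetical helper: returns 0 if the flavour is permitted for this export. */
    static int
    check_secflavor(int req_flavor, const struct export_sec *exp)
    {
        int i;

        for (i = 0; i < exp->numsecflavor; i++)
            if (req_flavor == exp->secflavors[i])
                return (0);
        return (NFSERR_WRONGSEC);   /* previously NFSERR_SERVERFAULT */
    }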
|
#
284216 |
|
10-Jun-2015 |
rmacklem |
MFC: r283635 Make the sizes of the hash tables used by the NFSv4 server tunable. No appreciable change in performance was observed after increasing the sizes of these tables and then testing with a single client. However, there was an email report of high CPU overhead on a heavily loaded NFSv4 server, and it is hoped that increasing the sizes of the hash tables via these tunables might help. The tables remain the same size by default.
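As a hedged illustration of how such a tunable is typically declared in the FreeBSD kernel (the oid name, variable, and default below are assumptions, not necessarily what r283635 added), a read-only tunable uses CTLFLAG_RDTUN so it can be set from loader.conf but not changed at runtime:

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/sysctl.h>

    SYSCTL_DECL(_vfs_nfsd);

    /* Default size stays the same unless the tunable overrides it at boot. */
    static u_int nfsrv_clienthashsize = 100;
    SYSCTL_UINT(_vfs_nfsd, OID_AUTO, clienthashsize, CTLFLAG_RDTUN,
        &nfsrv_clienthashsize, 0, "Size of the NFSv4 server client hash table");

With a declaration like this, an administrator would set something like vfs.nfsd.clienthashsize="500" in /boot/loader.conf; again, the exact oid names used by the commit are an assumption here.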
|
#
273877 |
|
31-Oct-2014 |
araujo |
MFC r273159: Add two sysctl(8) variables to enable/disable the NFSv4 server checks for setting the user nobody and/or the group nogroup as the owner of a file or directory. Usually, on the client side, if a username is not in the client's passwd database, some clients will send 'nobody@<your.dns.domain>' on the wire and the NFSv4 server will treat it as an error. However, if you have a valid user nobody in your passwd database, the NFSv4 server will still return NFSERR_BADOWNER, as it believes the client does not have the username mapped.
Submitted by: Loic Blot <loic.blot@unix-experience.fr> Reviewed by: rmacklem Approved by: rmacklem Sponsored by: QNAP Systems Inc.
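A hedged sketch of the mechanism, with hypothetical sysctl and helper names (the committed code may differ): a writable sysctl gates whether an owner string that maps to the nobody uid is rejected with NFSERR_BADOWNER or accepted.

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/sysctl.h>

    SYSCTL_DECL(_vfs_nfsd);

    #define NFSERR_BADOWNER    10039    /* NFS4ERR_BADOWNER per RFC 7530 */

    static int nfsd_enable_nobodycheck = 1;    /* hypothetical name */
    SYSCTL_INT(_vfs_nfsd, OID_AUTO, enable_nobodycheck, CTLFLAG_RW,
        &nfsd_enable_nobodycheck, 0,
        "Reject owner strings that map to the nobody uid with NFSERR_BADOWNER");

    /* Hypothetical helper called after the owner string has been mapped. */
    static int
    nfsd_checkowner(uid_t uid, uid_t nobody_uid)
    {
        if (uid == nobody_uid && nfsd_enable_nobodycheck)
            return (NFSERR_BADOWNER);
        return (0);
    }

A second sysctl of the same shape would cover the nogroup gid case.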
|
#
269398 |
|
01-Aug-2014 |
rmacklem |
MFC: r268115 Merge the NFSv4.1 server code in projects/nfsv4.1-server over into head. The code is not believed to have any effect on the semantics of non-NFSv4.1 server behaviour. It is a rather large merge, but I am hoping that there will not be any regressions for the NFS server.
|
#
261055 |
|
22-Jan-2014 |
mav |
MFC r260229, r260258, r260367, r260390, r260459, r260648: Rework NFS Duplicate Request Cache cleanup logic.
- Introduce an additional hash that groups requests by a hash of the sockref. This allows TCP acknowledgements to be processed without looping through the whole cache, and as a result allows it to be done every time (a hedged sketch of this grouping follows this entry).
- Introduce additional callbacks to notify the application layer about socket disconnection. Without this, the last few requests processed just before a socket disconnected never had their ACKs processed and stayed stuck in the cache for many hours.
- Implement a transport-specific method for tracking reply acknowledgements. The new implementation does not cross multiple stack layers to get the data and does not have the race conditions that previously left some requests stuck in the cache. This could be done more efficiently at the sockbuf layer, but that would break some KBIs, and I don't know of any consumers for it besides NFS.
- Instead of traversing the whole DRC twice per request, run cleaning only once per request and, except under some conditions, traverse only a single hash slot at a time.
Together, these changes limit NFS DRC growth to situations with real connectivity problems. If the network is working well, so that all replies are acknowledged, the cache remains almost empty even after hours of heavy load. Without this change, in the same test the cache grew to many thousands of requests even on a perfectly working local network.
As another result, this reduces the CPU time spent on DRC handling during the SPEC NFS benchmark from about 10% to 0.5%.
Sponsored by: iXsystems, Inc.
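A hedged C sketch of the sockref-grouping idea described above; all structure, field, and function names are illustrative assumptions, not the merged code. Each cached reply sits on the normal request hash and on a second hash keyed by the socket reference, so a TCP acknowledgement only has to walk the entries belonging to that connection:

    #include <sys/param.h>
    #include <sys/malloc.h>
    #include <sys/queue.h>

    #define DRC_SOCKHASH_SIZE    256

    struct drc_entry {
        LIST_ENTRY(drc_entry) de_reqhash;    /* xid-based request hash */
        LIST_ENTRY(drc_entry) de_sockhash;   /* sockref-based hash */
        void        *de_sockref;             /* owning TCP connection */
        uint32_t    de_ackseq;               /* stream offset that must be ACKed */
    };

    LIST_HEAD(drc_sockhead, drc_entry);
    static struct drc_sockhead drc_socktbl[DRC_SOCKHASH_SIZE];

    /* Drop every cached reply on this socket whose bytes the peer has ACKed. */
    static void
    drc_ack(void *sockref, uint32_t acked)
    {
        struct drc_sockhead *hd;
        struct drc_entry *de, *tde;

        hd = &drc_socktbl[((uintptr_t)sockref >> 4) % DRC_SOCKHASH_SIZE];
        LIST_FOREACH_SAFE(de, hd, de_sockhash, tde) {
            if (de->de_sockref == sockref && de->de_ackseq <= acked) {
                LIST_REMOVE(de, de_sockhash);
                LIST_REMOVE(de, de_reqhash);
                free(de, M_TEMP);    /* placeholder malloc type */
            }
        }
    }

The same per-socket list is what would let a disconnect callback discard that connection's leftover entries immediately instead of letting them sit in the cache.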
|