Searched hist:35388 (Results 1 - 9 of 9) sorted by last modified time

/linux-master/drivers/net/ethernet/freescale/
fec_main.c    diff 79f33912 Wed Jun 11 18:16:23 MDT 2014 Nimrod Andy <B38611@freescale.com> net: fec: Add software TSO support

Add software TSO support for FEC.
This feature improves outbound throughput performance.

Tested on an imx6dl sabresd board; iperf TCP tests show:
- 16.2% improvement compared with the FEC SG patch
- 82% improvement compared with neither SG nor TSO

$ ethtool -K eth0 tso on
$ iperf -c 10.192.242.167 -t 3 &
[ 3] local 10.192.242.108 port 35388 connected with 10.192.242.167 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 3.0 sec 181 MBytes 506 Mbits/sec

During the test, CPU load is 30%.
Since the imx6dl FEC bandwidth is limited by the SoC system bus bandwidth,
the performance with SW TSO is a milestone.

CC: Ezequiel Garcia <ezequiel.garcia@free-electrons.com>
CC: Eric Dumazet <eric.dumazet@gmail.com>
CC: David Laight <David.Laight@ACULAB.COM>
CC: Li Frank <B20596@freescale.com>
Signed-off-by: Fugang Duan <B38611@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
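
For context on what the "ethtool -K eth0 tso on" step above does on the driver side: a driver opts into software TSO by advertising the feature bits on its net_device, and the stack then hands it large TSO skbs to segment in software. A minimal, hypothetical sketch of the feature advertisement (illustrative only; the function name is invented and this is not the actual FEC patch):

    #include <linux/netdevice.h>

    /* Illustrative sketch, not the FEC patch: advertise SG + TSO on a
     * net_device so that "ethtool -K eth0 tso on" can toggle the feature.
     * Scatter-gather is a prerequisite for TSO. */
    static void example_enable_tso(struct net_device *ndev)
    {
            ndev->hw_features |= NETIF_F_SG | NETIF_F_TSO;  /* user-toggleable */
            ndev->features    |= NETIF_F_SG | NETIF_F_TSO;  /* enabled by default */
    }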
fec.h    diff 79f33912 Wed Jun 11 18:16:23 MDT 2014 Nimrod Andy <B38611@freescale.com> net: fec: Add software TSO support

/linux-master/drivers/mtd/ubi/
ubi.h    diff 90e0be56 Mon Aug 28 00:38:43 MDT 2023 Zhihao Cheng <chengzhihao1@huawei.com> ubi: fastmap: Fix lapsed wear leveling for first 64 PEBs

The anchor PEB must be picked from the first 64 PEBs, so these PEBs can
end up with erase counters much larger than those of the other PEBs,
especially when free space is nearly exhausted.
ubi_update_fastmap() is called whenever pool/wl_pool becomes empty, and
the old anchor PEB is erased when the fastmap is updated. Given a UBI
device with N PEBs whose free PEBs are nearly exhausted, the pool is
refilled with only 1 PEB each time ubi_update_fastmap() is invoked, so
t = N/POOL_SIZE[1]/64 means that, in theory, the erase counter of the
first 64 PEBs is in the worst case t times greater than that of the
other PEBs.
After running fsstress for 24h, the erase counter statistics for two UBI
devices are as follows (CONFIG_MTD_UBI_WL_THRESHOLD=128):

Device A(1024 PEBs, pool=50, wl_pool=25):
=========================================================
   from          to     count      min      avg      max
---------------------------------------------------------
      0 ..       9:         0        0        0        0
     10 ..      99:         0        0        0        0
    100 ..     999:         0        0        0        0
   1000 ..    9999:         0        0        0        0
  10000 ..   99999:       960    29224    29282    29362
 100000 ..     inf:        64   117897   117934   117940
---------------------------------------------------------
Total             :      1024    29224    34822   117940

Device B(8192 PEBs, pool=256, wl_pool=128):
=========================================================
   from          to     count      min      avg      max
---------------------------------------------------------
      0 ..       9:         0        0        0        0
     10 ..      99:         0        0        0        0
    100 ..     999:         0        0        0        0
   1000 ..    9999:      8128     2253     2321     2387
  10000 ..   99999:        64    35387    35387    35388
 100000 ..     inf:         0        0        0        0
---------------------------------------------------------
Total             :      8192     2253     2579    35388

The key point is to reduce the fastmap update frequency by enlarging
POOL_SIZE, so let UBI reserve ubi->fm_pool.max_size PEBs during
attaching. Then POOL_SIZE becomes ubi->fm_pool.max_size/2 even when
free space is running out.
Given a UBI device with 8192 PEBs (16384/8192/4096 PEBs is common for
large-capacity flash), t = 8192/128/64 = 1. A fastmap update happens
when either wl_pool or pool is empty, so setting fm_pool_rsv_cnt to
ubi->fm_pool.max_size keeps wl_pool in a full state.

After pool reservation, running fsstress for 24h:

Device A(1024 PEBs, pool=50, wl_pool=25):
=========================================================
   from          to     count      min      avg      max
---------------------------------------------------------
      0 ..       9:         0        0        0        0
     10 ..      99:         0        0        0        0
    100 ..     999:         0        0        0        0
   1000 ..    9999:         0        0        0        0
  10000 ..   99999:      1024    33801    33997    34056
 100000 ..     inf:         0        0        0        0
---------------------------------------------------------
Total             :      1024    33801    33997    34056

Device B(8192 PEBs, pool=256, wl_pool=128):
=========================================================
   from          to     count      min      avg      max
---------------------------------------------------------
      0 ..       9:         0        0        0        0
     10 ..      99:         0        0        0        0
    100 ..     999:         0        0        0        0
   1000 ..    9999:      8192     2205     2397     2460
  10000 ..   99999:         0        0        0        0
 100000 ..     inf:         0        0        0        0
---------------------------------------------------------
Total             :      8192     2205     2397     2460

The erase counter difference between the first 64 PEBs and the others
is now under WL_FREE_MAX_DIFF (2*UBI_WL_THRESHOLD = 2*128 = 256):
Device A: 34056 - 33801 = 255
Device B: 2460 - 2205 = 255

The next patch will add a switch to control whether UBI needs to
reserve PEBs for filling the pool.

Fixes: dbb7d2a88d2a ("UBI: Add fastmap core")
Link: https://bugzilla.kernel.org/show_bug.cgi?id=217787
Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
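
To make the worst-case ratio above concrete, here is a small illustrative calculation (plain C, not UBI code) of t = N / POOL_SIZE / 64 for Device B, comparing the degraded case where the pool refills with only 1 PEB against the reserved pool that stays at 128 PEBs:

    #include <stdio.h>

    /* Illustrative arithmetic only, using the figures from the commit
     * message: Device B has N = 8192 PEBs and fm_pool.max_size/2 = 128. */
    int main(void)
    {
            int n = 8192;

            /* Nearly full, no reservation: the pool degrades to 1 PEB per
             * fastmap update, so the first 64 PEBs wear far faster. */
            printf("degraded pool: t = %d\n", n / 1 / 64);   /* 128 */

            /* With fm_pool_rsv_cnt PEBs reserved, the pool keeps its size. */
            printf("reserved pool: t = %d\n", n / 128 / 64); /*   1 */
            return 0;
    }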
build.c    diff 90e0be56 Mon Aug 28 00:38:43 MDT 2023 Zhihao Cheng <chengzhihao1@huawei.com> ubi: fastmap: Fix lapsed wear leveling for first 64 PEBs

wl.h    diff 90e0be56 Mon Aug 28 00:38:43 MDT 2023 Zhihao Cheng <chengzhihao1@huawei.com> ubi: fastmap: Fix lapsed wear leveling for first 64 PEBs

fastmap-wl.c    diff 90e0be56 Mon Aug 28 00:38:43 MDT 2023 Zhihao Cheng <chengzhihao1@huawei.com> ubi: fastmap: Fix lapsed wear leveling for first 64 PEBs

/linux-master/drivers/net/ethernet/sfc/
ethtool.c    diff d9317aea Thu Jan 23 07:35:48 MST 2014 Ben Hutchings <bhutchings@solarflare.com> sfc: Use the correct maximum TX DMA ring size for SFC9100

As part of a workaround for a hardware erratum in the SFC9100 family
(SF bug 35388), the TX_DESC_UPD_DWORD register address is also used
for communicating with the event block, and only descriptor pointer
values < 2048 are valid.

If the TX DMA ring size is increased to 4096 descriptors (which the
firmware still allows) then we may write a descriptor pointer
value >= 2048, which has entirely different and undesirable effects!

Limit the TX DMA ring size correctly when this workaround is in
effect.

Fixes: 8127d661e77f ('sfc: Add support for Solarflare SFC9100 family')
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
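
The fix described above amounts to clamping the maximum TX ring size whenever the erratum workaround is active. A hypothetical sketch (names invented, not the sfc driver code):

    #include <stdbool.h>

    /* Illustrative only: with the erratum workaround, descriptor pointer
     * values must stay below 2048, so cap the TX ring size instead of
     * allowing the full 4096 entries the firmware would accept. */
    #define EXAMPLE_TXQ_MAX_ENTRIES  4096u
    #define EXAMPLE_TXQ_ERRATUM_MAX  2048u

    static unsigned int example_max_tx_ring_size(bool erratum_workaround)
    {
            return erratum_workaround ? EXAMPLE_TXQ_ERRATUM_MAX
                                      : EXAMPLE_TXQ_MAX_ENTRIES;
    }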
efx.h    diff d9317aea Thu Jan 23 07:35:48 MST 2014 Ben Hutchings <bhutchings@solarflare.com> sfc: Use the correct maximum TX DMA ring size for SFC9100

/linux-master/tools/testing/selftests/net/forwarding/
mirror_vlan.sh    35388a6a Thu May 24 08:27:48 MDT 2018 Petr Machata <petrm@mellanox.com> selftests: forwarding: Test mirror-to-vlan

Test for "tc action mirred egress mirror" that mirrors to a vlan device.
- test_vlan() tests that the packets get mirrored
- test_tagged_vlan() tests that the mirrored packets have correct inner
VLAN tag.

Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Completed in 505 milliseconds