# A10 OLinuXino LIME
Note: these numbers were actually obtained on a Cubieboard1 simulating the LIME, with U-Boot patched to use the LIME DRAM settings (16-bit DRAM bus width). The display is blanked in order to eliminate the memory bandwidth drain caused by framebuffer scanout:
```
# echo 1 > /sys/devices/platform/disp/graphics/fb0/blank
# cpufreq-set -g performance
# a10-meminfo
dram_clk = 480
dram_type = 3
dram_rank_num = 1
dram_chip_density = 4096
dram_io_width = 16
dram_bus_width = 16
dram_cas = 9
dram_zq = 0x7b
dram_odt_en = 0
dram_tpr0 = 0x42d899b7
dram_tpr1 = 0xa090
dram_tpr2 = 0x22a00
dram_tpr3 = 0x0
dram_emr1 = 0x4
dram_emr2 = 0x10
dram_emr3 = 0x0
# cat /proc/cpuinfo
Processor : ARMv7 Processor rev 2 (v7l)
BogoMIPS : 1001.88
Features : swp half thumb fastmult vfp edsp neon vfpv3 tls
CPU implementer : 0x41
CPU architecture: 7
CPU variant : 0x3
CPU part : 0xc08
CPU revision : 2
Hardware : sun4i
Revision : 0000
Serial : 0000000000000000
```
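
The a10-meminfo dump above is a field-for-field readout of the DRAM parameter block that the sunxi U-Boot SPL uses to initialize the memory controller, so "u-boot patched to use lime dram settings" amounts to editing one initializer. A minimal sketch of that initializer is below; the struct and variable names (`dram_para_sketch`, `lime_dram`) are illustrative and only mirror the fields printed by a10-meminfo (the real u-boot-sunxi `struct dram_para` has additional members), so treat it as an illustration of the idea rather than a drop-in patch.

```c
/* Sketch: LIME-style DRAM settings expressed as a C initializer.
 * Mirrors only the fields that a10-meminfo printed above; the real
 * u-boot-sunxi struct dram_para has additional members. */
#include <stdint.h>

struct dram_para_sketch {
    uint32_t clock, type, rank_num, density;
    uint32_t io_width, bus_width, cas, zq, odt_en;
    uint32_t tpr0, tpr1, tpr2, tpr3;
    uint32_t emr1, emr2, emr3;
};

static const struct dram_para_sketch lime_dram = {
    .clock     = 480,          /* DRAM clock in MHz */
    .type      = 3,            /* DDR3 */
    .rank_num  = 1,
    .density   = 4096,         /* Mbit per chip */
    .io_width  = 16,
    .bus_width = 16,           /* the key LIME setting: 16-bit bus */
    .cas       = 9,
    .zq        = 0x7b,
    .odt_en    = 0,
    .tpr0      = 0x42d899b7,   /* board-specific timing parameters */
    .tpr1      = 0xa090,
    .tpr2      = 0x22a00,
    .tpr3      = 0x0,
    .emr1      = 0x4,          /* extended mode register values */
    .emr2      = 0x10,
    .emr3      = 0x0,
};
```
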
```
tinymembench v0.3.9 (simple benchmark for memory throughput and latency)
==========================================================================
== Memory bandwidth tests ==
== ==
== Note 1: 1MB = 1000000 bytes ==
== Note 2: Results for 'copy' tests show how many bytes can be ==
== copied per second (adding together read and writen ==
== bytes would have provided twice higher numbers) ==
== Note 3: 2-pass copy means that we are using a small temporary buffer ==
== to first fetch data into it, and only then write it to the ==
== destination (source -> L1 cache, L1 cache -> destination) ==
== Note 4: If sample standard deviation exceeds 0.1%, it is shown in ==
== brackets ==
==========================================================================
C copy backwards : 291.4 MB/s
C copy : 288.5 MB/s
C copy prefetched (32 bytes step) : 519.3 MB/s
C copy prefetched (64 bytes step) : 519.2 MB/s
C 2-pass copy : 265.5 MB/s
C 2-pass copy prefetched (32 bytes step) : 400.5 MB/s
C 2-pass copy prefetched (64 bytes step) : 400.5 MB/s
C fill : 1458.8 MB/s
---
standard memcpy : 479.8 MB/s
standard memset : 1458.6 MB/s
---
NEON read : 626.2 MB/s
NEON read prefetched (32 bytes step) : 634.1 MB/s
NEON read prefetched (64 bytes step) : 626.1 MB/s
NEON read 2 data streams : 625.6 MB/s
NEON read 2 data streams prefetched (32 bytes step) : 633.9 MB/s
NEON read 2 data streams prefetched (64 bytes step) : 626.0 MB/s
NEON copy : 574.1 MB/s
NEON copy prefetched (32 bytes step) : 573.3 MB/s
NEON copy prefetched (64 bytes step) : 566.2 MB/s
NEON unrolled copy : 539.8 MB/s
NEON unrolled copy prefetched (32 bytes step) : 565.9 MB/s
NEON unrolled copy prefetched (64 bytes step) : 570.0 MB/s
NEON copy backwards : 579.8 MB/s
NEON copy backwards prefetched (32 bytes step) : 580.3 MB/s
NEON copy backwards prefetched (64 bytes step) : 586.6 MB/s
NEON 2-pass copy : 428.3 MB/s
NEON 2-pass copy prefetched (32 bytes step) : 454.5 MB/s
NEON 2-pass copy prefetched (64 bytes step) : 455.6 MB/s
NEON unrolled 2-pass copy : 423.0 MB/s
NEON unrolled 2-pass copy prefetched (32 bytes step) : 441.3 MB/s
NEON unrolled 2-pass copy prefetched (64 bytes step) : 455.0 MB/s
NEON fill : 1463.0 MB/s (0.6%)
NEON fill backwards : 1463.6 MB/s (0.5%)
VFP copy : 453.2 MB/s
VFP 2-pass copy : 338.4 MB/s
ARM fill (STRD) : 1459.3 MB/s
ARM fill (STM with 8 registers) : 1461.3 MB/s (0.5%)
ARM fill (STM with 4 registers) : 1460.0 MB/s
ARM copy prefetched (incr pld) : 538.9 MB/s (0.6%)
ARM copy prefetched (wrap pld) : 518.6 MB/s (0.5%)
ARM 2-pass copy prefetched (incr pld) : 425.8 MB/s (0.7%)
ARM 2-pass copy prefetched (wrap pld) : 439.9 MB/s
==========================================================================
== Memory latency test ==
== ==
== Average time is measured for random memory accesses in the buffers ==
== of different sizes. The larger is the buffer, the more significant ==
== are relative contributions of TLB, L1/L2 cache misses and SDRAM ==
== accesses. For extremely large buffer sizes we are expecting to see ==
== page table walk with several requests to SDRAM for almost every ==
== memory access (though 64MiB is not nearly large enough to experience ==
== this effect to its fullest). ==
== ==
== Note 1: All the numbers are representing extra time, which needs to ==
== be added to L1 cache latency. The cycle timings for L1 cache ==
== latency can be usually found in the processor documentation. ==
== Note 2: Dual random read means that we are simultaneously performing ==
== two independent memory accesses at a time. In the case if ==
== the memory subsystem can't handle multiple outstanding ==
== requests, dual random read has the same timings as two ==
== single reads performed one after another. ==
==========================================================================
block size : single random read / dual random read
1024 : 0.0 ns / 0.0 ns
2048 : 0.0 ns / 0.0 ns
4096 : 0.0 ns / 0.0 ns
8192 : 0.0 ns / 0.0 ns
16384 : 0.0 ns / 0.0 ns
32768 : 0.0 ns / 0.0 ns
65536 : 5.0 ns / 9.7 ns
131072 : 7.5 ns / 14.8 ns
262144 : 31.0 ns / 58.9 ns
524288 : 117.7 ns / 235.5 ns
1048576 : 161.3 ns / 323.3 ns
2097152 : 183.6 ns / 367.9 ns
4194304 : 195.6 ns / 391.7 ns
8388608 : 203.7 ns / 406.9 ns
16777216 : 211.3 ns / 421.3 ns
33554432 : 222.4 ns / 443.0 ns
67108864 : 244.7 ns / 487.3 ns
```
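
The "2-pass copy" rows above follow the pattern described in Note 3 of the banner: the source is first fetched into a small temporary buffer that fits in the L1 cache, and only then written out to the destination. A minimal sketch of the idea follows; the helper name `copy_2pass` and the 8 KiB buffer size are illustrative, not tinymembench's actual implementation.

```c
/* Sketch of the "2-pass copy" pattern: fetch a chunk of the source
 * into a small L1-resident buffer, then write it to the destination
 * (source -> L1 cache, L1 cache -> destination). */
#include <string.h>
#include <stddef.h>

enum { TMPBUF_SIZE = 8192 };   /* small enough to stay resident in L1 */

void copy_2pass(char *dst, const char *src, size_t size)
{
    static char tmp[TMPBUF_SIZE];
    while (size > 0) {
        size_t chunk = size < TMPBUF_SIZE ? size : TMPBUF_SIZE;
        memcpy(tmp, src, chunk);   /* pass 1: source -> L1 cache */
        memcpy(dst, tmp, chunk);   /* pass 2: L1 cache -> destination */
        src += chunk;
        dst += chunk;
        size -= chunk;
    }
}
```

On this board the extra pass costs bandwidth (265.5 vs 288.5 MB/s for the plain C copy), but on some memory subsystems splitting the traffic this way can help.
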
The following run is from a different machine (kernel 4.9.140-tegra #1 SMP PREEMPT Wed Mar 13 00:32:22 PDT 2019, aarch64 GNU/Linux), under Xorg with no compositor active and no browser or other CPU hogs.
```
tinymembench v0.4.9 (simple benchmark for memory throughput and latency)
==========================================================================
== Memory bandwidth tests ==
== ==
== Note 1: 1MB = 1000000 bytes ==
== Note 2: Results for 'copy' tests show how many bytes can be ==
== copied per second (adding together read and writen ==
== bytes would have provided twice higher numbers) ==
== Note 3: 2-pass copy means that we are using a small temporary buffer ==
== to first fetch data into it, and only then write it to the ==
== destination (source -> L1 cache, L1 cache -> destination) ==
== Note 4: If sample standard deviation exceeds 0.1%, it is shown in ==
== brackets ==
==========================================================================
C copy backwards : 2949.7 MB/s (3.8%)
C copy backwards (32 byte blocks) : 3011.8 MB/s
C copy backwards (64 byte blocks) : 3029.2 MB/s
C copy : 3642.2 MB/s (4.1%)
C copy prefetched (32 bytes step) : 3824.4 MB/s (0.3%)
C copy prefetched (64 bytes step) : 3825.3 MB/s (0.4%)
C 2-pass copy : 2726.2 MB/s
C 2-pass copy prefetched (32 bytes step) : 2902.6 MB/s (2.5%)
C 2-pass copy prefetched (64 bytes step) : 2928.3 MB/s (0.3%)
C fill : 8541.0 MB/s (0.2%)
C fill (shuffle within 16 byte blocks) : 8518.5 MB/s (2.1%)
C fill (shuffle within 32 byte blocks) : 8537.1 MB/s (0.1%)
C fill (shuffle within 64 byte blocks) : 8528.7 MB/s (0.2%)
---
standard memcpy : 3558.8 MB/s
standard memset : 8520.2 MB/s
---
NEON LDP/STP copy : 3633.9 MB/s (4.2%)
NEON LDP/STP copy pldl2strm (32 bytes step) : 1451.0 MB/s (0.3%)
NEON LDP/STP copy pldl2strm (64 bytes step) : 1450.9 MB/s (0.5%)
NEON LDP/STP copy pldl1keep (32 bytes step) : 3882.5 MB/s (3.9%)
NEON LDP/STP copy pldl1keep (64 bytes step) : 3884.0 MB/s (0.4%)
NEON LD1/ST1 copy : 3630.8 MB/s (0.3%)
NEON STP fill : 8537.8 MB/s
NEON STNP fill : 8544.9 MB/s (1.7%)
ARM LDP/STP copy : 3635.8 MB/s (0.3%)
ARM STP fill : 8544.8 MB/s (0.1%)
ARM STNP fill : 8549.2 MB/s (1.0%)
==========================================================================
== Framebuffer read tests. ==
== ==
== Many ARM devices use a part of the system memory as the framebuffer, ==
== typically mapped as uncached but with write-combining enabled. ==
== Writes to such framebuffers are quite fast, but reads are much ==
== slower and very sensitive to the alignment and the selection of ==
== CPU instructions which are used for accessing memory. ==
== ==
== Many x86 systems allocate the framebuffer in the GPU memory, ==
== accessible for the CPU via a relatively slow PCI-E bus. Moreover, ==
== PCI-E is asymmetric and handles reads a lot worse than writes. ==
== ==
== If uncached framebuffer reads are reasonably fast (at least 100 MB/s ==
== or preferably >300 MB/s), then using the shadow framebuffer layer ==
== is not necessary in Xorg DDX drivers, resulting in a nice overall ==
== performance improvement. For example, the xf86-video-fbturbo DDX ==
== uses this trick. ==
==========================================================================
NEON LDP/STP copy (from framebuffer) : 766.0 MB/s
NEON LDP/STP 2-pass copy (from framebuffer) : 688.8 MB/s
NEON LD1/ST1 copy (from framebuffer) : 770.6 MB/s (0.1%)
NEON LD1/ST1 2-pass copy (from framebuffer) : 681.3 MB/s (0.3%)
ARM LDP/STP copy (from framebuffer) : 766.1 MB/s
ARM LDP/STP 2-pass copy (from framebuffer) : 689.1 MB/s
==========================================================================
== Memory latency test ==
== ==
== Average time is measured for random memory accesses in the buffers ==
== of different sizes. The larger is the buffer, the more significant ==
== are relative contributions of TLB, L1/L2 cache misses and SDRAM ==
== accesses. For extremely large buffer sizes we are expecting to see ==
== page table walk with several requests to SDRAM for almost every ==
== memory access (though 64MiB is not nearly large enough to experience ==
== this effect to its fullest). ==
== ==
== Note 1: All the numbers are representing extra time, which needs to ==
== be added to L1 cache latency. The cycle timings for L1 cache ==
== latency can be usually found in the processor documentation. ==
== Note 2: Dual random read means that we are simultaneously performing ==
== two independent memory accesses at a time. In the case if ==
== the memory subsystem can't handle multiple outstanding ==
== requests, dual random read has the same timings as two ==
== single reads performed one after another. ==
==========================================================================
block size : single random read / dual random read, [MADV_NOHUGEPAGE]
1024 : 0.0 ns / 0.1 ns
2048 : 0.0 ns / 0.1 ns
4096 : 0.0 ns / 0.1 ns
8192 : 0.0 ns / 0.1 ns
16384 : 0.1 ns / 0.1 ns
32768 : 1.7 ns / 2.9 ns
65536 : 6.4 ns / 9.5 ns
131072 : 9.6 ns / 12.3 ns
262144 : 13.7 ns / 17.0 ns
524288 : 15.8 ns / 19.7 ns
1048576 : 17.3 ns / 22.1 ns
2097152 : 42.1 ns / 64.2 ns
4194304 : 98.5 ns / 138.1 ns
8388608 : 143.9 ns / 186.3 ns
16777216 : 167.2 ns / 211.2 ns
33554432 : 180.1 ns / 227.1 ns
67108864 : 200.0 ns / 260.2 ns
block size : single random read / dual random read, [MADV_HUGEPAGE]
1024 : 0.0 ns / 0.0 ns
2048 : 0.0 ns / 0.0 ns
4096 : 0.0 ns / 0.0 ns
8192 : 0.0 ns / 0.0 ns
16384 : 0.0 ns / 0.0 ns
32768 : 0.0 ns / 0.0 ns
65536 : 6.4 ns / 9.4 ns
131072 : 9.5 ns / 12.2 ns
262144 : 11.2 ns / 13.1 ns
524288 : 12.1 ns / 13.5 ns
1048576 : 12.8 ns / 13.6 ns
2097152 : 27.0 ns / 33.0 ns
4194304 : 90.6 ns / 127.8 ns
8388608 : 123.9 ns / 153.8 ns
16777216 : 139.5 ns / 161.2 ns
33554432 : 147.2 ns / 163.6 ns
67108864 : 154.0 ns / 167.6 ns
```
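
The standard way to measure the dependent-load latency reported in the tables above is pointer chasing: the buffer is linked into one random cycle and walked, so every load's address comes from the previous load; "dual random read" walks two independent chains in the same loop, letting the memory subsystem overlap the misses. Below is a minimal sketch of the single-chain measurement; the buffer size, iteration count, and all names are illustrative, and tinymembench's own implementation differs in the details.

```c
/* Sketch of a dependent-load ("pointer chasing") latency measurement:
 * link a buffer into one random cycle, then walk it so that every
 * load depends on the result of the previous one. */
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    size_t n = (4 * 1024 * 1024) / sizeof(void *);  /* 4 MiB buffer */
    void **buf = malloc(n * sizeof(void *));
    size_t *order = malloc(n * sizeof(size_t));

    /* Shuffle the visit order, then link the buffer into one cycle. */
    for (size_t i = 0; i < n; i++)
        order[i] = i;
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = rand() % (i + 1);
        size_t t = order[i]; order[i] = order[j]; order[j] = t;
    }
    for (size_t i = 0; i + 1 < n; i++)
        buf[order[i]] = &buf[order[i + 1]];
    buf[order[n - 1]] = &buf[order[0]];

    /* Walk the chain; each load's address comes from the previous load. */
    struct timespec t0, t1;
    size_t iters = 10 * 1000 * 1000;
    void **p = buf;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < iters; i++)
        p = (void **)*p;
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9
              + (double)(t1.tv_nsec - t0.tv_nsec);
    /* Printing p keeps the chain live so the loop is not optimized out. */
    printf("avg read latency: %.1f ns (%p)\n", ns / iters, (void *)p);
    free(order);
    free(buf);
    return 0;
}
```
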