Clevo Notebook (NL5xNU)
Mobo: Clevo Notebook model: NL5xNU serial: N/A UEFI: INSYDE v: 1.07.17-NC
date: 10/26/2022
lotuspsychje@r00tb0x:~$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 7 5825U with Radeon Graphics
CPU family: 25
Model: 80
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 0
CPU(s) scaling MHz: 33%
CPU max MHz: 4546.0000
CPU min MHz: 400.0000
BogoMIPS: 3992.35
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
  pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb
  rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid
  aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic
  movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic
  cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce
  topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3
  hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2
  erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt
  xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local
  clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock
  nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter
  pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes
  vpclmulqdq rdpid overflow_recov succor smca fsrm
Virtualization features:
Virtualization: AMD-V
Caches (sum of all):
L1d: 256 KiB (8 instances)
L1i: 256 KiB (8 instances)
L2: 4 MiB (8 instances)
L3: 16 MiB (1 instance)
NUMA:
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerabilities:
Gather data sampling: Not affected
Itlb multihit: Not affected
L1tf: Not affected
Mds: Not affected
Meltdown: Not affected
Mmio stale data: Not affected
Retbleed: Not affected
Spec rstack overflow: Mitigation; safe RET, no microcode
Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer
  sanitization
Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP
  always-on, RSB filling, PBRSB-eIBRS Not affected
Srbds: Not affected
Tsx async abort: Not affected
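
The Flags line lists the ISA extensions the kernel reports for this CPU (AVX2, SHA-NI, VAES, and so on). As a minimal sketch, assuming GCC or Clang on x86 (which provide the `__builtin_cpu_supports` builtin), a program can probe a few of these at runtime:

```c
#include <stdio.h>

int main(void)
{
    /* Initialize the feature-detection state used by
     * __builtin_cpu_supports (GCC/Clang x86 builtin). */
    __builtin_cpu_init();

    /* Probe a few of the flags reported by lscpu above. */
    printf("avx2: %d\n", __builtin_cpu_supports("avx2"));
    printf("bmi2: %d\n", __builtin_cpu_supports("bmi2"));
    printf("fma:  %d\n", __builtin_cpu_supports("fma"));
    printf("aes:  %d\n", __builtin_cpu_supports("aes"));
    return 0;
}
```

On this Ryzen 7 5825U all four should print 1, matching the lscpu flags above.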
lotuspsychje@r00tb0x:~$ tinymembench
tinymembench v0.4.9 (simple benchmark for memory throughput and latency)
==========================================================================
== Memory bandwidth tests ==
== ==
== Note 1: 1MB = 1000000 bytes ==
== Note 2: Results for 'copy' tests show how many bytes can be ==
== copied per second (adding together read and writen ==
== bytes would have provided twice higher numbers) ==
== Note 3: 2-pass copy means that we are using a small temporary buffer ==
== to first fetch data into it, and only then write it to the ==
== destination (source -> L1 cache, L1 cache -> destination) ==
== Note 4: If sample standard deviation exceeds 0.1%, it is shown in ==
== brackets ==
==========================================================================
C copy backwards : 8290.8 MB/s (2.7%)
C copy backwards (32 byte blocks) : 7816.5 MB/s (1.2%)
C copy backwards (64 byte blocks) : 8244.5 MB/s (3.0%)
C copy : 8516.3 MB/s (0.5%)
C copy prefetched (32 bytes step) : 8642.8 MB/s (0.9%)
C copy prefetched (64 bytes step) : 8594.3 MB/s (0.7%)
C 2-pass copy : 7120.6 MB/s (1.6%)
C 2-pass copy prefetched (32 bytes step) : 8324.7 MB/s (3.4%)
C 2-pass copy prefetched (64 bytes step) : 8410.5 MB/s (3.3%)
C fill : 12443.3 MB/s (1.4%)
C fill (shuffle within 16 byte blocks) : 12449.5 MB/s (1.9%)
C fill (shuffle within 32 byte blocks) : 12436.1 MB/s (1.1%)
C fill (shuffle within 64 byte blocks) : 11893.5 MB/s (3.7%)
---
standard memcpy : 14387.8 MB/s (7.1%)
standard memset : 27424.8 MB/s (5.9%)
---
MOVSB copy : 15090.9 MB/s (5.2%)
MOVSD copy : 14354.2 MB/s (6.8%)
SSE2 copy : 9452.7 MB/s (3.2%)
SSE2 nontemporal copy : 13618.4 MB/s (6.1%)
SSE2 copy prefetched (32 bytes step) : 10516.0 MB/s (3.8%)
SSE2 copy prefetched (64 bytes step) : 9514.4 MB/s (2.0%)
SSE2 nontemporal copy prefetched (32 bytes step) : 13920.3 MB/s (1.7%)
SSE2 nontemporal copy prefetched (64 bytes step) : 13864.4 MB/s (4.5%)
SSE2 2-pass copy : 8003.2 MB/s (7.9%)
SSE2 2-pass copy prefetched (32 bytes step) : 9139.5 MB/s (5.0%)
SSE2 2-pass copy prefetched (64 bytes step) : 7797.9 MB/s (4.8%)
SSE2 2-pass nontemporal copy : 3736.0 MB/s (3.4%)
SSE2 fill : 13542.9 MB/s (5.2%)
SSE2 nontemporal fill : 27196.9 MB/s (1.8%)
==========================================================================
== Memory latency test ==
== ==
== Average time is measured for random memory accesses in the buffers ==
== of different sizes. The larger is the buffer, the more significant ==
== are relative contributions of TLB, L1/L2 cache misses and SDRAM ==
== accesses. For extremely large buffer sizes we are expecting to see ==
== page table walk with several requests to SDRAM for almost every ==
== memory access (though 64MiB is not nearly large enough to experience ==
== this effect to its fullest). ==
== ==
== Note 1: All the numbers are representing extra time, which needs to ==
== be added to L1 cache latency. The cycle timings for L1 cache ==
== latency can be usually found in the processor documentation. ==
== Note 2: Dual random read means that we are simultaneously performing ==
== two independent memory accesses at a time. In the case if ==
== the memory subsystem can't handle multiple outstanding ==
== requests, dual random read has the same timings as two ==
== single reads performed one after another. ==
==========================================================================
block size : single random read / dual random read, [MADV_NOHUGEPAGE]
1024 : 0.0 ns / 0.0 ns
2048 : 0.0 ns / 0.0 ns
4096 : 0.0 ns / 0.0 ns
8192 : 0.0 ns / 0.0 ns
16384 : 0.0 ns / 0.0 ns
32768 : 0.0 ns / 0.0 ns
65536 : 0.9 ns / 1.3 ns
131072 : 1.4 ns / 1.7 ns
262144 : 1.6 ns / 1.8 ns
524288 : 3.6 ns / 4.6 ns
1048576 : 7.1 ns / 9.2 ns
2097152 : 9.6 ns / 11.0 ns
4194304 : 10.4 ns / 11.7 ns
8388608 : 10.8 ns / 12.0 ns
16777216 : 26.6 ns / 37.9 ns
33554432 : 64.3 ns / 96.8 ns
67108864 : 91.0 ns / 115.0 ns
block size : single random read / dual random read, [MADV_HUGEPAGE]
1024 : 0.0 ns / 0.0 ns
2048 : 0.0 ns / 0.0 ns
4096 : 0.0 ns / 0.0 ns
8192 : 0.0 ns / 0.0 ns
16384 : 0.0 ns / 0.0 ns
32768 : 0.0 ns / 0.0 ns
65536 : 0.9 ns / 1.3 ns
131072 : 1.3 ns / 1.6 ns
262144 : 1.6 ns / 1.8 ns
524288 : 1.8 ns / 1.9 ns
1048576 : 5.7 ns / 7.5 ns
2097152 : 7.5 ns / 9.1 ns
4194304 : 9.0 ns / 9.7 ns
8388608 : 8.8 ns / 9.8 ns
16777216 : 11.4 ns / 12.5 ns
33554432 : 56.1 ns / 81.0 ns
67108864 : 76.8 ns / 97.2 ns
lotuspsychje@r00tb0x:~$
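
Note 3 in the banner explains the 2-pass copy: data is first fetched into a small temporary buffer (source -> L1 cache) and only then written to the destination (L1 cache -> destination). A minimal sketch of that pattern, with an illustrative 8 KiB staging buffer (a hypothetical size, not tinymembench's actual choice):

```c
#include <string.h>

/* 2-pass copy per Note 3: stage through a buffer small enough to stay
 * resident in L1 cache. STAGE_SIZE is an illustrative value, not the
 * size tinymembench itself uses. */
#define STAGE_SIZE 8192

static void two_pass_copy(char *dst, const char *src, size_t n)
{
    char stage[STAGE_SIZE];
    while (n > 0) {
        size_t chunk = n < STAGE_SIZE ? n : STAGE_SIZE;
        memcpy(stage, src, chunk);  /* pass 1: source -> L1-resident buffer */
        memcpy(dst, stage, chunk);  /* pass 2: buffer -> destination */
        src += chunk;
        dst += chunk;
        n   -= chunk;
    }
}
```

The extra pass costs bandwidth on this machine: the C 2-pass copy measures 7120.6 MB/s against 8516.3 MB/s for the plain C copy.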
A second run on an aarch64 system, Kernel 4.9.140-tegra #1 SMP PREEMPT Wed Mar 13 00:32:22 PDT 2019 aarch64 GNU/Linux, under Xorg with no compositor active and no browser or other CPU hogs:
tinymembench v0.4.9 (simple benchmark for memory throughput and latency)
==========================================================================
== Memory bandwidth tests ==
== ==
== Note 1: 1MB = 1000000 bytes ==
== Note 2: Results for 'copy' tests show how many bytes can be ==
== copied per second (adding together read and writen ==
== bytes would have provided twice higher numbers) ==
== Note 3: 2-pass copy means that we are using a small temporary buffer ==
== to first fetch data into it, and only then write it to the ==
== destination (source -> L1 cache, L1 cache -> destination) ==
== Note 4: If sample standard deviation exceeds 0.1%, it is shown in ==
== brackets ==
==========================================================================
C copy backwards : 2949.7 MB/s (3.8%)
C copy backwards (32 byte blocks) : 3011.8 MB/s
C copy backwards (64 byte blocks) : 3029.2 MB/s
C copy : 3642.2 MB/s (4.1%)
C copy prefetched (32 bytes step) : 3824.4 MB/s (0.3%)
C copy prefetched (64 bytes step) : 3825.3 MB/s (0.4%)
C 2-pass copy : 2726.2 MB/s
C 2-pass copy prefetched (32 bytes step) : 2902.6 MB/s (2.5%)
C 2-pass copy prefetched (64 bytes step) : 2928.3 MB/s (0.3%)
C fill : 8541.0 MB/s (0.2%)
C fill (shuffle within 16 byte blocks) : 8518.5 MB/s (2.1%)
C fill (shuffle within 32 byte blocks) : 8537.1 MB/s (0.1%)
C fill (shuffle within 64 byte blocks) : 8528.7 MB/s (0.2%)
---
standard memcpy : 3558.8 MB/s
standard memset : 8520.2 MB/s
---
NEON LDP/STP copy : 3633.9 MB/s (4.2%)
NEON LDP/STP copy pldl2strm (32 bytes step) : 1451.0 MB/s (0.3%)
NEON LDP/STP copy pldl2strm (64 bytes step) : 1450.9 MB/s (0.5%)
NEON LDP/STP copy pldl1keep (32 bytes step) : 3882.5 MB/s (3.9%)
NEON LDP/STP copy pldl1keep (64 bytes step) : 3884.0 MB/s (0.4%)
NEON LD1/ST1 copy : 3630.8 MB/s (0.3%)
NEON STP fill : 8537.8 MB/s
NEON STNP fill : 8544.9 MB/s (1.7%)
ARM LDP/STP copy : 3635.8 MB/s (0.3%)
ARM STP fill : 8544.8 MB/s (0.1%)
ARM STNP fill : 8549.2 MB/s (1.0%)
==========================================================================
== Framebuffer read tests. ==
== ==
== Many ARM devices use a part of the system memory as the framebuffer, ==
== typically mapped as uncached but with write-combining enabled. ==
== Writes to such framebuffers are quite fast, but reads are much ==
== slower and very sensitive to the alignment and the selection of ==
== CPU instructions which are used for accessing memory. ==
== ==
== Many x86 systems allocate the framebuffer in the GPU memory, ==
== accessible for the CPU via a relatively slow PCI-E bus. Moreover, ==
== PCI-E is asymmetric and handles reads a lot worse than writes. ==
== ==
== If uncached framebuffer reads are reasonably fast (at least 100 MB/s ==
== or preferably >300 MB/s), then using the shadow framebuffer layer ==
== is not necessary in Xorg DDX drivers, resulting in a nice overall ==
== performance improvement. For example, the xf86-video-fbturbo DDX ==
== uses this trick. ==
==========================================================================
NEON LDP/STP copy (from framebuffer) : 766.0 MB/s
NEON LDP/STP 2-pass copy (from framebuffer) : 688.8 MB/s
NEON LD1/ST1 copy (from framebuffer) : 770.6 MB/s (0.1%)
NEON LD1/ST1 2-pass copy (from framebuffer) : 681.3 MB/s (0.3%)
ARM LDP/STP copy (from framebuffer) : 766.1 MB/s
ARM LDP/STP 2-pass copy (from framebuffer) : 689.1 MB/s
==========================================================================
== Memory latency test ==
== ==
== Average time is measured for random memory accesses in the buffers ==
== of different sizes. The larger is the buffer, the more significant ==
== are relative contributions of TLB, L1/L2 cache misses and SDRAM ==
== accesses. For extremely large buffer sizes we are expecting to see ==
== page table walk with several requests to SDRAM for almost every ==
== memory access (though 64MiB is not nearly large enough to experience ==
== this effect to its fullest). ==
== ==
== Note 1: All the numbers are representing extra time, which needs to ==
== be added to L1 cache latency. The cycle timings for L1 cache ==
== latency can be usually found in the processor documentation. ==
== Note 2: Dual random read means that we are simultaneously performing ==
== two independent memory accesses at a time. In the case if ==
== the memory subsystem can't handle multiple outstanding ==
== requests, dual random read has the same timings as two ==
== single reads performed one after another. ==
==========================================================================
block size : single random read / dual random read, [MADV_NOHUGEPAGE]
1024 : 0.0 ns / 0.1 ns
2048 : 0.0 ns / 0.1 ns
4096 : 0.0 ns / 0.1 ns
8192 : 0.0 ns / 0.1 ns
16384 : 0.1 ns / 0.1 ns
32768 : 1.7 ns / 2.9 ns
65536 : 6.4 ns / 9.5 ns
131072 : 9.6 ns / 12.3 ns
262144 : 13.7 ns / 17.0 ns
524288 : 15.8 ns / 19.7 ns
1048576 : 17.3 ns / 22.1 ns
2097152 : 42.1 ns / 64.2 ns
4194304 : 98.5 ns / 138.1 ns
8388608 : 143.9 ns / 186.3 ns
16777216 : 167.2 ns / 211.2 ns
33554432 : 180.1 ns / 227.1 ns
67108864 : 200.0 ns / 260.2 ns
block size : single random read / dual random read, [MADV_HUGEPAGE]
1024 : 0.0 ns / 0.0 ns
2048 : 0.0 ns / 0.0 ns
4096 : 0.0 ns / 0.0 ns
8192 : 0.0 ns / 0.0 ns
16384 : 0.0 ns / 0.0 ns
32768 : 0.0 ns / 0.0 ns
65536 : 6.4 ns / 9.4 ns
131072 : 9.5 ns / 12.2 ns
262144 : 11.2 ns / 13.1 ns
524288 : 12.1 ns / 13.5 ns
1048576 : 12.8 ns / 13.6 ns
2097152 : 27.0 ns / 33.0 ns
4194304 : 90.6 ns / 127.8 ns
8388608 : 123.9 ns / 153.8 ns
16777216 : 139.5 ns / 161.2 ns
33554432 : 147.2 ns / 163.6 ns
67108864 : 154.0 ns / 167.6 ns
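
Note 2 of the latency test defines dual random read: two independent dependent-load chains are walked at once, so a memory subsystem that can keep multiple requests in flight overlaps their latencies, while a single chain exposes the full latency of every access. A minimal pointer-chasing sketch of both patterns (the setup is omitted and hypothetical; the buffer must first be linked into a random permutation of pointers). The [MADV_NOHUGEPAGE]/[MADV_HUGEPAGE] labels refer to madvise() hints on the test buffer; huge pages cut TLB misses, which is why the huge-page tables show lower latency at large block sizes.

```c
#include <stdint.h>

/* Single random read: each load's address comes from the previous load,
 * so every cache/TLB miss is fully exposed. */
static void *chase_single(void **start, long steps)
{
    void **p = start;
    while (steps--)
        p = (void **)*p;
    return p;
}

/* Dual random read: two independent chains walked in lockstep. The two
 * loads per iteration have no data dependency on each other, so the
 * memory subsystem may overlap them. */
static void *chase_dual(void **a, void **b, long steps)
{
    void **p = a, **q = b;
    while (steps--) {
        p = (void **)*p;
        q = (void **)*q;
    }
    /* XOR the results so the compiler keeps both chains alive. */
    return (void *)((uintptr_t)p ^ (uintptr_t)q);
}
```

On the Ryzen system above, the dual random read at 64 MiB costs 115.0 ns rather than twice the 91.0 ns single-read figure, showing that the memory subsystem does overlap outstanding requests.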