Floating point? #3
That's correct, storing FP state in interrupt handlers is the main thing to be done in order to implement FP in Cortex-M[347]. It can be done, but it'll take some effort, and I currently do not have the resources to do it. If you are willing to contribute patches, I can consider them; also, if you know someone interested in sponsoring QEMU development, I can reconsider priorities.
Would you look favorably upon patches that do everything except the lazy portion? For example, store the state upfront only (in lazy or unconditional modes), or never store the state at all?
From what I understand of the QEMU maintainers' strict requirements, I doubt they will accept non-standard implementations, and I concur with this position.
OK. I won't bother then.
I plan to fix some other issues in QEMU, and possibly address the M4 FPU issues. Are you still interested in contributing to the project?
"name" is freed after visiting options, instead use the first NetClientState name. Adds a few assert() for clarifying and checking some impossible states. READ of size 1 at 0x602000000990 thread T0 #0 0x7f6b251c570c (/lib64/libasan.so.2+0x4770c) #1 0x5566dc380600 in qemu_find_net_clients_except net/net.c:824 #2 0x5566dc39bac7 in net_vhost_user_event net/vhost-user.c:193 #3 0x5566dbee862a in qemu_chr_be_event /home/elmarco/src/qemu/qemu-char.c:201 #4 0x5566dbef2890 in tcp_chr_disconnect /home/elmarco/src/qemu/qemu-char.c:2790 #5 0x5566dbef2d0b in tcp_chr_sync_read /home/elmarco/src/qemu/qemu-char.c:2835 #6 0x5566dbee8a99 in qemu_chr_fe_read_all /home/elmarco/src/qemu/qemu-char.c:295 #7 0x5566dc39b964 in net_vhost_user_watch net/vhost-user.c:180 #8 0x5566dc5a06c7 in qio_channel_fd_source_dispatch io/channel-watch.c:70 #9 0x7f6b1aa2ab87 in g_main_dispatch /home/elmarco/src/gnome/glib/glib/gmain.c:3154 #10 0x7f6b1aa2b9cb in g_main_context_dispatch /home/elmarco/src/gnome/glib/glib/gmain.c:3769 #11 0x5566dc475ed4 in glib_pollfds_poll /home/elmarco/src/qemu/main-loop.c:212 #12 0x5566dc476029 in os_host_main_loop_wait /home/elmarco/src/qemu/main-loop.c:257 #13 0x5566dc476165 in main_loop_wait /home/elmarco/src/qemu/main-loop.c:505 #14 0x5566dbf08d31 in main_loop /home/elmarco/src/qemu/vl.c:1932 #15 0x5566dbf16783 in main /home/elmarco/src/qemu/vl.c:4646 #16 0x7f6b180bb57f in __libc_start_main (/lib64/libc.so.6+0x2057f) #17 0x5566dbbf5348 in _start (/home/elmarco/src/qemu/x86_64-softmmu/qemu-system-x86_64+0x3f9348) 0x602000000990 is located 0 bytes inside of 5-byte region [0x602000000990,0x602000000995) freed by thread T0 here: #0 0x7f6b2521666a in __interceptor_free (/lib64/libasan.so.2+0x9866a) #1 0x7f6b1aa332a4 in g_free /home/elmarco/src/gnome/glib/glib/gmem.c:189 #2 0x5566dc5f416f in qapi_dealloc_type_str qapi/qapi-dealloc-visitor.c:134 #3 0x5566dc5f3268 in visit_type_str qapi/qapi-visit-core.c:196 #4 0x5566dc5ced58 in visit_type_Netdev_fields /home/elmarco/src/qemu/qapi-visit.c:5936 #5 0x5566dc5cef71 in visit_type_Netdev /home/elmarco/src/qemu/qapi-visit.c:5960 #6 0x5566dc381a8d in net_visit net/net.c:1049 #7 0x5566dc381c37 in net_client_init net/net.c:1076 #8 0x5566dc3839e2 in net_init_netdev net/net.c:1473 #9 0x5566dc63cc0a in qemu_opts_foreach util/qemu-option.c:1112 #10 0x5566dc383b36 in net_init_clients net/net.c:1499 #11 0x5566dbf15d86 in main /home/elmarco/src/qemu/vl.c:4397 #12 0x7f6b180bb57f in __libc_start_main (/lib64/libc.so.6+0x2057f) Signed-off-by: Marc-André Lureau <[email protected]> Reviewed-by: Michael S. Tsirkin <[email protected]> Signed-off-by: Michael S. Tsirkin <[email protected]>
Any news/deadline/beta regarding the M4 FPU support? I wish I could help, but I am not experienced enough in this.
Unfortunately, there is no clear date set for it.
Alright, I will stay tuned then :-)
For those interested, I repeat some details: the FPU code is already there; what is missing is the correct exception processing to do the lazy mode. Unfortunately, the exception processing code in QEMU is not properly implemented; the NVIC is piggybacked on top of a larger ARM interrupt controller (the GIC), and the implementation is far from appropriate. I plan to reimplement the NVIC from scratch, to remove the dependencies on the GIC, and with this the correct behaviour for all families will be considered, including the FPU lazy mode. But this is not easy, and I currently have other priorities.
I think for now you should just not do the lazy mode -- just unconditionally preserve all registers. It will have the correct outcome for almost all code (it just uses a few more bytes of IRQ stack, unless the code inspects its own stack in pretty deep ways) and can be enhanced to be more correct later.
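To make that suggestion concrete, here is a minimal sketch in plain C of what exception entry would do in the unconditional variant: whenever an FP context is active, always push the extended frame with S0-S15 and FPSCR instead of reserving space and deferring the save. This is only an illustration of the architectural behaviour, not actual QEMU code; `CpuState`, `push32()` and the field names are all made up.

```c
/*
 * Illustrative sketch only, not QEMU code.  Models the "unconditional"
 * alternative: on exception entry, always stack the FP context when it
 * is active (CONTROL.FPCA set), instead of deferring the save.
 */
#include <stdint.h>

typedef struct {
    uint32_t r[16];   /* core registers; r[13] is SP, r[14] LR, r[15] PC */
    uint32_t xpsr;
    uint32_t s[32];   /* FP registers S0..S31 */
    uint32_t fpscr;
    int fp_active;    /* CONTROL.FPCA: an FP context is active */
    uint8_t *mem;     /* flat guest memory backing the stack */
} CpuState;

static void push32(CpuState *cpu, uint32_t val)
{
    cpu->r[13] -= 4;
    uint32_t sp = cpu->r[13];
    cpu->mem[sp]     = val & 0xff;
    cpu->mem[sp + 1] = (val >> 8) & 0xff;
    cpu->mem[sp + 2] = (val >> 16) & 0xff;
    cpu->mem[sp + 3] = (val >> 24) & 0xff;
}

/* Exception entry with unconditional FP stacking (no lazy mode). */
static void exception_entry_push_frame(CpuState *cpu)
{
    if (cpu->fp_active) {
        /* Extended frame: reserved word, FPSCR, S15..S0 above the basic frame. */
        cpu->r[13] -= 4;          /* reserved word keeps 8-byte alignment */
        push32(cpu, cpu->fpscr);
        for (int i = 15; i >= 0; i--) {
            push32(cpu, cpu->s[i]);
        }
    }
    /* Basic frame: xPSR, PC, LR, R12, R3..R0. */
    push32(cpu, cpu->xpsr);
    push32(cpu, cpu->r[15]);
    push32(cpu, cpu->r[14]);
    push32(cpu, cpu->r[12]);
    for (int i = 3; i >= 0; i--) {
        push32(cpu, cpu->r[i]);
    }
}
```

A real implementation would also have to report the frame type in EXC_RETURN and respect stack alignment, but the point is that nothing has to be deferred to a later FP instruction.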
I don't know, it might be so, but I need to analyse the issue thoroughly.
Any news about the M4 FPU support?
Unless someone decides to sponsor QEMU development, I'm afraid it'll take a while to get the FP support.
It would be really nice if there were a workaround -- output a sternly worded warning and save all the FP registers. Almost all existing code would then work (the main consequence would be a performance penalty).
I think I can offer a bit of my time to do some programming.
Great! Did you take a look at the QEMU source code?
I'm a newbie. What can I do about this subject? I don't have any experience with either QEMU or MCUs.
Well, first of all, continue learning. Did you run the blinky tutorial to the end? Did you see the graphical LEDs blinking? Then take a look at the QEMU source code.
Hey everyone. I see floating point on Cortex-M3/M4 isn't supported. Apologies, because I don't really know the QEMU internals that well.
I know there are a number of things that need to be done infrastructure-wise to support floating point fully and correctly. Probably the most difficult is the need to delay storing the FP state in interrupt handlers until a floating-point instruction actually occurs, and it doesn't look like there's a good way to do that in QEMU.
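For context, on real silicon the lazy mechanism works roughly like this: exception entry only reserves room in the stack frame, records its address (FPCAR) and sets FPCCR.LSPACT; the actual save of S0-S15 and FPSCR happens when the first FP instruction executes while LSPACT is set. A rough sketch of that state machine follows, with hypothetical names and no claim of matching the real QEMU hooks.

```c
/*
 * Sketch of the lazy-stacking state machine described above; not QEMU
 * code, all names are hypothetical.  The awkward part for an emulator
 * is fp_instruction_check(): every FP instruction must be able to
 * trigger the deferred save.
 */
#include <stdint.h>
#include <string.h>

typedef struct {
    uint32_t s[16];   /* caller-saved FP registers S0..S15 */
    uint32_t fpscr;
    uint32_t fpcar;   /* address reserved for the deferred FP save */
    int lspact;       /* FPCCR.LSPACT: a lazy save is pending */
    int fp_active;    /* CONTROL.FPCA */
    uint32_t sp;
    uint8_t *mem;     /* flat guest memory backing the stack */
} CpuState;

/* On exception entry: reserve space only, defer the actual save. */
static void lazy_stack_on_exception_entry(CpuState *cpu)
{
    if (!cpu->fp_active) {
        return;
    }
    cpu->sp -= 18 * 4;        /* S0..S15 + FPSCR + reserved word */
    cpu->fpcar = cpu->sp;
    cpu->lspact = 1;          /* remember that the save is still owed */
}

/* Must run before every FP instruction is executed. */
static void fp_instruction_check(CpuState *cpu)
{
    if (!cpu->lspact) {
        return;
    }
    /* First FP instruction since entry: perform the deferred save now. */
    memcpy(cpu->mem + cpu->fpcar, cpu->s, sizeof(cpu->s));
    memcpy(cpu->mem + cpu->fpcar + 16 * 4, &cpu->fpscr, 4);
    cpu->lspact = 0;
}
```

The difficulty hinted at above is exactly the fp_instruction_check() hook: translated code would need a cheap way to notice a pending lazy save on every FP instruction.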
It seems we would also need to limit it to single precision.
But could an early approach just be to unconditionally store the floating-point state even in lazy mode? I understand it's not absolutely correct and is bad for performance, but pretty much all existing code would work. I am currently working on TauLabs, an open-source flight controller for multicopters, planes, and other flying vehicles, and we'd really like to get automated integration tests together. However, on the STM32F3 and STM32F4 platforms we heavily use hardware floating point, and because of the upstream libraries we use, it's challenging to build with just soft-float.
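Independent of how the state gets saved, anyone testing FP firmware under the emulator also needs the guest to enable the FPU first, otherwise the very first VFP instruction faults. A minimal bare-metal fragment for Cortex-M4 parts (standard ARMv7-M CPACR programming, not tied to any particular QEMU patch) looks like this:

```c
/* Enable CP10/CP11 (the FPU) before executing any FP instruction.
 * CPACR lives at 0xE000ED88 on ARMv7-M; bits 20-23 grant CP10/CP11
 * full access. */
#include <stdint.h>

#define CPACR (*(volatile uint32_t *)0xE000ED88)

void fpu_enable(void)
{
    CPACR |= (0xFu << 20);      /* CP10 and CP11: full access */
    __asm__ volatile ("dsb");   /* make the write take effect */
    __asm__ volatile ("isb");   /* flush the pipeline */
}
```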