Bug 171 - Cannot run PD on trace capture in PV
Summary: Cannot run PD on trace capture in PV
Status: RESOLVED DUPLICATE of bug 225
Alias: None
Product: PulseView
Classification: Unclassified
Component: Acquisition
Version: unreleased development snapshot
Hardware: All
OS: All
Importance: Normal blocker
Target Milestone: PulseView 0.3.0
Assignee: Nobody
 
Reported: 2013-10-15 05:38 CEST by Matt Ranostay
Modified: 2014-01-23 17:34 CET
CC: 2 users

Description Matt Ranostay 2013-10-15 05:38:33 CEST
Capturing a trace from a Saleae Logic16 and then running any protocol decoder either produces no decoded output or segfaults instantly.

I can only assume this is also the case with other logic analyzers.
Comment 1 Joel Holdsworth 2013-10-16 18:55:45 CEST
Doesn't occur with the demo device, though.
Comment 2 Matt Ranostay 2013-10-17 07:43:54 CEST
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fffd6388700 (LWP 1273)]
0x00007ffff26a1ed2 in vgetargs1.34886 () from /usr/lib/libpython3.2mu.so.1.0
(gdb) backtrace
#0  0x00007ffff26a1ed2 in vgetargs1.34886 () from /usr/lib/libpython3.2mu.so.1.0
#1  0x00007ffff26a3fe9 in PyArg_ParseTuple () from /usr/lib/libpython3.2mu.so.1.0
Python Exception <class 'gdb.error'> There is no member named ob_item.: 
#2  0x00007ffff60930bb in Decoder_put (self=<optimized out>, args=) at type_decoder.c:106
#3  0x00007ffff2594420 in PyEval_EvalFrameEx () from /usr/lib/libpython3.2mu.so.1.0
#4  0x00007ffff25940f4 in PyEval_EvalFrameEx () from /usr/lib/libpython3.2mu.so.1.0
#5  0x00007ffff25940f4 in PyEval_EvalFrameEx () from /usr/lib/libpython3.2mu.so.1.0
#6  0x00007ffff26c7c66 in PyEval_EvalCodeEx () from /usr/lib/libpython3.2mu.so.1.0
#7  0x00007ffff26c7f86 in function_call.13773 () from /usr/lib/libpython3.2mu.so.1.0
#8  0x00007ffff26402be in PyObject_Call () from /usr/lib/libpython3.2mu.so.1.0
#9  0x00007ffff25aa104 in method_call.8646 () from /usr/lib/libpython3.2mu.so.1.0
#10 0x00007ffff26402be in PyObject_Call () from /usr/lib/libpython3.2mu.so.1.0
#11 0x00007ffff25a8d0b in call_function_tail () from /usr/lib/libpython3.2mu.so.1.0
#12 0x00007ffff26a9cf1 in PyObject_CallMethod () from /usr/lib/libpython3.2mu.so.1.0
#13 0x00007ffff6090e18 in srd_inst_decode (start_samplenum=0, di=0x1443020, inbuf=0x7fffd6386d00 "", inbuflen=4096) at controller.c:836
#14 0x00007ffff60912e2 in srd_session_send (sess=<optimized out>, start_samplenum=0, inbuf=0x7fffd6386d00 "", inbuflen=4096)
    at controller.c:1085
#15 0x00000000004a1322 in pv::data::Decoder::decode_proc (this=0x1442ec0, data=...)
    at /home/mranostay/sigrok-sources/pulseview/pv/data/decoder.cpp:200
#16 0x00
Comment 3 Uwe Hermann 2013-10-18 02:47:16 CEST
OK, this one is interesting.

I don't seem to be able to reproduce this with an fx2lafw device, or with the ChronoVu LA8. However, some other LAs do seem to have issues. I kinda doubt this is related to libsigrok or the hardware drivers; it's more likely some mutex issue in PulseView, or possibly some issue in libsigrokdecode that the PulseView use-case is exposing.

I did the following for a few LAs: (1) Capture some data on the LA in PulseView (no probes attached to anything, just all-zero input as a result), (2) add a PD in PulseView (doesn't seem to matter which one).

The crash generally looks like this when PulseView is run with -l 5:

[...]
srd: Created session 1.
srd: Creating new uart instance.
srd: got option 'parity_type' = 'none'
srd: got option 'baudrate' = int64 115200
srd: got option 'bit_order' = 'lsb-first'
srd: got option 'parity_check' = 'yes'
srd: got option 'format' = 'ascii'
srd: got option 'num_stop_bits' = '1'
srd: got option 'num_data_bits' = int64 8
srd: set probes called for instance uart with list of 2 probes
srd: Setting probe mapping: tx (index 1) = probe 1.
srd: Setting probe mapping: rx (index 0) = probe 0.
srd: Final probe map:
srd:  - index 0 = probe 0 (required)
srd:  - index 1 = probe 1 (required)
srd: Calling start() on all instances in session 1 with 2 probes, unitsize 2, samplerate 10000000.
srd: Calling start() method on protocol decoder instance uart.
srd: Instance uart creating new output type 1 for uart.
srd: Instance uart creating new output type 0 for uart.
srd: Registering new callback for output type 0.
Segmentation fault (core dumped)


See below for gdb output of some devices:


On the Logic16:

Program terminated with signal 11, Segmentation fault.
#0  0x00007f8af5e29267 in __dcigettext (domainname=0x7f8af5f60b33 "libc", msgid1=msgid1@entry=0x7f8af5f64a20 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", msgid2=msgid2@entry=0x0, plural=plural@entry=0, n=n@entry=0,
    category=category@entry=5) at dcigettext.c:459
459     dcigettext.c: No such file or directory.
(gdb) bt
#0  0x00007f8af5e29267 in __dcigettext (domainname=0x7f8af5f60b33 "libc", msgid1=msgid1@entry=0x7f8af5f64a20 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", msgid2=msgid2@entry=0x0, plural=plural@entry=0, n=n@entry=0,
    category=category@entry=5) at dcigettext.c:459
#1  0x00007f8af5e27d5f in __GI___dcgettext (domainname=<optimized out>, msgid=msgid@entry=0x7f8af5f64a20 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", category=category@entry=5) at dcgettext.c:52
#2  0x00007f8af5e273be in __GI___assert_fail (assertion=0x4f08f6 "!pthread_mutex_unlock(&m)", file=0x4f08a8 "/usr/include/boost/thread/pthread/recursive_mutex.hpp", line=115, function=
    0x4f0ae0 "void boost::recursive_mutex::unlock()") at assert.c:101
#3  0x00000000004ae739 in boost::recursive_mutex::unlock() ()
#4  0x00000000004ae783 in boost::lock_guard<boost::recursive_mutex>::~lock_guard() ()
#5  0x00000000004b36cf in pv::data::LogicSnapshot::get_samples(unsigned char*, long, long) const ()
#6  0x00000000004af896 in pv::data::Decoder::decode_proc(boost::shared_ptr<pv::data::Logic>) ()
#7  0x0000000000000000 in ?? ()
(gdb)


On the ZEROPLUS LAP-C(16032):

Program terminated with signal 11, Segmentation fault.
#0  __memcpy_ssse3 () at ../sysdeps/x86_64/multiarch/memcpy-ssse3.S:204
204     ../sysdeps/x86_64/multiarch/memcpy-ssse3.S: No such file or directory.
(gdb) bt
#0  __memcpy_ssse3 () at ../sysdeps/x86_64/multiarch/memcpy-ssse3.S:204
#1  0x00000000004b36c3 in pv::data::LogicSnapshot::get_samples(unsigned char*, long, long) const ()
#2  0x00000000004af896 in pv::data::Decoder::decode_proc(boost::shared_ptr<pv::data::Logic>) ()
#3  0x0000000000000000 in ?? ()
(gdb)


On the Ikalogic ScanaPLUS:

Program terminated with signal 11, Segmentation fault.
#0  0x00007fa552813267 in __dcigettext (domainname=0x7fa55294ab33 "libc", msgid1=msgid1@entry=0x7fa55294ea20 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", msgid2=msgid2@entry=0x0, plural=plural@entry=0, n=n@entry=0,
    category=category@entry=5) at dcigettext.c:459
459     dcigettext.c: No such file or directory.
(gdb) bt
#0  0x00007fa552813267 in __dcigettext (domainname=0x7fa55294ab33 "libc", msgid1=msgid1@entry=0x7fa55294ea20 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", msgid2=msgid2@entry=0x0, plural=plural@entry=0, n=n@entry=0,
    category=category@entry=5) at dcigettext.c:459
#1  0x00007fa552811d5f in __GI___dcgettext (domainname=<optimized out>, msgid=msgid@entry=0x7fa55294ea20 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", category=category@entry=5) at dcgettext.c:52
#2  0x00007fa5528113be in __GI___assert_fail (assertion=0x4f08f6 "!pthread_mutex_unlock(&m)", file=0x4f08a8 "/usr/include/boost/thread/pthread/recursive_mutex.hpp", line=115, function=
    0x4f0ae0 "void boost::recursive_mutex::unlock()") at assert.c:101
#3  0x00000000004ae739 in boost::recursive_mutex::unlock() ()
#4  0x00000000004ae783 in boost::lock_guard<boost::recursive_mutex>::~lock_guard() ()
#5  0x00000000004b36cf in pv::data::LogicSnapshot::get_samples(unsigned char*, long, long) const ()
#6  0x00000000004af896 in pv::data::Decoder::decode_proc(boost::shared_ptr<pv::data::Logic>) ()
#7  0x0000000000000000 in ?? ()
(gdb)
Comment 4 Uwe Hermann 2014-01-23 17:34:26 CET
This is indeed a duplicate of #225, and is thus fixed.

I've tested against a Sysclk LWLA1034, Zeroplus LAP-C(16032), and Ikalogic ScanaPlus.

*** This bug has been marked as a duplicate of bug 225 ***