Remove samplerate from srd_decoder_logic_output_channel
This means that the samplerate for logic output channels is
implicitly determined by the input channel samplerate.
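For illustration, this is roughly what the struct looks like after the
change (a sketch only; the id/desc fields are assumed from the existing
channel structs, the only difference is that the samplerate field is
dropped):

    struct srd_decoder_logic_output_channel {
        char *id;   /* Channel identifier, e.g. "clk". */
        char *desc; /* Human-readable description. */
        /* No samplerate field: the logic output implicitly uses the
         * samplerate of the decoder's input channels. */
    };
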
The motivation for this is that hard-coding a samplerate isn't
feasible, but it's also not possible to determine the samplerate at
the time the logic output channels are initialized, since the
samplerate can be set at runtime.
From my point of view, we would need one of two mechanisms to
make this work:
1) Allow creation of logic outputs at runtime via some
registration callback
or
2) Allow changing a logic output's samplerate after it has been
created, again requiring some kind of callback
To me, both are currently overkill: assuming that
samplerate_in = samplerate_out not only makes this problem go away,
since the output samplerate can easily be handled on the client side
where samplerate_in is already known, it also makes handling of the
logic data in the PDs easier. A client-side sketch follows below.
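
As a minimal client-side sketch (a hypothetical helper, not library
API; it only relies on the existing srd_session_metadata_set() call
and the SRD_CONF_SAMPLERATE key):

    #include <libsigrokdecode/libsigrokdecode.h>

    /*
     * Hypothetical helper: the samplerate that applies to a decoder's
     * logic output is simply the samplerate already configured for
     * the input data, so a single value covers both.
     */
    static int set_session_samplerate(struct srd_session *sess,
            uint64_t input_samplerate)
    {
        /* samplerate_out == samplerate_in by definition. */
        return srd_session_metadata_set(sess, SRD_CONF_SAMPLERATE,
                g_variant_new_uint64(input_samplerate));
    }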