With this new implementation, all communication between the kernel and the
client happens in a subprocess. When the client would like to send a message,
the parent emacs process generates the required plist and sends it to the
subprocess for encoding and sending to the kernel. When a message is received,
the subprocess decodes it and prints it to the pipe for the parent emacs
process to read.
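As a rough sketch of this arrangement (all names below, such as
`jupyter--ioloop-start` and `jupyter--ioloop-send`, are hypothetical and only
illustrate the shape of the communication, assuming the subprocess is another
emacs instance running the encoding/decoding code in batch mode):

```emacs-lisp
;; Parent side of the sketch: start the subprocess, send it plists to
;; encode, and read back the decoded plists it prints to its stdout.

(defun jupyter--ioloop-filter (_proc output)
  "Handle a decoded message plist printed by the subprocess."
  ;; A real implementation would accumulate OUTPUT until a complete
  ;; printed plist is available before calling `read'.
  (let ((msg (read output)))
    (message "received a %s message" (plist-get msg :msg_type))))

(defun jupyter--ioloop-start ()
  "Start the subprocess that encodes/decodes and sends/receives messages."
  (make-process
   :name "jupyter-ioloop"
   ;; The file loaded here is a placeholder for the code that talks to
   ;; the kernel's zmq sockets.
   :command '("emacs" "-Q" "--batch" "-l" "jupyter-ioloop.el")
   :filter #'jupyter--ioloop-filter))

(defun jupyter--ioloop-send (proc plist)
  "Send PLIST to PROC, which encodes it and sends it to the kernel."
  (process-send-string proc (format "%S\n" plist)))
```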
This implementation also introduces the use of futures to avoid having to wait
for subprocess output when sending a message to the kernel. Every
`jupyter-request-*` function now returns a primitive future object, which is
just a cons cell whose `car` is `:jupyter-future`. The `cdr` acts as a check
for whether the message ID of the sent request is available: if the `cdr` is
nil the ID is not available yet, and if it is non-nil it is the message ID.
The convenience function `jupyter-ensure-id` ensures that the message ID is
available and returns it.
The future acts as a stand-in for the message ID of the encoded request, which
will be retrieved from the subprocess once the message has been encoded and
sent to the kernel. The future object is meant to be passed to
`jupyter-add-receive-callback` and other related functions in the same way as
an actual message ID.
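As a sketch of what the future object looks like and how it might be resolved
and consumed (the helper names other than `jupyter-ensure-id` are hypothetical,
the body shown for `jupyter-ensure-id` is only a guess at its shape, and the
argument list of `jupyter-add-receive-callback` in the usage example is an
assumption):

```emacs-lisp
;; The future is just a cons cell: (:jupyter-future . MSG-ID-OR-NIL).
(defun jupyter--make-future ()
  "Return a future whose message ID is not yet available."
  (cons :jupyter-future nil))

(defun jupyter--resolve-future (future msg-id)
  "Record MSG-ID, reported back by the subprocess, in FUTURE."
  (setcdr future msg-id))

;; A possible shape for `jupyter-ensure-id'; the real function may differ.
(defun jupyter-ensure-id (future-or-id)
  "Return the message ID of FUTURE-OR-ID, waiting for it if necessary."
  (if (and (consp future-or-id) (eq (car future-or-id) :jupyter-future))
      (progn
        ;; Let the subprocess filter run until the ID has been recorded.
        (while (null (cdr future-or-id))
          (accept-process-output nil 0.01))
        (cdr future-or-id))
    future-or-id))
```

A hypothetical usage example, where `client` is assumed to be a connected
`jupyter-kernel-client` instance:

```emacs-lisp
;; Send an execute_request and register a callback for its reply.  The
;; future returned by `jupyter-request-execute' is passed where a message
;; ID would normally go; the callback machinery resolves it later.
(let ((future (jupyter-request-execute client :code "1 + 1")))
  (jupyter-add-receive-callback
   client 'execute-reply future
   (lambda (msg)
     (message "kernel replied: %S" (plist-get msg :content)))))
```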
This function returns a socket given a channel type and an endpoint. The
channel type is mapped to the required socket type using
`jupyter-channel-socket-types`. It is a convenience function that avoids having
to use `jupyter-channel-socket-types` directly.
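A minimal sketch of what such a function might look like, assuming
`jupyter-channel-socket-types` is a plist mapping channel types to zmq socket
types and that the zmq bindings provide `zmq-socket`, `zmq-connect`, and
`zmq-current-context` (the function name here is hypothetical):

```emacs-lisp
(defun jupyter--connect-channel (ctype endpoint)
  "Return a socket for channel type CTYPE connected to ENDPOINT."
  (let ((sock-type (plist-get jupyter-channel-socket-types ctype)))
    (unless sock-type
      (error "Unknown channel type: %s" ctype))
    (let ((sock (zmq-socket (zmq-current-context) sock-type)))
      (zmq-connect sock endpoint)
      sock)))
```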
I believe I was misunderstanding the use of `zmq-poll` on the file descriptors
in a subprocess. Polling the file descriptors does not seem to work.
Currently I am sticking to running periodic timers in emacs itself to process
messages. This may slow down emacs when connecting to many clients that are
sending and receiving lots of messages.
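Something along these lines (hypothetical names, arbitrary poll interval):

```emacs-lisp
(defun jupyter--process-messages (client)
  "Poll CLIENT's channels and dispatch any messages that have arrived."
  ;; Placeholder body; the real work would read from the channels and
  ;; call the appropriate handlers.
  (ignore client))

(defun jupyter--start-polling (client)
  "Process CLIENT's messages every 10 milliseconds on an emacs timer."
  (run-with-timer 0 0.01 #'jupyter--process-messages client))
```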
What I would like to move towards is having a subprocess which connects to the
required endpoints and listens for incoming messages. When a message arrives,
it decodes it and sends it back to the parent emacs process. When I would like
to send a message, I just send the raw plist to the subprocess and it encodes
and sends it through the socket. So the subprocess will take care of
encoding/decoding messages and sending/receiving them on the sockets, whereas
the parent emacs process will only send and receive plists.
The problem appears to be that I was not waiting until the kernel had finished
handshaking with the `jupyter console` application. Kernels are currently
started using the `jupyter console` command, which was causing messages not to
be received by `jupyter-kernel-client`.
The Jupyter v5.0 messaging protocol specifies that, for every request that is
handled, a `status: idle` message will be sent when the request is complete.
Callbacks are removed from the callback table when this idle message is
received.
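A sketch of that check, using hypothetical names and assuming the decoded
message is a plist with `:msg_type`, `:content`, and `:parent_header` fields:

```emacs-lisp
(defvar jupyter--callbacks (make-hash-table :test 'equal)
  "Map parent message IDs to the callbacks registered for them.")

(defun jupyter--maybe-remove-callbacks (msg)
  "Remove the callbacks for MSG's request once the kernel reports idle."
  (when (and (equal (plist-get msg :msg_type) "status")
             (equal (plist-get (plist-get msg :content) :execution_state)
                    "idle"))
    ;; The idle status message's parent header identifies the request
    ;; that has just completed.
    (remhash (plist-get (plist-get msg :parent_header) :msg_id)
             jupyter--callbacks)))
```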
I think it makes more sense to prefix with `request` instead of `send`, since
`jupyter-send-execute` seems ambiguous unless you say something like
`jupyter-send-execute-request`. But `jupyter-request-execute` has essentially
the same meaning as `jupyter-send-execute-request` without needing the extra
word. It also works better for functions like `jupyter-request-kernel-info` as
opposed to `jupyter-send-kernel-info`, which seems to imply that you are
sending the kernel info.
- All sent messages are prefixed with `jupyter-send-*`, so that an
  `execute_request` maps to `jupyter-send-execute`.
- All received message handlers are prefixed with `jupyter-handle-*`, so that
  an `execute_reply` message maps to `jupyter-handle-execute`. IOPub message
  handlers are likewise prefixed with `jupyter-handle-<iopub message type>`,
  so that a `stream` message maps to `jupyter-handle-stream` (see the sketch
  after this list).
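To make the convention concrete, here is a hypothetical helper (not part of the
actual code) that maps a received message type to its handler name:

```emacs-lisp
(defun jupyter--message-type-to-handler (msg-type)
  "Return the handler symbol for MSG-TYPE under the naming convention.
For example, \"execute_reply\" maps to `jupyter-handle-execute' and
\"stream\" maps to `jupyter-handle-stream'."
  (let ((base (replace-regexp-in-string "_\\(reply\\|request\\)\\'" "" msg-type)))
    (intern (concat "jupyter-handle-"
                    (replace-regexp-in-string "_" "-" base)))))

;; (jupyter--message-type-to-handler "execute_reply")     ;=> jupyter-handle-execute
;; (jupyter--message-type-to-handler "stream")            ;=> jupyter-handle-stream
;; (jupyter--message-type-to-handler "kernel_info_reply") ;=> jupyter-handle-kernel-info
```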