- After a buffer change, check whether a continuation prompt needs to be
inserted
- Before a buffer change, check whether a continuation prompt needs to be
removed
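In Emacs these two checks map naturally onto the buffer-change hooks; a
minimal sketch, where the two `jupyter-repl-maybe-*` helpers are hypothetical
stand-ins for the actual prompt handling:
```elisp
(defun jupyter-repl-after-change (beg end len)
  ;; Called after text is inserted or deleted; newly inserted lines may
  ;; need a continuation prompt placed in front of them.
  (jupyter-repl-maybe-insert-continuation-prompt beg end len))

(defun jupyter-repl-before-change (beg end)
  ;; Called before text is removed; any continuation prompt covering the
  ;; region about to change must be removed first.
  (jupyter-repl-maybe-remove-continuation-prompt beg end))

;; Buffer-local hooks in the REPL buffer.
(add-hook 'after-change-functions #'jupyter-repl-after-change nil t)
(add-hook 'before-change-functions #'jupyter-repl-before-change nil t)
```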
Instead of sending every received message to the parent Emacs process the
moment it is received, the messages are stored in a queue. Only when no
message has been received from the kernel for two polling periods, or when
the queue is full, are the messages sent to the parent process. Before being
sent, the messages are sorted by their timestamp and prioritized by channel.
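A sketch of the flush step under these rules, assuming each queued entry is a
plist with hypothetical `:time`, `:channel`, and `:msg` keys:
```elisp
(defvar jupyter--channel-priority '((:shell . 0) (:iopub . 1) (:stdin . 2))
  "Assumed relative priorities of the channels.")

(defun jupyter--flush-queue (queue send-fn)
  "Sort QUEUE by timestamp, break ties by channel priority, then
pass each entry to SEND-FN in order."
  (mapc send-fn
        (sort queue
              (lambda (a b)
                (let ((ta (plist-get a :time))
                      (tb (plist-get b :time)))
                  (if (time-equal-p ta tb)
                      (< (cdr (assq (plist-get a :channel)
                                    jupyter--channel-priority))
                         (cdr (assq (plist-get b :channel)
                                    jupyter--channel-priority)))
                    (time-less-p ta tb)))))))
```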
`jupyter-request` encapsulates a request ID, request callbacks, and a flag
indicating whether the kernel has sent an idle message for the request.
`jupyter-callback` encapsulates a callback function and a flag recording
whether the callback has run at least once. The `callbacks` field of a
`jupyter-request` is an alist mapping reply types to `jupyter-callback`
objects.
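A sketch of the two structures as described; the slot names are assumptions:
```elisp
(require 'cl-lib)

(cl-defstruct jupyter-request
  id               ; message ID of the request sent to the kernel
  idle-received-p  ; non-nil once the kernel has sent an idle message
  callbacks)       ; alist of (MSG-TYPE . jupyter-callback)

(cl-defstruct jupyter-callback
  ran-p            ; non-nil once the callback has run at least once
  function)        ; function called with each matching reply message
```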
Whenever a message is sent to the kernel, a new `jupyter-request` object is
created and returned from one of the `jupyter-request-*` methods. This object
holds all the information required to track a message the kernel is handling.
When a message is received from the kernel, the client looks up the
`jupyter-request` object associated with the message in its `requests` hash
table, runs any callbacks registered for the message, and updates the flag
variables of the request and callback as necessary.
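A sketch of that receive path, assuming the structures above and an EIEIO
client object whose `requests` slot is a hash table keyed by parent message
ID:
```elisp
(defun jupyter--handle-received-message (client msg)
  "Dispatch MSG to the callbacks of the request it replies to."
  (let* ((parent-id (plist-get (plist-get msg :parent_header) :msg_id))
         (req (gethash parent-id (oref client requests))))
    (when req
      (let ((cb (cdr (assoc (plist-get msg :msg_type)
                            (jupyter-request-callbacks req)))))
        (when cb
          (funcall (jupyter-callback-function cb) msg)
          (setf (jupyter-callback-ran-p cb) t)))
      ;; A status message whose execution state is "idle" marks the
      ;; kernel as done handling this request.
      (when (and (equal (plist-get msg :msg_type) "status")
                 (equal (plist-get (plist-get msg :content)
                                   :execution_state)
                        "idle"))
        (setf (jupyter-request-idle-received-p req) t)))))
```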
The request is considered complete, and is removed from the `requests` hash
table, when an idle message has been received for the request and all of its
callbacks have run at least once. Note that this almost surely does not
handle all cases, since there may be situations where you would like a
callback to run multiple times even after an idle message has been received.
If `t` is passed as the `MSG-TYPE` argument of `jupyter-add-receive-callback`,
the associated callback runs for every message received in response to the
request.
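For example, to log every reply to a single execute request (the exact
signatures of `jupyter-request-execute` and `jupyter-add-receive-callback`
are assumptions here):
```elisp
(let ((req (jupyter-request-execute client :code "1 + 1")))
  (jupyter-add-receive-callback client t req
    (lambda (msg)
      (message "got %s" (plist-get msg :msg_type)))))
```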
With this new implementation, all communication between the kernel and the
client happens in a subprocess. When the client wants to send a message, the
parent Emacs process generates the required plist and sends it to the
subprocess, which encodes it and sends it to the kernel. When a message is
received, the subprocess decodes it and prints it to the pipe for the parent
Emacs process to read.
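A sketch of the parent side of this scheme, assuming the subprocess reads
printed plists from its stdin and prints each decoded message as one sexp per
line on its stdout (a real filter would also need to buffer partial lines):
```elisp
(defun jupyter--send-plist (proc plist)
  "Hand PLIST to the encoding subprocess PROC."
  (process-send-string proc (concat (prin1-to-string plist) "\n")))

(defun jupyter--subprocess-filter (proc output)
  "Read decoded message plists printed by the subprocess PROC."
  (dolist (line (split-string output "\n" t))
    ;; `:client' is an assumed process property linking the subprocess
    ;; back to its `jupyter-kernel-client'.
    (jupyter--handle-received-message
     (process-get proc :client) (read line))))
```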
This implementation also introduces futures to avoid waiting on subprocess
output when sending a message to the kernel. Every `jupyter-request-*`
function now returns a primitive future object: a cons cell whose `car` is
`:jupyter-future`. The `cdr` is nil until the request has been sent, at which
point it is set to the message ID of the request; checking the `cdr`
therefore tells you whether the message ID is available yet. The convenience
function `jupyter-ensure-id` waits until the message ID is available and
returns it.
The future acts as a stand-in for the message ID of the encoded request,
which is retrieved from the subprocess once the message has been encoded and
sent to the kernel. The future object is meant to be passed to
`jupyter-add-receive-callback` and other related functions in the same way as
an actual message ID.
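A sketch of the future scheme, with `accept-process-output` used to pump
subprocess output while waiting for the ID to arrive:
```elisp
(defun jupyter--make-future ()
  "Return an unresolved future, a cons cell tagged with `:jupyter-future'."
  (cons :jupyter-future nil))

(defun jupyter--resolve-future (future msg-id)
  "Store MSG-ID, reported back by the subprocess, in FUTURE."
  (setcdr future msg-id))

(defun jupyter-ensure-id (future)
  "Block until FUTURE holds a message ID, then return it."
  (while (null (cdr future))
    (accept-process-output nil 0.01))
  (cdr future))
```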
This function returns a socket based on a channel type and endpoint. The
channel type is mapped to the required socket type via
`jupyter-channel-socket-types`; the function exists as a convenience to avoid
using `jupyter-channel-socket-types` directly.
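A sketch of such a function, assuming the emacs-zmq API and that
`jupyter-channel-socket-types` is a plist from channel types (e.g. `:shell`)
to ZMQ socket types:
```elisp
(defun jupyter-connect-channel (ctype endpoint)
  "Return a ZMQ socket for channel CTYPE connected to ENDPOINT."
  (let ((sock (zmq-socket (zmq-current-context)
                          (plist-get jupyter-channel-socket-types ctype))))
    (zmq-connect sock endpoint)
    sock))
```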
I believe I was misunderstanding the use of `zmq-poll` on the file
descriptors in a subprocess. It appears that polling the file descriptors
does not work.
Currently I am sticking with periodic timers run in Emacs itself to process
messages. This may slow down Emacs when many clients are connected, each
sending and receiving many messages.
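A sketch of that polling setup, where `jupyter--process-pending-messages`
stands in for whatever drains a client's channels:
```elisp
(defvar jupyter--poll-interval 0.01
  "Seconds between polls of a client's channels (an assumed value).")

(defun jupyter--start-polling (client)
  "Return a repeating timer that processes CLIENT's pending messages."
  (run-with-timer 0 jupyter--poll-interval
                  #'jupyter--process-pending-messages client))
```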
What I would like to move towards is a subprocess which connects to the
required endpoints and listens for incoming messages. When a message arrives,
the subprocess decodes it and sends it back to the parent Emacs process. When
I would like to send a message, I just send the raw plist to the subprocess,
which encodes it and sends it through the socket. The subprocess thus takes
care of encoding/decoding messages and of sending/receiving on the sockets,
while the parent Emacs process only sends and receives plists.
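The subprocess side of that design might look like the following sketch,
where `jupyter--decode-message` is a hypothetical decoder and a single socket
stands in for the full set of channels (a real loop would multiplex them and
handle the multipart wire format):
```elisp
(while t
  ;; Block until a message arrives on the socket, decode it into a
  ;; plist, and print it to stdout for the parent process to `read'.
  (prin1 (jupyter--decode-message (zmq-recv sock)))
  (terpri))
```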
The problem appears to be that I was not waiting until the kernel had
finished handshaking with the `jupyter console` application. Kernels are
currently started using the `jupyter console` command, and sending messages
before this handshake completed caused them not to be received by
`jupyter-kernel-client`.