Saturday, July 17, 2010

Concurrently Beating a Dead Horse

Sometimes it seems like the horse is already dead, but today I came across a fun article describing Erlang's concurrency model.  This is no surprise, because Actors (along with STM) seem to be a favorite on the various programming sites.

So why continue the beatings?  I thought I'd solidify my thoughts by writing them out.  I also find exploring these principles beneficial for improving the design of my projects, even if the high-level implementations wouldn't fit within the architecture of some of them.


In any concurrency discussion I think it's good to remind ourselves of the pillars of concurrency to help avoid silver-bullet syndrome.  I highly recommend Herb Sutter's Effective Concurrency series, especially the introductory article where he talks about Callahan's Pillars.
(sorry for the text image, that is how the content was stored in the original article)

STM

First, two items of background.  The first is that I'm one of those odd people who enjoy the area where hardware and software meet.  The second is that my current job is in software for test and automation systems.  Having to warn customers about the possibility of death and dismemberment is not pleasant.

With those in mind, I tend to worry more about how a system, pattern, or architecture talks to the real world than about how parallel it can get.  This leads me to dislike STM as my default model of concurrency, because the world is not transactional.  If you have a program controlling a swinging arm of death, what does a revert mean?  One of Microsoft's researchers on STM.NET admits to this problem and many others (though again, STM does have its uses).
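To make the mismatch concrete, here is a toy optimistic-retry loop in the spirit of STM (my own illustration, not any real STM library's API).  The in-memory writes roll back cleanly on a conflict, but anything the transaction body did to the outside world has already happened, and happens again on the retry:

```python
import threading

class TVar:
    """A toy transactional variable: readers note a version, writers bump it."""
    def __init__(self, value):
        self.value = value
        self.version = 0
        self.lock = threading.Lock()

def atomically(transaction, tvar):
    """Optimistic retry loop: run the transaction body, commit only if the
    variable is unchanged, otherwise discard the result and RE-RUN the body."""
    while True:
        with tvar.lock:
            seen_version, seen_value = tvar.version, tvar.value
        new_value = transaction(seen_value)   # body may run more than once!
        with tvar.lock:
            if tvar.version == seen_version:  # nobody interfered: commit
                tvar.value = new_value
                tvar.version += 1
                return new_value
        # Conflict: the memory write is discarded, but any real-world side
        # effect inside `transaction` already happened -- and it happens
        # again on the retry.  There is no "revert" for the swinging arm.

arm_commands = []  # stand-in for an irreversible hardware channel

def move_arm(position):
    arm_commands.append(position)  # imagine this swings a physical arm
    return position + 1
```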

Dataflow

I enjoyed my experiences with digital design in college, but at the moment it is not the field for me.  Also, VHDL and Verilog leave much to be desired.

I find the idea of applying the natural concurrency of digital hardware to software fascinating.

In the industry I work in there is a popular dataflow programming language.  For high-level integration or large datasets, its implicit parallelism seems great.  I wonder how much parallelism it really gets with small datasets in simple applications.  The language has a couple of drawbacks for the projects I work on outside of work:
  • Single vendor (never a fan of lock-in)
  • Except for its clunky implementation of events, it does not accommodate other concurrency models that might suit a task better.
  • Not designed well for general purpose programming situations
  • Even though some applications seem fast to write, the environment feels like it hinders me.

Actors

Now on to Actors.  I've always been a fan of the simple concept of handling concurrency through shared-nothing message passing, as in Erlang.  I can easily conceptualize how this would work with hardware.  I enjoy the fact that each process is normally short-lived enough that the GC doesn't even need to be called.  These combine to give it soft real-time behavior, a field I've always found fascinating.
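As a sketch of the concept, with plain Python threads and queues standing in for Erlang processes and mailboxes (the `Actor` class and `counter` handler are my own illustrative names, not any library's API):

```python
import queue
import threading

class Actor:
    """A minimal shared-nothing actor: it owns its state and communicates
    only through an incoming mailbox.  A sketch of the Erlang-style model,
    not Erlang's actual primitives."""
    def __init__(self, handler, state):
        self._mailbox = queue.Queue()
        self._handler = handler      # (state, message) -> new state
        self._state = state          # private; never shared directly
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def send(self, message):
        self._mailbox.put(message)   # the only way in

    def stop(self):
        self._mailbox.put(None)
        self._thread.join()

    def _loop(self):
        while True:
            message = self._mailbox.get()
            if message is None:
                break
            self._state = self._handler(self._state, message)

# A counter actor; replies go out over a queue the sender provides,
# so even replies are message passing rather than shared memory.
def counter(state, message):
    kind, payload = message
    if kind == "add":
        return state + payload
    if kind == "get":
        payload.put(state)           # payload is a reply queue
    return state
```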

How does it handle various models of concurrency?  I like the idea of having access to an Actor's low-level primitives in Erlang to more naturally implement something else, like a finite state machine.  The low-level message passing is events, so you get those.  I don't think I'm doing enough embarrassingly parallel tasks to miss having dataflow.  I bet you could model a form of STM for groups of actors with the error system.

Sadly I've never really done anything in Erlang because:
  • At work, when I do something in my language of choice, I need to optimize for implementation time rather than reduced run time, and my investigation into Erlang suggests it would be poor at that.
  • My open source work usually centers around another component I need a means of talking to.  Python has a lot of bindings; with Erlang I feel like I'd have to roll my own.

Actors lose their appeal to me in other languages.  In Python, the boilerplate and the overhead to achieve shared-nothing seem like too much.  Maybe it's just FUD, but even in Scala it sounds unappealing.  This leads me to my last item.

Events

Events aren't as high-level a concurrency model as STM or Actors, but they can sometimes have a lower barrier to entry, especially when integrating with certain frameworks.  Previously, all of my open source applications have been glib based, and most were GTK based.  I found glib events a pleasure to work with.

Simple UI callbacks work well.  More complex sequences of callbacks, like those with DBus, could be simplified by abstracting them away, but I've tended instead to abstract the concept so I can do things like map/reduce DBus calls.
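The map/reduce idea can be sketched as a generic scatter-gather helper (my own illustrative code, not the actual DBus-specific implementation): fan out one async call per input, and only once every callback has fired, reduce the results and fire a single completion callback:

```python
import threading

def async_map_reduce(async_call, inputs, reduce_fn, on_done):
    """Fan out one async call per input; when every callback has fired,
    reduce the results and hand them to a single completion callback."""
    results = [None] * len(inputs)
    remaining = [len(inputs)]
    lock = threading.Lock()

    def make_callback(i):
        def callback(value):
            with lock:
                results[i] = value       # slot the result by input order
                remaining[0] -= 1
                done = remaining[0] == 0
            if done:
                on_done(reduce_fn(results))
        return callback

    for i, item in enumerate(inputs):
        async_call(item, make_callback(i))
```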

Registering idle and timeout events seems to be atomic, which makes life great for certain applications of parallelism.  What I've tended to do is keep a thread for a group of data (like my connection state for Google Voice).  I then push onto a queue a task I want the thread to apply to the data (make a GV call, get contacts, etc.) and include a callback that is registered as an idle callback for when the data is ready.  I have no management of locks to worry about.
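A sketch of that pattern, with a plain queue standing in for glib's idle machinery (`idle_add` and `run_pending_idles` here are my own stand-ins for what glib's main loop does, not glib's API):

```python
import queue
import threading

# Stand-in for the glib main loop: callbacks pushed here run "in the
# main loop" when the loop drains the queue.
idle_queue = queue.Queue()

def idle_add(callback, *args):
    idle_queue.put((callback, args))

def run_pending_idles():
    while True:
        try:
            callback, args = idle_queue.get_nowait()
        except queue.Empty:
            return
        callback(*args)

class DataWorker:
    """One thread owns one group of data (e.g. a connection's state).
    Tasks are functions applied to that data; each task's result comes
    back to the main loop as an idle callback -- no locks to manage,
    because only this thread ever touches the data."""
    def __init__(self, data):
        self._data = data
        self._tasks = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def push(self, task, on_result):
        self._tasks.put((task, on_result))

    def stop(self):
        self._tasks.put(None)
        self._thread.join()

    def _run(self):
        while True:
            item = self._tasks.get()
            if item is None:
                break
            task, on_result = item
            result = task(self._data)      # only this thread reads _data
            idle_add(on_result, result)    # hand the result to the main loop
```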

I've even been able to abstract away the callbacks to keep good spatial locality on the logic in the code.

An example from The One Ring, my Telepathy Connection Manager for Google Voice:
    @misc_utils.log_exception(_moduleLogger)
    def RequestStreams(self, contactId, streamTypes):
        """
        For org.freedesktop.Telepathy.Channel.Type.StreamedMedia

        @returns [(Stream ID, contact, stream type, stream state, stream direction, pending send flags)]
        """
        contact = self._conn.get_handle_by_id(telepathy.constants.HANDLE_TYPE_CONTACT, contactId)
        assert self.__contactHandle == contact, "%r != %r" % (self.__contactHandle, contact)

        le = gobject_utils.AsyncLinearExecution(self._conn.session.pool, self._call)
        le.start(contact)

        streamId = 0
        streamState = telepathy.constants.MEDIA_STREAM_STATE_CONNECTED
        streamDirection = telepathy.constants.MEDIA_STREAM_DIRECTION_BIDIRECTIONAL
        pendingSendFlags = telepathy.constants.MEDIA_STREAM_PENDING_REMOTE_SEND
        return [(streamId, contact, streamTypes[0], streamState, streamDirection, pendingSendFlags)]

    @misc_utils.log_exception(_moduleLogger)
    def _call(self, contact):
        contactNumber = contact.phoneNumber

        self.__calledNumber = contactNumber
        self.CallStateChanged(self.__contactHandle, telepathy.constants.CHANNEL_CALL_STATE_RINGING)

        try:
            result = yield (
                self._conn.session.backend.call,
                (contactNumber, ),
                {},
            )
        except Exception:
            _moduleLogger.exception("While placing call to %s" % (self.__calledNumber, ))
            return

        self._delayedClose.start(seconds=0)
        self.CallStateChanged(self.__contactHandle, telepathy.constants.CHANNEL_CALL_STATE_FORWARDED)

In the DBus callback RequestStreams (as in, it happens in the main loop) I create an AsyncLinearExecution object (eh, couldn't think of a better name) that runs the generator _call as an idle callback (again, it executes in the main loop).  The yield passes a function, args, and kwargs to the thread passed into AsyncLinearExecution's __init__, for it to run without blocking the main loop.  When the call finishes, it takes the result and passes it out of the yield.  I can even move an exception from the thread to be thrown at the yield.
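A stripped-down sketch of how such a generator-driven executor can work (my own reconstruction of the idea, not the actual AsyncLinearExecution; for simplicity, results are delivered straight from the worker thread rather than through a glib idle callback):

```python
import threading

class AsyncLinearExecution:
    """Drives a generator whose yields are (func, args, kwargs) tuples.
    Each yielded call runs on a worker thread; its result (or exception)
    is delivered back into the generator at the yield.  A sketch of the
    concept described above, minus the glib main-loop plumbing."""
    def __init__(self, generator_fn):
        self._generator_fn = generator_fn
        self._gen = None
        self._cancelled = False

    def start(self, *args):
        self._gen = self._generator_fn(*args)
        self._advance(lambda: next(self._gen))

    def cancel(self):
        self._cancelled = True   # results still in flight are ignored

    def _advance(self, step):
        if self._cancelled:
            self._gen.close()    # generator stops at its current yield
            return
        try:
            func, call_args, call_kwargs = step()
        except StopIteration:
            return               # generator finished normally
        def worker():
            try:
                result = func(*call_args, **call_kwargs)
            except Exception as exc:
                # deliver the thread's exception at the yield
                self._advance(lambda: self._gen.throw(exc))
            else:
                self._advance(lambda: self._gen.send(result))
        threading.Thread(target=worker, daemon=True).start()
```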


I maintain AsyncLinearExecution as an object so I can "cancel" it at any time.  My style of cancellation is for the thread's results to be ignored.  In the try/except this gets represented as a StopIteration exception.  This style of cancellation greatly simplifies my shutdown logic for a Telepathy Connection.

I've not read much about other event-based systems for Python, but if they don't have a main loop I scratch my head as to how they integrate with threading and other features.

In conclusion:
  • STM: Doesn't seem appropriate for any of my applications
  • Dataflow: Digital hardware is fun, but I'm not a fan of the main software implementation
  • Actors: Cool, but lack of integration with libraries limits my use of the languages designed around them
  • Raw Events: A lot of fun for my open source applications
I thought I'd also note that we have a fairly young wiki page that touches on threading for Python, and one more specifically for PyQt applications.  I've done some work on them and hope to have more information to add later as I continue porting my applications to Qt.
