Message passing in computer science is a form of communication used in parallel computing, object-oriented programming, and interprocess communication. In this model, processes or objects send messages (comprising zero or more bytes, complex data structures, or even segments of code) to other processes and receive messages from them. By waiting for messages, processes can also synchronize.
Message passing is the paradigm of communication where messages are sent from a sender to one or more recipients. Forms of messages include (remote) method invocation, signals, and data packets. When designing a message passing system, several choices must be made:
- Whether messages are transferred reliably
- Whether messages are guaranteed to be delivered in order
- Whether messages are passed one-to-one (unicast), one-to-many (multicast or broadcast), or many-to-one (client–server).
- Whether communication is synchronous or asynchronous.
Implementations of concurrent systems that use message passing can have message passing either as an integral part of the language (Java RMI, DCOM, SOAP) or as a series of library calls from the language (as in Node.js).
Synchronous or asynchronous message passing
Synchronous message passing systems require the sender and receiver to wait for each other to transfer the message. That is, the sender will not continue until the receiver has received the message.
Synchronous communication has two advantages.
- The first advantage is that reasoning about the program is simplified, because message transfer provides a synchronization point between sender and receiver.
- The second advantage is that no buffering is required. The message can always be stored on the receiving side, because the sender will not continue until the receiver is ready.
Asynchronous message passing systems deliver a message from sender to receiver, without waiting for the receiver to be ready. The advantage of asynchronous communication is that the sender and receiver can overlap their computation because they do not wait for each other.
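This overlap can be sketched with Python's standard `queue` and `threading` modules; the names (`mailbox`, `sender`, `receiver`) are illustrative, not part of any particular system:

```python
import queue
import threading

# A minimal sketch of asynchronous message passing: the sender enqueues
# messages into a buffer and continues immediately, while the receiver
# drains the buffer on its own schedule.
mailbox = queue.Queue()  # unbounded buffer between sender and receiver

def sender():
    for i in range(3):
        mailbox.put(f"msg-{i}")  # returns at once; no wait for the receiver
    mailbox.put(None)            # sentinel: no more messages

def receiver(received):
    while True:
        msg = mailbox.get()
        if msg is None:
            break
        received.append(msg)

received = []
t_send = threading.Thread(target=sender)
t_recv = threading.Thread(target=receiver, args=(received,))
t_send.start(); t_recv.start()
t_send.join(); t_recv.join()
print(received)  # ['msg-0', 'msg-1', 'msg-2']
```

Because `put` never blocks on an unbounded queue, the sender finishes its loop regardless of how slowly the receiver runs.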
Synchronous communication can be built on top of asynchronous communication by having the sender always wait for an acknowledgement message from the receiver before continuing.
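The acknowledgement scheme can be sketched with two asynchronous queues; `sync_send`, `data`, and `ack` are illustrative names, not a standard API:

```python
import queue
import threading

# Sketch: a synchronous send built from two asynchronous queues. The sender
# transmits on `data` and then blocks on `ack` until the receiver confirms
# receipt, recovering the rendezvous semantics of synchronous passing.
data = queue.Queue()
ack = queue.Queue()
log = []

def sync_send(msg):
    data.put(msg)  # asynchronous send
    ack.get()      # block until the receiver acknowledges
    log.append(f"sent {msg}")

def receiver():
    msg = data.get()
    log.append(f"received {msg}")
    ack.put(True)  # acknowledge: unblocks the sender

t = threading.Thread(target=receiver)
t.start()
sync_send("hello")
t.join()
print(log)  # ['received hello', 'sent hello']
```

The ordering in `log` is guaranteed: the sender cannot record its send until the receiver has acknowledged, which happens only after receipt is recorded.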
The buffer required in asynchronous communication can cause problems when it is full. A decision has to be made whether to block the sender or whether to discard future messages. If the sender is blocked, it may lead to an unexpected deadlock. If messages are dropped, then communication is no longer reliable.
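The full-buffer dilemma can be seen directly with a bounded queue: the default blocking `put` risks stalling the sender, while a non-blocking send forces a drop policy. This is a sketch using Python's `queue.Queue`:

```python
import queue

# With a bounded buffer the sender must either block (put with the default
# block=True) or discard messages when the buffer is full (put_nowait
# raises queue.Full). Here we choose to drop rather than risk deadlock.
buf = queue.Queue(maxsize=2)
dropped = []

for i in range(5):
    try:
        buf.put_nowait(i)   # non-blocking send
    except queue.Full:
        dropped.append(i)   # policy choice: discard instead of blocking

print(buf.qsize(), dropped)  # 2 [2, 3, 4]
```

Only the first two messages fit; the rest are discarded, illustrating how a full buffer trades reliability for progress.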
Examples of message passing styles include:
- Event loop – web browser/web server, GUI interfaces (the message loop in Windows), “everything is a file” in Unix/Linux
- Message queue – message-oriented middleware, synchronous (HTTP), asynchronous (AJAX), event-notification systems, publish/subscribe
Message queue systems raise a number of design considerations:
- Durability – whether queued messages can merely be kept in memory or, if they must not be lost, must be stored on disk or, more expensive still, committed more reliably to a DBMS
- Security policies – which applications should have access to these messages?
- Message purging policies – queues or messages may have a time to live (TTL)
- Filtering policies – some systems support filtering data so that a subscriber sees only messages matching pre-specified criteria of interest
- Delivery policies – do we need to guarantee that a message is delivered at least once, or no more than once?
- Routing policies – in a system with many queue servers, what servers should receive a message or a queue’s messages?
- Batching policies – should messages be delivered immediately? Or should the system wait a bit and try to deliver many messages at once?
- When should a message be considered “enqueued”? When one queue has it? Or when it has been forwarded to at least one remote queue? Or to all queues?
- A publisher may need to know when some or all subscribers have received a message.