1% BEGIN LICENSE BLOCK
2% Version: CMPL 1.1
3%
4% The contents of this file are subject to the Cisco-style Mozilla Public
5% License Version 1.1 (the "License"); you may not use this file except
6% in compliance with the License.  You may obtain a copy of the License
7% at www.eclipse-clp.org/license.
8% 
9% Software distributed under the License is distributed on an "AS IS"
10% basis, WITHOUT WARRANTY OF ANY KIND, either express or implied.  See
11% the License for the specific language governing rights and limitations
12% under the License. 
13% 
14% The Original Code is  The ECLiPSe Constraint Logic Programming System. 
15% The Initial Developer of the Original Code is  Cisco Systems, Inc. 
16% Portions created by the Initial Developer are
17% Copyright (C) 2006 Cisco Systems, Inc.  All Rights Reserved.
18% 
19% Contributor(s): 
20% 
21% END LICENSE BLOCK
22\documentstyle[12pt]{ecrcreport}
23\def\eclipse{ECL$^i$PS$^e$\ }
24
25\reportref{KS-96-01}
26
27\date{August, 1996}
28
29\title{\center{
30Getting Started \\
31Building \\
32Distributed \eclipse Applications}}
33
34\author{Kees Schuerman \\
35Fuut 19, 5508 PV Veldhoven, The Netherlands}
36
37\abstract{The \eclipse constraint logic programming system has
38been enhanced with a message passing system that eases the construction 
39of distributed applications. This note introduces the message passing
40predicates and illustrates their usage in a few examples which can
41be a good starting point for building more advanced distributed 
42\eclipse applications.       
43}
44
45\begin{document}
46
47\maketitle	
48
49\input{psfig}
50
51\vspace{-0.8cm}
52
53\bibliographystyle{plain}
54
55
56%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
57
58\section{Introduction}
59\label{sec:intro}
60
Over the past couple of years a considerable effort has been put into
the parallelisation of the \eclipse constraint logic programming
system \cite{eclipse:iclp94}. It was decided not to limit parallel 
\eclipse to conventional shared memory machines, but to enlarge its 
scope by basing its design and implementation on message passing 
\cite{ecn_9404}. This makes parallel \eclipse available on 
both parallel machines and heterogeneous computer networks.
68
Parallel \eclipse hides its underlying message passing system from
the \eclipse application developer. The message passing capabilities
of the parallel \eclipse system can therefore not be used as a basis for 
building distributed applications, and distributed \eclipse applications 
have had to rely heavily on various low level socket predicates. Socket 
programming is, however, quite complex and error prone. This motivated us 
to make the message passing functionality of parallel \eclipse available 
to the \eclipse application developer in the form of a small set of simple 
predicates.     
78
79Section~\ref{sec:mps} presents a short overview of the \eclipse 
80message passing system. It is a high level abstraction of the 
81message passing facilities offered by the C-libraries AMSG, BMSG, 
82and NSRV \cite{bmsgref,amsgref,nsrvref}. Section~\ref{sec:examples}
83illustrates the utilisation of the \eclipse message passing
84predicates in a few examples which can
85be a good starting point for building more advanced distributed
\eclipse applications. Finally, in section~\ref{sec:future} we take 
a quick look at some issues that may be topics for future work.
88
89
90%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
91
92\section{\eclipse Message Passing System}
93\label{sec:mps}
94
95\eclipse message passing supports asynchronous non-blocking 
96point-to-point communication in heterogeneous environments. The 
97end-points of communication are ports, i.e. messages are sent to and 
98received from ports. Messages are Prolog terms. 
99
100Ports have a unique port identifier and are associated with or owned 
101by an \eclipse process. While any \eclipse process can send messages 
102to a port, only the owner of a port can receive messages from it. The 
103owner of a port is the \eclipse process that allocated the port. Ports 
104are allocated and deallocated dynamically. 
105
A port can be viewed as a uni-directional, reliable communication
channel that preserves message order. Ports may have an associated
application specific notification predicate that is invoked
on the arrival of a message when the port is empty.
110
The \eclipse message passing system incorporates a name service
that enables processes to associate names with their ports. Ports
can be {\it registered}, {\it looked up}, and {\it deregistered}.
Port owners register their ports with the name service under unique 
and agreed upon names. Port users look up a port to acquire its 
identifier, which is required for sending messages to the port.
117
118The name service is provided by a single process referred to as the
119{\it name server} or {\it nsrv} process. This process integrates the
120name service with an identifier generator which is used as a source
121for the unique port identifiers. This implies that only the processes
122that make use of the same name server can communicate with each other. 
123
124\begin{figure}[hbt]
125\centerline{\psfig{figure=architecture.eps,width=6.00in}}
126\center{
127\caption{\label{fig:architecture}
128         {Distributed \eclipse Architecture}}}
129\end{figure}
130
Figure~\ref{fig:architecture} shows a typical setting of a distributed
\eclipse application: several communicating \eclipse processes
on the Internet, supported by a single name server process, i.e. the
{\it nsrv} process. Note that \eclipse message passing is based on
the TCP/IP standard, which is likely to be the basis of your local area
network.
137
138The following predicates are available for building distributed \eclipse
139applications:
140
141\begin{center}
142\begin{tabular}{l}
143mps\_init(+Host) \\
144mps\_ping(+Host) \\
145mps\_port\_register(+Key, +Name, +Signature, +Port) \\
146mps\_port\_lookup(+Key, +Name, -Port) \\
147mps\_port\_deregister(+Key, +Name, +Signature) \\
148mps\_port\_allocate(-Port, +NotifPred) \\
149mps\_port\_deallocate(+Port) \\
150mps\_send(+Port, ?Term) \\
151mps\_receive(+Port, ?Term) \\
152mps\_exit \\
153\end{tabular}
154\end{center}
155
Initialisation for message passing and association with a name server
are achieved by the {\it mps\_init} predicate; it is the first step 
every process must take to become part of a distributed \eclipse 
application. The name server is identified by the name of the host on 
which it resides. The host name is a simple string, 
e.g. "tricky", "tricky.ecrc.de", or "141.1.3.150".
162
163With {\it mps\_ping} the name server can be pinged, i.e. it succeeds if 
164the name server is up and running. Normally, it is not used in an 
165application, but it may be useful for debugging purposes.
166
167The predicates {\it mps\_port\_register}, {\it mps\_port\_lookup}, and 
168{\it mps\_port\_deregister} are used for registering, looking up and
169deregistering ports with the name server, respectively. Deregistration
170is protected by a signature that is passed to the name server at
171registration time. To support multiple sessions of a distributed
172application sharing a single name server, the name server predicates
173have a session key parameter. The key, name and signature parameters are 
174strings. 
175
Ports are allocated and deallocated with the {\it mps\_port\_allocate} and
{\it mps\_port\_deallocate} predicates. As mentioned before, a port
may be associated with a notification predicate. A port's notification
predicate is called by the message passing system on message delivery
when the port is empty. A notification predicate has a single parameter, 
via which the message passing system passes the identifier of the 
(previously empty) port on which a message has arrived.
183
184Messages, i.e. Prolog terms, are sent to and received from ports
185by means of the predicates {\it mps\_send} and {\it mps\_receive}, 
186respectively. While the other predicates are mainly facilitators, the 
187send and receive predicates take care of the actual communication which is
188so essential for distributed applications.
189
Although not strictly necessary, processes should invoke {\it mps\_exit} 
just before they terminate. This disassociates them from the name server 
and from the other processes.
193
194
195%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
196
197\section{Constructing Distributed Applications}
198\label{sec:examples}
199
200\subsection{Name Server}
201
202To start with the construction of distributed \eclipse applications,
203try out the basic features. Start a name server on the host of your
204choice, e.g. workstation "tricky", and try to interact with it from 
205other hosts.
206
207The name server with its C-language API is documented in \cite{nsrvref}. 
We recommend that you do not use the advanced features, but always start 
a name server with the following command:
210
211\begin{verbatim}
212       tricky% nsrv -npds -nshm
213\end{verbatim}
214
Now you are ready for your first experiment, i.e. pinging the name server.
With a name server running on host "tricky", the \eclipse session given 
in figure~\ref{fig:pinging} illustrates the results of pinging host
"tricky", which runs a name server, and host "lucky", which does not.
220
221\begin{figure}[hbt]
222\begin{verbatim}
223       ?- mps_ping("tricky"). 
224       yes
225       ?- mps_ping("lucky"). 
226       no
227\end{verbatim}
228\center{
229\caption{\label{fig:pinging} {Pinging the Name Server}}}
230\end{figure} 
231
232We suggest you do this experiment at your site, because a functioning
233name server is essential for getting a distributed \eclipse application
234running on your computer network. Be patient when the mps\_ping predicate
235seems to hang, because it takes a while before mps\_ping decides to fail.
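
If the name server may not have been started yet when your processes come
up, a small wrapper around {\it mps\_ping} can wait for it. The following
is only a sketch, not part of the library, and assumes a {\it sleep/1}
delay predicate is available in your \eclipse release; any delay primitive
will do:

\begin{verbatim}
% Block until the name server on Host answers a ping, retrying with a
% short pause between attempts.
wait_for_nsrv(Host) :-
        repeat,
        ( mps_ping(Host) ->
            true
        ;
            sleep(5),        % assumed delay predicate
            fail
        ),
        !.
\end{verbatim}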
236
237
238\subsection{Initialisation and Termination}
239
All processes of a distributed application start by associating 
themselves with the name server by means of the mps\_init predicate.
Just before a process terminates, it invokes mps\_exit to disassociate
itself from the name server again.
244
245\begin{figure}[hbt]
246\begin{verbatim}
       ?- mps_init("tricky"). 
       yes
       ?- mps_port_allocate(Port,true/0).
       Port = 1234
       yes
       ?- mps_receive(1234,Term).
       no
       ?- mps_send(1234,"Hello World !").
       yes
       ?- mps_receive(1234,Term).
       Term = "Hello World !" 
       yes
       ?- mps_receive(1234,Term).
       no
       ?- mps_port_deallocate(1234).
       yes
       ?- mps_exit. 
       yes
265\end{verbatim}
266\center{
267\caption{\label{fig:local} {Local Communication}}}
268\end{figure} 
269
270Figure~\ref{fig:local} shows an \eclipse session demonstrating the most
271important message passing predicates, i.e. allocating/deallocating
272ports and sending/receiving messages. A message, i.e. the string 
273 "Hello World !", is sent to and received from a port. The first
274invocation of mps\_receive fails and illustrates its non-blocking
275property.
276
Here the send and the receive are done by the same process, but they could
just as well be done by different processes on different machines. If you
try this as an exercise, you will notice that you have to communicate the 
port identifier, 1234 in the example above, from the port's owner to the 
port's sender(s) yourself. As we will see shortly, this is not necessary if
we utilise the name service.
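
As a minimal sketch of this exercise (the host name "tricky", the port
identifier 1234, and the predicate names are just examples), the receiving
process can allocate a port and print its identifier, which you then pass
to the sending process by hand:

\begin{verbatim}
% Process A: allocate a port and print its identifier.
receiver :-
        mps_init("tricky"),
        mps_port_allocate(Port, true/0),
        writeln(Port).          % tell the sender this number

% Process B: send to the identifier printed by process A
% (replace 1234 by the value that was actually printed).
sender :-
        mps_init("tricky"),
        mps_send(1234, "Hello World !").
\end{verbatim}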
283
Figures~\ref{fig:server_init} and \ref{fig:client_init} show examples
of initialisation and termination predicates for a single-port
server and client, respectively. As in the previous example, ports
are allocated without an associated notification predicate. An
important difference is that the name server now comes into play.
289
The server allocates a port to which clients are supposed to send
requests. The server port is registered with the name server under 
an agreed and well known name; a time server in Munich might, for 
example, register its port under the name "MunichTime". Clients that 
know the name of a server port can contact the name service and look 
up the server port, more precisely the identifier of the server port, 
which is required for sending requests to it.
297
298\begin{figure}[hbt]
299\begin{verbatim}
300server_init(Host,Key,Name,Signature,RequestPort) :- 
301        mps_init(Host),
302        mps_port_allocate(RequestPort,true/0),
303        mps_port_register(Key,Name,Signature,RequestPort).
304
305server_exit(Key,Name,Signature,RequestPort) :-
306        mps_port_deregister(Key,Name,Signature),
307        mps_port_deallocate(RequestPort),
308        mps_exit.
309\end{verbatim}
310\center{
311\caption{\label{fig:server_init} {Server Initialisation and Termination}}}
312\end{figure} 
313
In general, servers do not register their ports until they are ready to 
receive requests. Since a port lookup fails if the port has not been 
registered yet, clients typically repeat the lookup until it succeeds. The
name server is thus not only used as a medium to pass server port
identifiers from servers to clients, but also as a means of synchronisation.
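
The retry loop can be made gentler by pausing between lookups. The
following sketch is not part of the library and assumes a {\it sleep/1}
delay predicate is available in your \eclipse release:

\begin{verbatim}
% Look up a registered port, pausing between attempts until the server
% has registered it under Name.
lookup_with_retry(Key, Name, Port) :-
        repeat,
        ( mps_port_lookup(Key, Name, Port) ->
            true
        ;
            sleep(1),        % assumed delay predicate
            fail
        ),
        !.
\end{verbatim}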
319
320\begin{figure}[hbt]
321\begin{verbatim}
322client_init(Host,Key,Name,RequestPort,ReplyPort) :- 
323        mps_init(Host),
324        mps_port_allocate(ReplyPort,true/0),
325        repeat,
326        mps_port_lookup(Key,Name,RequestPort),!.
327
328client_exit(Key,ReplyPort) :-
329        mps_port_deallocate(ReplyPort),
330        mps_exit.
331\end{verbatim}
332\center{
333\caption{\label{fig:client_init} {Client Initialisation and Termination}}}
334\end{figure} 
335
336Clients send a request to the server port and expect a reply from the
337server at their reply port. Reply ports are in general not registered
338since their identifiers can be communicated to the server on the back
339of the request. This is illustrated in the examples to follow.
340
341
342\subsection{Polling Servers and Clients}
343
344Figures~\ref{fig:server_polling} and \ref{fig:client_polling} show 
345templates of a polling server and client, respectively. The server is 
346polling its request port for requests and the client is polling its 
347reply port for replies. 
348
Polling consumes processor cycles and introduces some
complexity. The latter is especially apparent when there are many ports 
to poll and ports are allocated and deallocated dynamically. Polling
can, however, be avoided elegantly by exploiting the message arrival
notification mechanism.
354
355\begin{figure}
356\begin{verbatim}
357server_run(Host,Key,Name,Signature) :- 
358        server_init(Host,Key,Name,Signature,RequestPort),
359        server_loop(RequestPort).
360
361server_loop(RequestPort) :- 
362        repeat,
363        receive_request(RequestPort,(ReplyPort,Request)),
364        process_request(Request,Reply),
365        send_reply(ReplyPort,Reply),
366        fail.
367
368receive_request(RequestPort,(ReplyPort,Request)) :- 
369        repeat,
        mps_receive(RequestPort,(ReplyPort,Request)),!.
371
372send_reply(ReplyPort,Reply) :-
373        mps_send(ReplyPort,Reply).
374\end{verbatim}
375\caption{\label{fig:server_polling} {Request Polling Server}}
376\end{figure} 
377
378\begin{figure} 
379\begin{verbatim}
380client_run(Host,Key,Name,Request,Reply) :- 
381        client_init(Host,Key,Name,RequestPort,ReplyPort),
382        send_request(RequestPort,Request,ReplyPort),
383        receive_reply(ReplyPort,Reply),
384        ...,
385        ...,
        client_exit(Key,ReplyPort).
387        
388send_request(RequestPort,Request,ReplyPort) :- 
389      mps_send(RequestPort,(ReplyPort,Request)).
390
391receive_reply(ReplyPort,Reply) :- 
392      repeat,
393      mps_receive(ReplyPort,Reply),!.
394\end{verbatim}
395\caption{\label{fig:client_polling} {Reply Polling Client}}
396\end{figure}
397
398\subsection{Message Arrival Notification}
399
400Figure~\ref{fig:server_interrupt} shows a template of a simple request
401demanding server. The server blocks in the predicate {\it sleep/0} until
402an event occurs, e.g. the arrival of a message on an empty port. Such
403an event implies the invocation of the port's notification predicate.
404
405\begin{figure}[hbt]
406\begin{verbatim}
407server_run(Host,Key,Name,Signature) :- 
408        server_init(Host,Key,Name,Signature,RequestPort),
409        server_loop(RequestPort).
410
411server_init(Host,Key,Name,Signature,RequestPort) :- 
412        mps_init(Host),
413        mps_port_allocate(RequestPort,notify/1),
414        mps_port_register(Key,Name,Signature,RequestPort).
415
416notify(RequestPort) :- 
        mps_receive(RequestPort,(ReplyPort,Request)),
418        process_request(Request,Reply),
419        send_reply(ReplyPort,Reply),!,
420        notify(RequestPort).
421notify(_) :- true. 
422
423server_loop(RequestPort) :- 
424        repeat,
425        sleep,
426        fail.
427\end{verbatim}
428\caption{\label{fig:server_interrupt} {Request Demanding Server}}
429\end{figure} 
430
A notification predicate should have received all the messages from a port 
before it returns. This ensures that further message arrivals will also
be notified. To avoid deadlocks, it is recommended to keep notification
predicates as simple as possible. A notification predicate should not take 
much time per message and should not wait for events that may or may not 
occur. In general, simple requests are serviced in the notification 
predicate itself, while complex requests are queued, e.g. in a local port, 
to be serviced at a later time. This keeps the server responsive to simple 
requests.
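
The sketch below illustrates this queueing idea; {\it simple\_request/1}
and {\it queue\_port/1} are hypothetical helpers, while the remaining
predicates are those used above.

\begin{verbatim}
% Answer simple requests immediately; forward complex ones to a local
% queue port, to be drained later outside the notification predicate.
notify(RequestPort) :-
        mps_receive(RequestPort, (ReplyPort, Request)),
        ( simple_request(Request) ->          % hypothetical test
            process_request(Request, Reply),
            mps_send(ReplyPort, Reply)
        ;
            queue_port(QueuePort),            % hypothetical local queue port
            mps_send(QueuePort, (ReplyPort, Request))
        ),
        !,
        notify(RequestPort).
notify(_).
\end{verbatim}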
440
A very simple but interesting server is the echo server, whose 
straightforward {\it process\_request} predicate is given in 
figure~\ref{fig:server_echo}. It is interesting because it can be used to 
measure the latency of your computer network. This requires a client that 
reads the clock and sends the clock value in a request to an echo server 
resident on another host. The same clock value comes back to the client in 
the server's reply, i.e. the echo; reading the clock again and a simple 
subtraction then yield the message turnaround time. A sketch of such a 
client is given below.
449
450\begin{figure}[hbt]
451\begin{verbatim}
452process_request(Request,Request).
453\end{verbatim}
454\caption{\label{fig:server_echo} {Echo Server}}
455\end{figure} 
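
For instance, a latency measuring client could be sketched as follows on
top of the polling client of figure~\ref{fig:client_polling}; it assumes
that {\it statistics(session\_time, T)} returns the elapsed wall-clock time
in seconds in your \eclipse release.

\begin{verbatim}
% Send the current clock value to the echo server and measure the
% round trip.  send_request/3 and receive_reply/2 are those of the
% polling client.
measure_latency(RequestPort, ReplyPort, Seconds) :-
        statistics(session_time, T0),    % assumed wall-clock primitive
        send_request(RequestPort, T0, ReplyPort),
        receive_reply(ReplyPort, _Echo),
        statistics(session_time, T1),
        Seconds is T1 - T0.
\end{verbatim}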
456
457\subsection{Advanced Applications}
458
The examples presented so far use only a single port per process and
make a clear distinction between servers and clients. Processes may,
however, have multiple ports and be both server and client at the same
time. Every port may have its own dedicated notification predicate, but a
notification predicate may also be associated with several ports, because
its parameter specifies the port from which it should receive messages.
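
As a minimal sketch (the predicate names are only illustrative), a single
{\it notify/1} predicate can serve several ports, since its argument tells
it which port to drain:

\begin{verbatim}
% One notification predicate for several ports; the argument identifies
% the port on which a message has arrived.
notify(Port) :-
        mps_receive(Port, Message),
        handle_message(Port, Message),    % hypothetical per-port dispatch
        !,
        notify(Port).
notify(_).

setup_ports(ControlPort, DataPort) :-
        mps_port_allocate(ControlPort, notify/1),
        mps_port_allocate(DataPort, notify/1).
\end{verbatim}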
465
466
467%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
468
469\section{Future Work}
470\label{sec:future}
471
The message passing system presented in this note provides a basis
for experimenting with distributed \eclipse applications. 
Figure~\ref{fig:architecture} shows that it is a closed environment,
i.e. only \eclipse processes can communicate with other \eclipse
processes. A logical step is to provide an interface to the message
passing system from C. Alternatively, port adapters can be developed,
e.g. a socket adapter.
479
480The implementation of the \eclipse message passing system relies
481on some features of the UNIX mmap() primitive that are not supported
482by Linux. Until this dependency has been removed \eclipse message
483passing is not supported on Linux platforms.
484
The \eclipse message passing system is based on TCP/IP stream communication,
which puts some restrictions on its scalability. Distributed applications
requiring over a hundred computers are regarded as beyond the scope of 
the current version of the \eclipse message passing system.
489
We regard acquiring experience with building distributed \eclipse
applications as the most important issue for the near future. This will
make clear what is actually required beyond the small and simple set of 
message passing predicates that we have made available to you today.
494
495
496%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
497
498\section*{Acknowledgments}
Micha Meier stimulated me to make the functionality of the
message passing system underlying the parallel \eclipse system
available to the \eclipse application developer. Micha implemented
the message arrival notification and completed the integration 
with the \eclipse machinery.
504
505\bibliography{outline}
506
507\end{document}
508
509