Futures and promises

In computer science, future, promise, and delay refer to constructs used for synchronization in some concurrent programming languages. They describe an object that acts as a proxy for a result that is initially not known, usually because the computation of its value has not yet completed.

The term "promise" was proposed in 1976 by Daniel P. Friedman and David Wise,[1] and Peter Hibbard called the same concept "eventual".[2] The somewhat similar concept of a "future" was introduced in 1977 in a paper by Henry Baker and Carl Hewitt.[3]

The terms "future", "promise", and "delay" are often used interchangeably, although some differences in usage between "future" and "promise" are discussed below. Setting the value of a future is also called "resolving", "fulfilling", or "binding" it.

Implicit vs explicit

Use of futures may be implicit (any use of the future automatically obtains its value, as if it were an ordinary reference) or explicit (the user must call a function to obtain the value, such as the get method of java.util.concurrent.Future in Java). Obtaining the value of an explicit future can be called "stinging" or "forcing". Explicit futures may be implemented as a library, whereas implicit futures require language support.
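For illustration, the same explicit style is available in Python's standard library: the caller must force the future by calling result(), much as Java code calls get() on a java.util.concurrent.Future. (The helper slow_square is illustrative.)

```python
# Explicit futures with Python's concurrent.futures: submit() returns
# immediately with a Future, and the caller must explicitly "force" it
# by calling result().
from concurrent.futures import ThreadPoolExecutor

def slow_square(n):
    return n * n

with ThreadPoolExecutor(max_workers=1) as pool:
    fut = pool.submit(slow_square, 7)   # returns at once with a Future
    value = fut.result()                # explicit forcing; blocks until done

print(value)  # 49
```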

The original Baker and Hewitt paper described implicit futures, which are naturally supported in the Actor model of computation and pure object-oriented programming languages like Smalltalk. The Friedman and Wise paper described only explicit futures, probably reflecting the difficulty of efficiently implementing implicit futures on stock hardware. The difficulty is that stock hardware does not deal with futures for primitive data types like integers. For example, an add instruction does not know how to deal with 3 + future factorial(100000). In pure object or Actor languages this problem can be solved by sending future factorial(100000) the message +[3], which asks the future to add 3 to itself and return the result. Note that the message passing approach works regardless of when factorial(100000) finishes computation and that no stinging/forcing is required.
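The message-passing approach can be sketched in Python with a proxy object that receives the + message and forces the underlying future only when an operation actually needs its value; the class name ImplicitFuture is illustrative, not a standard API.

```python
# A sketch of how a pure object language can make futures implicit: the
# future is a proxy that handles messages (here, __add__/__radd__) and
# forces itself only when an operation needs the value.
from concurrent.futures import ThreadPoolExecutor

class ImplicitFuture:
    def __init__(self, fut):
        self._fut = fut
    def __add__(self, other):          # the "+[3]" message from the text
        return self._fut.result() + other
    def __radd__(self, other):         # supports 3 + future as well
        return other + self._fut.result()

with ThreadPoolExecutor(max_workers=1) as pool:
    f = ImplicitFuture(pool.submit(lambda: sum(range(1000))))
    total = 3 + f                      # no explicit forcing at the call site

print(total)  # 499503
```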

Promise pipelining

The use of futures can dramatically reduce latency in distributed systems. For instance, futures enable promise pipelining,[4][5] as implemented in the E and Joule programming languages, which was also called call-stream in the Argus programming language.

Consider an expression involving conventional remote procedure calls, such as:

 t3 := ( x.a() ).c( y.b() )

which could be expanded to

 t1 := x.a();
 t2 := y.b();
 t3 := t1.c(t2);

Each statement requires a message to be sent and a reply received before the next statement can proceed. Suppose, for example, that x, y, t1, and t2 are all located on the same remote machine. In this case, two complete network round-trips to that machine must take place before the third statement can begin to execute. The third statement will then cause yet another round-trip to the same remote machine.

Using futures, the above expression could be written

 t3 := (x <- a()) <- c(y <- b())

which could be expanded to

 t1 := x <- a();
 t2 := y <- b();
 t3 := t1 <- c(t2);

The syntax used here is that of the E programming language, where x <- a() means to send the message a() asynchronously to x. All three variables are immediately assigned futures for their results, and execution proceeds to subsequent statements. Later attempts to resolve the value of t3 may cause a delay; however, pipelining can reduce the number of round-trips required. If, as in the previous example, x, y, t1, and t2 are all located on the same remote machine, a pipelined implementation can compute t3 with one round-trip instead of three: because all three messages are destined for objects on the same remote machine, only one request need be sent and only one response need be received containing the result. Note also that the send t1 <- c(t2) would not block even if t1 and t2 were on different machines from each other, or from x or y.
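Promise pipelining can be sketched with a toy in-process model in which the client queues message descriptors, some of whose targets and arguments are promises for the results of earlier messages, and ships the whole batch in a single notional round-trip; all names here are illustrative, not a real distributed implementation.

```python
# A toy sketch of promise pipelining: instead of one round-trip per call,
# the client queues messages that may refer to the promised results of
# earlier messages, then flushes the whole batch at once; the "server"
# side resolves the promises locally.
class Promise:
    def __init__(self, index):
        self.index = index          # position of the result in the batch

class PipelineClient:
    def __init__(self):
        self.queue = []             # (target, method, args) descriptors
    def send(self, target, method, *args):
        self.queue.append((target, method, args))
        return Promise(len(self.queue) - 1)
    def flush(self):
        # One notional round-trip: execute the batch, resolving promises
        # against results already computed earlier in the same batch.
        results = []
        for target, method, args in self.queue:
            if isinstance(target, Promise):
                target = results[target.index]
            resolved = [results[a.index] if isinstance(a, Promise) else a
                        for a in args]
            results.append(getattr(target, method)(*resolved))
        return results

# x.a() returns an object whose c() combines with y.b(), as in the text.
class X:
    def a(self): return Combiner()
class Combiner:
    def c(self, v): return "c(" + v + ")"
class Y:
    def b(self): return "b"

client = PipelineClient()
t1 = client.send(X(), "a")
t2 = client.send(Y(), "b")
t3 = client.send(t1, "c", t2)       # targets/args are unresolved promises
batch = client.flush()
print(batch[t3.index])  # c(b)
```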

Promise pipelining should be distinguished from parallel asynchronous message passing. In a system supporting parallel message passing but not pipelining, the message sends x <- a() and y <- b() in the above example could proceed in parallel, but the send of t1 <- c(t2) would have to wait until both t1 and t2 had been received, even when x, y, t1, and t2 are on the same remote machine. The relative latency advantage of pipelining becomes even greater in more complicated situations involving many messages.

Promise pipelining also should not be confused with pipelined message processing in Actor systems, where it is possible for an actor to specify and begin executing a behaviour for the next message before having completed processing of the current message.

Read-only views

In some programming languages such as Oz, E, and AmbientTalk, it is possible to obtain a "read-only view" of a future, which allows reading its value when resolved, but does not permit resolving it:

  • In Oz, the !! operator is used to obtain a read-only view.
  • In E and AmbientTalk, a future is represented by a pair of values called a "promise/resolver pair". The promise represents the read-only view, and the resolver is required in order to set the future's value.
  • In C++0x a std::future provides a read-only view. The value is set directly by using a std::promise, or set to the result of a function call using std::packaged_task or std::async.
  • In the Dojo Toolkit's Deferred API as of version 1.5, a "consumer-only promise object" represents a read-only view[6].
  • In Alice ML, futures only provide a "read-only view", whereas a promise contains both a future and the ability to resolve the future.[7][8]

Support for read-only views is consistent with the Principle of Least Authority, since it allows the authority to set the value to be restricted to the subjects that need to set it. In a system that also supports pipelining, the sender of an asynchronous message (with result) receives the read-only promise for the result, and the target of the message receives the resolver.
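A promise/resolver pair in this style can be sketched on top of Python's concurrent.futures.Future, which by itself exposes both reading and resolving; the wrapper below withholds set_result so that holders of the view cannot resolve the future. The names ReadOnlyFuture and make_promise_pair are illustrative.

```python
# Splitting a Future into a read-only view and a resolver, in the spirit
# of E's promise/resolver pairs and the Principle of Least Authority.
from concurrent.futures import Future

class ReadOnlyFuture:
    """Forwards reading operations only; cannot resolve the future."""
    def __init__(self, fut):
        self._fut = fut
    def result(self, timeout=None):
        return self._fut.result(timeout)
    def done(self):
        return self._fut.done()

def make_promise_pair():
    fut = Future()
    resolver = fut.set_result        # only the holder of this can resolve
    return ReadOnlyFuture(fut), resolver

view, resolve = make_promise_pair()
resolve(42)
print(view.result())  # 42
```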

Thread-specific futures

Some languages, such as Alice ML, define futures that are associated with a specific thread that computes the future's value[8]. This computation may be started either eagerly when the future is created, or lazily when its value is first needed. A lazy future is similar to a thunk (in the sense of a delayed computation).
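The eager/lazy distinction can be sketched as follows, assuming an illustrative ThreadFuture class: the eager variant starts its thread at creation, while the lazy variant starts it only when the value is first demanded.

```python
# Thread-specific futures: each future is associated with the thread
# that computes its value. Eager futures start that thread at creation;
# lazy futures (thunks) start it on first demand.
import threading

class ThreadFuture:
    def __init__(self, thunk, lazy=False):
        self._thunk = thunk
        self._value = None
        self._done = threading.Event()
        self._thread = threading.Thread(target=self._run)
        self._started = False
        if not lazy:
            self._start()            # eager: begin computing immediately
    def _start(self):
        if not self._started:
            self._started = True
            self._thread.start()
    def _run(self):
        self._value = self._thunk()
        self._done.set()
    def result(self):
        self._start()                # lazy futures begin computing here
        self._done.wait()
        return self._value

eager = ThreadFuture(lambda: 2 ** 10)            # computing already
lazy = ThreadFuture(lambda: 2 ** 10, lazy=True)  # a thunk, not yet running
print(eager.result(), lazy.result())  # 1024 1024
```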

Alice ML also supports futures that can be resolved by any thread, and calls these "promises"[7]. Note that this usage of "promise" is different from its usage in E as described above: an Alice promise is not a read-only view, and Alice also does not support pipelining for promises themselves. Instead, pipelining naturally happens for futures (including ones associated with promises).

Blocking vs non-blocking semantics

If the value of a future is accessed asynchronously, for example by sending a message to it, or by explicitly waiting for it using a construct such as when in E, then there is no difficulty in delaying until the future is resolved before the message can be received or the wait completes. This is the only case to be considered in purely asynchronous systems such as pure Actor languages.

However, in some systems it may also be possible to attempt to "immediately" or "synchronously" access a future's value. Then there is a design choice to be made:

  • the access could block the current thread or process until the future is resolved. This is the semantics of dataflow variables in the Oz programming language.
  • the attempted synchronous access could always signal an error, for example throwing an exception. This is the semantics of remote promises in E[9].
  • potentially, the access could succeed if the future is already resolved, but signal an error if it is not. This would have the disadvantage of introducing nondeterminism and the potential for race conditions, and does not appear to be a common design choice.
  • in C++0x, a thread that needs the value of a future can block until it is available by calling the wait() or get() member functions. You can also specify a timeout on the wait using the wait_for() or wait_until() member functions to avoid indefinite blocking. If the future arose from a call to std::async then a blocking wait (without a timeout) may cause synchronous invocation of the function to compute the result on the waiting thread.
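In Python's concurrent.futures, for example, the analogous timed wait is a timeout argument to result(), which raises TimeoutError rather than blocking indefinitely:

```python
# Blocking access with a timeout, mirroring C++'s wait_for: result()
# raises TimeoutError if the value is not yet available within the
# given time, avoiding an indefinite block.
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

with ThreadPoolExecutor(max_workers=1) as pool:
    fut = pool.submit(lambda: (time.sleep(0.5), "done")[1])
    try:
        fut.result(timeout=0.05)     # too soon: the computation is running
        timed_out = False
    except TimeoutError:
        timed_out = True
    value = fut.result()             # blocking wait without a timeout

print(timed_out, value)  # True done
```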

Related constructs

An I-var (as in the Id programming language) is a future with blocking semantics as defined above. An I-structure is a data structure containing I-vars. A related synchronization construct that can be set multiple times with different values is called an M-var. M-vars support atomic operations to "take" or "put" the current value, where taking the value also sets the M-var back to its initial "empty" state[10].
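An M-var of this kind can be sketched in Python with a condition variable; this is a minimal illustration of the take/put semantics, not the Haskell implementation:

```python
# A minimal M-var sketch (after Haskell's MVar): take removes the value
# and returns the M-var to its empty state; put blocks while the M-var
# is full, take blocks while it is empty.
import threading

class MVar:
    def __init__(self):
        self._value = None
        self._full = False
        self._cond = threading.Condition()
    def put(self, value):
        with self._cond:
            while self._full:
                self._cond.wait()
            self._value, self._full = value, True
            self._cond.notify_all()
    def take(self):
        with self._cond:
            while not self._full:
                self._cond.wait()
            value, self._value, self._full = self._value, None, False
            self._cond.notify_all()
            return value

box = MVar()
box.put(1)
first = box.take()      # empties the box
box.put(2)              # can be set again with a different value
second = box.take()
print(first, second)    # 1 2
```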

A concurrent logic variable is similar to a future, but is updated by unification, in the same way as logic variables in logic programming. Thus it can be bound more than once to unifiable values (but cannot be set back to an empty or unresolved state). The dataflow variables of Oz act as concurrent logic variables, and also have blocking semantics as mentioned above.

A concurrent constraint variable is a generalization of concurrent logic variables to support constraint logic programming: the constraint may be narrowed multiple times, indicating smaller sets of possible values. Typically there is a way to specify a thunk that should be run whenever the constraint is narrowed further; this is necessary to support constraint propagation.

Relations between the expressiveness of different forms of future

Eager thread-specific futures can be straightforwardly implemented in terms of non-thread-specific futures, by creating a thread to calculate the value at the same time as creating the future. In this case it is desirable to return a read-only view to the client, so that only the newly created thread is able to resolve this future.
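This construction can be sketched directly: spawn a thread at creation time that resolves an otherwise plain future. Here Python's concurrent.futures.Future stands in for the non-thread-specific future; returning a read-only view to the client, as the text recommends, is elided for brevity.

```python
# An eager thread-specific future built from a non-thread-specific
# future: create the future and a thread to resolve it at the same time.
import threading
from concurrent.futures import Future

def eager_future(thunk):
    fut = Future()
    def run():
        try:
            fut.set_result(thunk())
        except Exception as e:        # propagate failures through the future
            fut.set_exception(e)
    threading.Thread(target=run).start()
    return fut                        # ideally, a read-only view of fut

f = eager_future(lambda: 6 * 7)
print(f.result())  # 42
```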

To implement implicit lazy thread-specific futures (as provided by Alice ML, for example) in terms of non-thread-specific futures requires a mechanism to determine when the future's value is first needed (for example, the WaitNeeded construct in Oz[11]). If all values are objects, then the ability to implement transparent forwarding objects is sufficient, since the first message sent to the forwarder indicates that the future's value is needed.

Non-thread-specific futures can be implemented in terms of thread-specific futures, assuming that the system supports message passing, by having the resolving thread send a message to the future's own thread. However, this could be argued to be unnecessary complexity: in programming languages based on threads, the most expressive approach appears to be to provide a combination of non-thread-specific futures, read-only views, and either a 'WaitNeeded' construct or support for transparent forwarding.

Relation to lazy evaluation

Lazy futures, where the computation of the future's value starts when the value is first needed, are closely related to lazy evaluation. However, the term lazy evaluation is most often used to refer to an evaluation strategy for all computation in a language, whereas lazy futures represent specific values that are computed lazily, even in a language where computation is normally strict or eager. In C++0x such lazy futures can be created by passing the std::launch::sync launch policy to std::async, along with the function to compute the value.
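A lazy future reduced to its essentials is a memoized thunk: the computation runs at most once, on first demand. A minimal sketch (LazyFuture is an illustrative name):

```python
# A lazy future as a memoized thunk: the computation is deferred until
# the value is first demanded, then cached for later accesses.
class LazyFuture:
    def __init__(self, thunk):
        self._thunk = thunk
        self._evaluated = False
        self._value = None
    def result(self):
        if not self._evaluated:       # first demand triggers computation
            self._value = self._thunk()
            self._evaluated = True
        return self._value

calls = []
lf = LazyFuture(lambda: calls.append("ran") or 99)
v1 = lf.result()
v2 = lf.result()                      # cached; the thunk ran only once
print(v1, v2, len(calls))  # 99 99 1
```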

Semantics of futures in the Actor model

In the Actor model, an expression of the form future <Expression> is defined by how it responds to an Eval message with environment E and customer C as follows: The future expression responds to the Eval message by sending the customer C a newly created actor F (the proxy for the response of evaluating <Expression>) as a return value concurrently with sending <Expression> an Eval message with environment E and customer F. The default behavior of F is as follows:

  • When F receives a request R, it checks whether it has already received a response (either a return value or a thrown exception) from evaluating <Expression>, and proceeds as follows:
      1. If it already has a response V, then
          • if V is a return value, the request R is sent to V;
          • if V is an exception, it is thrown to the customer of the request R.
      2. If it does not yet have a response, then R is stored in the queue of requests inside F.
  • When F receives the response V from evaluating <Expression>, V is stored in F, and
      • if V is a return value, all of the queued requests are sent to V;
      • if V is an exception, it is thrown to the customer of each queued request.
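The queuing behaviour of F can be sketched synchronously in Python; FutureActor is an illustrative stand-in in which "messages" are method names and customers are callbacks (exception forwarding is omitted for brevity):

```python
# A sketch of the future actor F described above: requests that arrive
# before the response are queued; once the response arrives, the queued
# requests are forwarded to the value, and later requests are handled
# immediately.
class FutureActor:
    def __init__(self):
        self._response = None
        self._resolved = False
        self._queue = []              # requests received before resolution
    def request(self, method, callback):
        if self._resolved:
            callback(getattr(self._response, method)())
        else:
            self._queue.append((method, callback))
    def resolve(self, value):
        self._response, self._resolved = value, True
        for method, callback in self._queue:
            callback(getattr(value, method)())

out = []
f = FutureActor()
f.request("upper", out.append)        # queued: no response yet
f.resolve("hello")                    # queued request now forwarded
f.request("lower", out.append)        # resolved: handled immediately
print(out)  # ['HELLO', 'hello']
```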

However, some futures can deal with requests in special ways to provide greater parallelism. For example, the expression 1 + future factorial(n) can create a new future that will behave like the number 1 + factorial(n). This trick does not always work. For example, the following conditional expression:

 if m > future factorial(n) then print("bigger") else print("smaller")

suspends until the future for factorial(n) has responded to the request asking if m is greater than itself.

History

The future and/or promise constructs were first implemented in programming languages such as MultiLisp and Act 1. The use of logic variables for communication in concurrent logic programming languages was quite similar to futures. These started with "Prolog with Freeze" and "IC Prolog", and became a true concurrency primitive with Relational Language, Concurrent Prolog, Guarded Horn Clauses (GHC), Parlog, Vulcan, Janus, Mozart/Oz, Flow Java, and Alice ML. The single-assignment "I-var" from dataflow programming languages, originating in Id and included in Reppy's "Concurrent ML", is much like the concurrent logic variable.

The promise pipelining technique (using futures to overcome latency) was invented by Barbara Liskov and Liuba Shrira in 1988[12], and independently by Mark S. Miller, Dean Tribble and Rob Jellinghaus in the context of Project Xanadu circa 1989[13].

The term "promise" was coined by Liskov and Shrira, although they referred to the pipelining mechanism by the name "call-stream", which is now rarely used.

Both the design described in Liskov and Shrira's paper, and the implementation of promise pipelining in Xanadu, had the limitation that promise values were not first-class: an argument to, or the value returned by, a call or send could not directly be a promise (so the example of promise pipelining given earlier, which uses a promise for the result of one send as an argument to another, would not have been directly expressible in the call-stream design or in the Xanadu implementation). It appears that promises and call-streams were never implemented in any public release of Argus[14] (the programming language used in the Liskov and Shrira paper); Argus development stopped around 1988[15]. The Xanadu implementation of promise pipelining only became publicly available with the release of the source code for Udanax Gold[16] in 1999, and was never explained in any published document[17]. The later implementations in Joule and E support fully first-class promises and resolvers.

Several early Actor languages, including the Act series of languages[18][19], supported both parallel message passing and pipelined message processing, but not promise pipelining. (Although it is technically possible to implement the last of these features in terms of the first two, there is no evidence that the Act languages did so.)

List of implementations

Languages supporting futures, promises, concurrent logic variables, dataflow variables, or I-vars include:

Languages also supporting promise pipelining include:

Library-based implementations of futures:

References

  1. ^ Friedman, Daniel; Wise, David (1976). "The Impact of Applicative Programming on Multiprocessing". International Conference on Parallel Processing, pp. 263–272. 
  2. ^ Hibbard, Peter (1976). "Parallel Processing Facilities". New Directions in Algorithmic Languages, (ed.) Stephen A. Schuman, IRIA. 
  3. ^ Henry Baker and Carl Hewitt (August 1977). "The Incremental Garbage Collection of Processes". Proceedings of the Symposium on Artificial Intelligence Programming Languages, SIGPLAN Notices 12. 
  4. ^ Promise Pipelining at erights.org
  5. ^ Promise Pipelining on the C2 wiki
  6. ^ Robust promises with Dojo deferred, Site Pen, 2010-5-3, http://www.sitepen.com/blog/2010/05/03/robust-promises-with-dojo-deferred-1-5/ .
  7. ^ a b "Promise", Alice Manual, DE: Uni-SB, http://www.ps.uni-sb.de/alice/manual/library/promise.html .
  8. ^ a b "Future", Alice manual, DE: Uni-SB, http://www.ps.uni-sb.de/alice/manual/library/future.html .
  9. ^ Promise, E rights, http://wiki.erights.org/wiki/Promise .
  10. ^ Control Concurrent MVar, Haskell, http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Concurrent-MVar.html .
  11. ^ WaitNeeded, Mozart Oz, http://www.mozart-oz.org/home/doc/base/node13.html .
  12. ^ Barbara Liskov and Liuba Shrira (1988). Promises: Linguistic Support for Efficient Asynchronous Procedure Calls in Distributed Systems. Proceedings of the SIGPLAN '88 Conference on Programming Language Design and Implementation; Atlanta, Georgia, United States, pp. 260–267. ISBN 0-89791-269-1, published by ACM. Also published in ACM SIGPLAN Notices, Volume 23, Issue 7, July 1988. doi:10.1145/53990.54016. 
  13. ^ Promise, Sunless Sea, archived from the original on October 23, 2007, http://web.archive.org/web/20071023111712/http://www.sunless-sea.net/Transcripts/promise.html .
  14. ^ Argus, MIT, http://www.pmg.csail.mit.edu/Argus.html .
  15. ^ Liskov, Barbara, Distributed computing and Argus, Oral history, IEEE GHN, http://www.ieeeghn.org/wiki/index.php/Oral-History:Barbara_Liskov#Distributed_Computing_and_Argus .
  16. ^ Gold, Udanax, http://www.udanax.com/gold/ .
  17. ^ Pipeline, E rights, http://www.erights.org/elib/distrib/pipeline.html .
  18. ^ Henry Lieberman (June 1981). A Preview of Act 1. MIT AI memo 625. 
  19. ^ Henry Lieberman (June 1981). Thinking About Lots of Things at Once without Getting Confused: Parallelism in Act 1. MIT AI memo 626. 
  20. ^ Rich Hickey (2009). "changes.txt at 1.1.x from richhickey's clojure". http://github.com/richhickey/clojure/blob/1.1.x/changes.txt#L130. 
  21. ^ Steve Dekorte (2005). "Io, The Programming Language". http://iolanguage.com/docs/manual. 
  22. ^ Seif Haridi; Nils Franzen. "Tutorial of Oz". Mozart Global User Library. http://www.mozart-oz.org/documentation/tutorial/node1.html. Retrieved 12 April 2011. 
  23. ^ Kenjiro Taura, Satoshi Matsuoka, and Akinori Yonezawa (1994). "ABCL/f: A Future-Based Polymorphic Typed Concurrent Object-Oriented Language – Its Design and Implementation". In Proceedings of the DIMACS Workshop on Specification of Parallel Algorithms, number 18 in DIMACS Series in Discrete Mathematics and Theoretical Computer Science. American Mathematical Society. pp. 275–292. CiteSeerX 10.1.1.23.1161. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.23.1161. 

Wikimedia Foundation. 2010.
