Talk:Software transactional memory

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by 193.77.126.73 (talk) at 21:52, 13 February 2007 (How are commitments made atomic?). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

I've received some negative feedback about this article from friends. I'm currently researching the area and will improve it in the future. Deco 21:50, 3 February 2006 (UTC)[reply]

What exactly is wrong? countryhacker 20:55, 4 February 2006 (UTC)[reply]

Sorry for not answering, didn't see this. In short it was missing a lot of information, and the English was also a bit messy. I've rewritten it. I hope the original author is not offended - there's plenty of room for additional expansion still. Deco 23:47, 8 February 2006 (UTC)[reply]

KVD, February 2006: The article first mentions that STM has a performance disadvantage on a small number of processors, and then it goes on to say "in addition to its performance benefits, ...". This is a contradiction, or at least confusing. Is the net effect of STM a performance gain or performance loss? Or is there a minimal degree of concurrency (e.g. a minimal number of processors) beyond which STM becomes interesting?

As far as I remember, the WORST case (when all but one transaction fails) is O(n), where n is the number of concurrent processes. This is definitely not "twice". Maybe it happens rarely in the real world, but it is worth mentioning anyway. countryhacker 09:24, 24 February 2006 (UTC)[reply]

It's a loss on small numbers of processors, but a gain on larger numbers. Sorry if the wording was unclear. You're right that the theoretical worst-case is considerably worse than the typical overhead observed in experiments. Deco 22:52, 19 June 2006 (UTC)[reply]

Implementation issues

Quoted from the article:

atomic {
    if (x != y)
        while (true) { }
}

Transaction A

atomic {
    x++;
    y++;
}

Transaction B

Provided x=y initially, neither transaction above alters this invariant, but it's possible transaction A will read x after transaction B updates it but read y before transaction B updates it, causing it to enter an infinite loop.


Hmmm - isn't it the point of transactions that the intermediate state "x == y+1" is NOT visible anywhere outside of transaction B? "Commit" is an atomic operation, isn't it??? --84.137.86.147 22:25, 4 March 2006 (UTC)[reply]

Sorry for the fuss. Now I understand that the situation can occur if the thread of transaction A first reads x, then is suspended (for any reason), then transaction B is run completely and finally A resumes and reads the value of y... --84.137.86.147 22:34, 4 March 2006 (UTC)[reply]

Right. Transaction A has read state made inconsistent by transaction B's commit, which prevents transaction A from committing successfully, but that doesn't mean it'll terminate. Deco 22:52, 19 June 2006 (UTC)[reply]
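The interleaving described above can be sketched step by step. This is a hypothetical illustration only; the read-set bookkeeping and variable names are assumptions, not taken from any particular STM implementation:

```python
# Sketch (assumed, illustrative) of the interleaving that lets
# transaction A observe the "impossible" state x != y under
# optimistic reads.

store = {"x": 0, "y": 0}   # shared memory, invariant x == y

# Step 1: transaction A optimistically reads x, recording it in a read set.
a_read_set = {"x": store["x"]}

# Step 2: A is suspended; transaction B runs to completion and commits.
store["x"] += 1
store["y"] += 1

# Step 3: A resumes and reads y, seeing B's updated value.
a_read_set["y"] = store["y"]

# A now holds x=0, y=1 -- a state that never existed atomically.
assert a_read_set["x"] != a_read_set["y"]

# At commit (or via periodic in-flight validation), A's read set is
# found stale, so A is doomed to abort -- but if A loops forever on the
# inconsistent values first, it never reaches that validation.
is_valid = all(store[var] == val for var, val in a_read_set.items())
assert not is_valid
```

The point is that A's doomed state preserves the ACI properties (A can never commit), but without in-flight validation or a timeout it can still fail to terminate.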

This isn't true for implementations that guarantee isolation (the I in ACID). E.g. if you use optimistic evaluation and store a transaction log, then upon committing this log you would A) ensure that no read variables have been updated by other transactions, and B) ensure that none of the written-to variables have been read by any ongoing transaction. This ensures that each transaction acts completely in isolation (conceptually). So if A reads x, then B runs completely, B would fail to commit because x has been read by the ongoing transaction A.
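A transaction-log scheme along these lines can be sketched with version counters. Note one assumption: the comment above describes eager detection (B aborts because A has read x), whereas this sketch uses the more common lazy variant where the reader validates its own read set at commit time; the class and method names are illustrative, not from any real STM library:

```python
# Hedged sketch of commit-time read-set validation. Each shared variable
# carries a version number; a transaction commits only if nothing it read
# has been updated since. (A real implementation would lock the variables
# to make validate-then-publish atomic across concurrent commits.)

class TVar:
    """A transactional variable: a value plus a version counter."""
    def __init__(self, value):
        self.value = value
        self.version = 0

class Transaction:
    def __init__(self):
        self.read_log = {}   # TVar -> version seen at first read
        self.write_log = {}  # TVar -> tentative new value

    def read(self, tvar):
        if tvar in self.write_log:          # read-your-own-writes
            return self.write_log[tvar]
        self.read_log.setdefault(tvar, tvar.version)
        return tvar.value

    def write(self, tvar, value):
        self.write_log[tvar] = value        # buffered, not yet visible

    def commit(self):
        # Validate: abort if any read variable changed since we read it.
        if any(tvar.version != seen for tvar, seen in self.read_log.items()):
            return False                    # abort; caller would retry
        for tvar, value in self.write_log.items():
            tvar.value = value
            tvar.version += 1
        return True

# B commits between A's reads, so A's commit attempt must fail.
x, y = TVar(0), TVar(0)
a, b = Transaction(), Transaction()
a.read(x)                      # A reads x=0 (version 0)
b.write(x, b.read(x) + 1)      # B runs completely...
b.write(y, b.read(y) + 1)
assert b.commit()              # ...and commits: x=1, y=1
a.read(y)                      # A now sees y=1, inconsistent with x=0
assert not a.commit()          # validation catches the stale read
```

Either variant preserves isolation at commit; they differ only in *when* the conflict is detected, which is exactly the nontermination question discussed above.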

That's true. Whether or not this leads to nontermination depends on the specific implementation of STM. Deco 00:47, 9 January 2007 (UTC)[reply]

Since we are talking about TRANSACTIONAL memory, I guess a correct implementation must implement at least ACI (that is, ACID without the D) semantics. In an ACI context, the consistency problem documented in the page is not an issue. Somebody should edit that section out. Jcea 11:58, 9 January 2007 (UTC)[reply]

To what section are you referring? There is no consistency problem with any implementation of STM, including the one I originally described - if the transaction has read inconsistent state it will abort if it completes. The issue is nontermination, which necessitates a timeout. Deco 18:52, 9 January 2007 (UTC)[reply]
I'm talking about the section showing partial transactions (partial update states) visible to other processes. That section talks about seeing "impossible" states because of that. Since we are talking about "transactional" memory, a correct implementation would be ACI (ACID without the D), and so other concurrent threads should not see partially updated states. A "real" transactional memory implementation should detect the inconsistency when reading "stale" data, at that moment, long before the commit. Jcea 15:31, 10 January 2007 (UTC)[reply]

I find this entire section wrong: we are talking about transactional memory. By the definition of a transaction, we have Atomicity and Isolation, so the risk of reading "stale" data is not an issue. I am deleting the entire section. Jcea 19:30, 26 January 2007 (UTC)[reply]

I have restored the section. The paper on which it is based specifically describes this scenario in very similar terms, and claims to implement a software transactional memory. If you contest this claim, then you need to take that up with the original authors. I've emphasized many times that the reading of stale data is possible only because reading is optimistic in this implementation, and no transaction having read such data ever commits. Not all STM implementations guarantee that stale data is never read, only that transactions reading it cannot succeed, which preserves the ACI properties. Deco 19:40, 26 January 2007 (UTC)[reply]

Implementations

What exactly qualifies an STM implementation as being "large-scale product-quality"? I know a number of people who are using Haskell/STM in a non-research (at least, not research on STM) context. In other news, Audrey-tan has incorporated STM into Perl 6. (Is this product-quality yet? Or large-scale enough?) Liyang 21:40, 19 June 2006 (UTC)[reply]

Honestly, I wasn't familiar with those implementations. The current claim might be misleading; feel free to edit it, but I personally wouldn't tell anyone that STM is ready for major software projects. Deco 22:52, 19 June 2006 (UTC)[reply]

How are commitments made atomic?

I don't have a comment, just a question that might be answered here... well, I guess I'd better ask it here rather than on the main page. When a transaction commits, how does it make its "commitment" atomic with respect to other threads NOT currently in a transaction? Or should all accesses to objects possibly involved in a transaction check first (that is, perform some kind of read barrier, at a performance penalty)? Tom Primožič 19:10, 13 February 2007 (UTC)

Good question, and worth adding to the article. This varies by implementation, but usually involves some kind of "quick swap" of the object reference to point to the new version of the object. If multiple objects are involved, it may (again, in some implementations) be possible for a transaction to read incompatible state partway through the commit, but any transaction that does so will itself be doomed to abort, and so its behaviour is irrelevant. For this reason a read barrier is not needed. Deco 21:40, 13 February 2007 (UTC)[reply]
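The "quick swap" idea can be sketched for the single-object case: the transaction builds the new version of the object privately, then publishes it with one compare-and-swap of the object reference, so non-transactional readers always see either the whole old version or the whole new one. This is an assumed illustration (real STMs use a hardware CAS instruction; here a lock stands in for it):

```python
# Minimal sketch (assumption, not a real STM) of an atomic commit done
# as a single swap of an object reference.

import threading

class Ref:
    """A cell holding an immutable snapshot, published via an atomic
    swap. The lock stands in for a hardware compare-and-swap."""
    def __init__(self, snapshot):
        self._snapshot = snapshot
        self._lock = threading.Lock()

    def load(self):
        return self._snapshot

    def compare_and_swap(self, expected, new):
        with self._lock:
            if self._snapshot is expected:
                self._snapshot = new
                return True
            return False

# A committing transaction: read the current snapshot, build the updated
# version off to the side, then publish it in one step.
ref = Ref((0, 0))                      # snapshot of (x, y), invariant x == y
old = ref.load()
new = (old[0] + 1, old[1] + 1)         # private copy; invariant preserved
assert ref.compare_and_swap(old, new)  # the atomic commit point

# Any reader dereferencing the ref sees a whole snapshot -- (0, 0) or
# (1, 1), never the intermediate (1, 0) -- with no read barrier needed.
x, y = ref.load()
assert x == y
```

As the comment above notes, this only covers a single object; multi-object commits need extra machinery (e.g. validation or doomed-transaction detection) because a reader can still catch the commit partway through across objects.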