The Machine: A new kind of computer
I swear I worked on one of these once ... "Pacific", the IBM S/38, somewhat a descendant of IBM's abortive attempt to re-create computer architecture post-360, "Future Systems".
There is a part of me that wants to dust off the old resume and beg to climb on board such an ostentatious endeavor ... "big projects", "big architecture", "millions of KLOC" and all that. I'm pushing 60, and tilting at such windmills is a younger man's sport -- and yet.
There are not a lot of anything approaching "details", but it certainly sounds like the core of this thing is some version of "Single Level Store" (SLS), one of the leading "features" (and an Achilles heel) of the S/38, along with the concept of "A High Level Machine Architecture". Very useful as an abstraction -- see the Java Virtual Machine and many others since -- but the devil is in the translation of virtual to real. Real hardware only runs the real -- and there is the rub.
An Engadget article gives some more "detail" that makes it at least sound like this time the distance between the hardware and the SLS is tiny ... "memristors" make the memory actually single level. No separate computer memory and backing store -- although one assumes there are still faster caches and register stacks in the architecture; one would have to see it to be sure.
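For flavor: the closest thing an ordinary Unix programmer can touch today is mmap(2), where a file simply IS a range of memory and stores through a pointer eventually land on the media. A toy sketch of my own making -- nothing HP has published -- of what "single level" feels like to code against:

    /* A rough approximation of the single-level-store idea using mmap(2).
     * The file IS the memory; an ordinary store updates the "permanent"
     * copy. Illustrative only -- The Machine's real interface is unknown. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("counter.dat", O_RDWR | O_CREAT, 0644);
        if (fd < 0) { perror("open"); return 1; }
        if (ftruncate(fd, sizeof(long)) < 0) { perror("ftruncate"); return 1; }

        /* Map the file; from here on it is just memory. */
        long *count = mmap(NULL, sizeof(long), PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
        if (count == MAP_FAILED) { perror("mmap"); return 1; }

        (*count)++;                          /* an ordinary store ...      */
        msync(count, sizeof(long), MS_SYNC); /* ... explicitly made durable */
        printf("run number %ld\n", *count);

        munmap(count, sizeof(long));
        close(fd);
        return 0;
    }

On memristor hardware, presumably, even the msync() step disappears -- every store would already be on the "media".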
But how "real" is the abstraction presented to the OS programmer, the compiler writer (really the back-end optimizer writer), and finally the application programmer? Does it "show through" to the application layer? If not, then the issue becomes how well all this works with unmodified, plain old Unix-programming-model code of various flavors. If it does, then there is maybe more hope for a true revolution, but it has to be BOTH very good AND very fast ... as in VERY easy to program and runs like a banshee, or else not even the middleware guys (web servers, databases, etc.) will bother to create new versions of their products for it.
If anybody can write code that is actually not aware of "instantiation" -- the reading of data from disks in file or DB formats into "records / buffers / data structures" -- that can potentially be a step forward, though as many theorists have discovered, it is also very much a two-edged sword. Sever the human-programmed understanding of "temporary" (in memory) versus "permanent" (written to media like disks) and all sorts of problems ensue. How does the underlying OS / programming language decide what is "garbage", i.e. working storage used by the program during execution but not of use after it is done? In some cases this is obvious, but if the programming model is changed to not make the differentiation clear, it gets very complex.
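To make that concrete, a toy sketch. The pm_alloc / pm_set_root names are entirely my invention (stubbed over malloc here so it compiles), loosely in the spirit of a single-level-store allocator; the point is the question in the final comment:

    #include <stdlib.h>

    /* Hypothetical single-level-store allocator -- pm_alloc and
     * pm_set_root are my inventions for illustration, stubbed over
     * malloc so this sketch compiles. */
    static void *pm_alloc(size_t n) { return malloc(n); }
    static void *pm_root = NULL;
    static void  pm_set_root(void *p) { pm_root = p; }

    struct node { long value; struct node *next; };

    int main(void)
    {
        struct node *head = NULL;
        for (long i = 0; i < 3; i++) {
            struct node *n = pm_alloc(sizeof *n); /* persistent by default? */
            n->value = i;
            n->next  = head;
            head     = n;
        }
        pm_set_root(head); /* the list is now reachable, hence "permanent" */

        /* Scratch storage the programmer MEANT to be temporary: */
        struct node *tmp = pm_alloc(sizeof *tmp);
        tmp->value = 99;

        /* If power fails here, the list hanging off pm_root survives.
         * What about tmp? Nothing points at it -- is it garbage the
         * system may reclaim, or a leak that lives forever in the
         * store? That is the two-edged sword. */
        return 0;
    }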
What does "permanent" mean? Is it ALL mirrored or otherwise protected? Huge amounts of real industrial programming (and resources like time / hardware / etc.) are devoted to "checkpointing" where the program(s) were, so that the ACID (Atomicity, Consistency, Isolation, Durability) properties can be maintained through loss of power, processor failures, bus / communication network failures, disk failures, etc. There is lots of code to make that happen. Does it change? If so, even if it changes for the MUCH better, the conversion will be large.
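The classic shape of that code is write-ahead logging: record the intent, force it to media, apply, force again. A minimal sketch of my own (file names and all are made up, and real systems add record IDs, checksums, and recovery replay):

    #include <fcntl.h>
    #include <unistd.h>

    /* Write buf to path and force it to media before returning. */
    static int durable_write(const char *path, const void *buf, size_t len)
    {
        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) return -1;
        if (write(fd, buf, len) != (ssize_t)len || fsync(fd) != 0) {
            close(fd);
            return -1;
        }
        return close(fd);
    }

    int update_record(long new_value)
    {
        /* 1. Log the intended change, force it to media. */
        if (durable_write("journal.log", &new_value, sizeof new_value) != 0)
            return -1;
        /* 2. Apply the change to the real data, force it to media. */
        if (durable_write("record.dat", &new_value, sizeof new_value) != 0)
            return -1;
        /* 3. Retire the log entry; recovery replays it if non-empty. */
        return durable_write("journal.log", "", 0);
    }

    int main(void) { return update_record(42) == 0 ? 0 : 1; }

If on The Machine every store is already durable, does this whole dance shrink to something like an atomic pointer swap? Maybe -- but somebody still has to convert the mountains of existing code that does it the old way.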
I could likely wax on for a good deal longer. Clearly I'm curious and will be watching as this unfolds.