Talk:Connection Machine


Image Caption

I am pretty sure the caption really should read "Computer History Museum in Mountain View, CA". 67.180.29.122 07:17, 11 October 2007 (UTC)


Where are they now?

What happened to Hillis, Handler, and their company? Are they still in business? Dan100 12:36, Mar 12, 2005 (UTC)

Danny Hillis went on to found Applied Minds, and I believe Sheryl Handler started a data-mining company called Ab Initio. Thinking Machines is no longer in business -- its software and hardware assets and patents were acquired by several companies. --Zippy 01:37, 31 August 2005 (UTC)

Performance?

Any benchmarks available?

There certainly are - many CM-2 and CM-5 machines made it into the top 500 supercomputers list based, I believe, on benchmark performance. I'm pretty sure there were published LINPACK benchmarks, and likely others. --Zippy 19:57, 17 January 2007 (UTC)
Erich and Jack started their Top-500 list after the last CM-5 was decommissioned. The difficulty lay in determining a fair comparison with a minimum amount of work. I seriously doubt a bit-serial version of LINPACK was ever written for any of the MPP architectures. 143.232.210.38 (talk) 23:43, 7 June 2010 (UTC)
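
For readers unfamiliar with the term: "bit-serial" means each CM-1 processor's ALU handled one bit per cycle, so multi-bit arithmetic had to be built up a bit at a time. Below is a minimal sketch of the idea in Python (my own illustration for this talk page, not actual CM code; the function name is made up):

 def bit_serial_add(a, b, width=32):
     """Add two integers one bit per 'cycle', LSB first, like a 1-bit ALU."""
     result, carry = 0, 0
     for i in range(width):  # one loop iteration stands in for one cycle
         abit = (a >> i) & 1
         bbit = (b >> i) & 1
         result |= (abit ^ bbit ^ carry) << i             # full-adder sum bit
         carry = (abit & bbit) | (carry & (abit ^ bbit))  # full-adder carry out
     return result
 
 assert bit_serial_add(12345, 67890) == 12345 + 67890

In this model a 32-bit add costs 32 cycles, and floating point costs far more, which is some context for the software floating-point complaints in the "Why it didn't sell" section below.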

Key Contributors?

On the Thinking Machines page there is a reference to notable contributors. Would such a reference be relevant here? Something like: Among the notable contributors were Stephen Wolfram and Richard Feynman.

“Besides Danny Hillis, other noted people who worked for or with the company included David Waltz, Guy L Steele, Jr., Karl Sims, Brewster Kahle, Bradley Kuszmaul, Charles E. Leiserson, Marvin Minsky, Carl Feynman, Cliff Lasser, Alex Vasilevsky, Doug Lenat, Stephen Wolfram, Eric Lander, Richard Feynman, Mirza Mehdi, and Jack Schwartz.”
I just looked at Danny Hillis's book, The Connection Machine, and there are a number of people mentioned in the acknowledgements beyond the above list. --Zippy 20:18, 2 March 2006 (UTC)

*Lisp major post-hardware product?

The main article says that *Lisp was the major product (left) for Thinking Machines once it stopped making hardware. I'd like to know more about this. I would have guessed that its other, more popular languages (C* and *Fortran, I think) would have had more users and more requests for support. Is this not correct? What was the history of *Lisp as a product after the last CM-5 rolled off the line? --Zippy 20:17, 2 March 2006 (UTC)

The statement that *Lisp was the major product left is wrong. *Lisp was no longer of major interest to Thinking Machines at the time the company folded. -- A *Lisp developer.

pronouncing * "star"

Maybe it is correct to say that "*lisp" is pronounced "star lisp", but I think it's a weird way to put it. Opinions? kzz* 18:22, 12 July 2006 (UTC)

It strikes me as an entirely natural way to put it.--Prosfilaes 15:17, 18 July 2006 (UTC)
Sounds natural to me. Palpalpalpal 12:00, 16 January 2007 (UTC)
"starlisp" is the correct pronunciation. That's how it was said at Thinking Machines in 1994-5. Somewhere I have a *Lisp reference manual that I believe gives this as the official pronunciation. --Zippy 19:28, 17 January 2007 (UTC)


Lights

"The CM-5 ... had a large panel of red blinking LEDs." -- Was there any functional reason to put actual blinkenlights on this thing, or was it just a design statement? -- Writtenonsand (talk) 03:25, 28 December 2007 (UTC)[reply]

It was entirely a design statement. The lights were latched to a memory location per processor or group of processors; however, most of the time that produced a pretty boring display (although you could see the dramatic effects of certain matrix operations). In practice, CM-2s in the demo areas were left running a program called "random-and-pleasing". The CM-5s actually implemented this through a hardware switch and microcode for the LED boards! Scolbath (talk) 20:35, 24 March 2008 (UTC)
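
To make "latched to a memory location" concrete, here is a toy sketch in Python (my own illustration; the names are hypothetical and this is nothing like TMC's real microcode): each processor's LED mirrors one bit of that processor's local memory, while the "random-and-pleasing" mode ignores memory entirely.

 import random
 
 PROCESSORS = 64  # one LED per processor in this toy model
 
 def leds_from_memory(memory_words, bit=0):
     """Latch each LED to one bit of the corresponding processor's memory."""
     return [(word >> bit) & 1 for word in memory_words]
 
 def leds_random_and_pleasing():
     """Ignore processor state entirely; just blink attractively."""
     return [random.randint(0, 1) for _ in range(PROCESSORS)]
 
 def render(leds, width=16):
     """Draw the panel as rows of on/off markers."""
     rows = [leds[i:i + width] for i in range(0, len(leds), width)]
     return "\n".join("".join("*" if b else "." for b in row) for row in rows)
 
 memory = [random.getrandbits(16) for _ in range(PROCESSORS)]
 print(render(leds_from_memory(memory)))    # the "boring" latched display
 print(render(leds_random_and_pleasing()))  # the demo-floor display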

Size

"The CM-1 had a length, width, and height of 1.5 metres. It was divided into 8 equally large cubic sections." -- This sounds like they found a magical way to fit eight 1.5-meter cubes inside a 1.5-meter cube. Hyperdimensional? Or should this be rewritten as "8 equal-sized cubic sections"? Sue D. Nymme (talk) 16:45, 13 May 2008 (UTC)[reply]

Well spotted. I've rewritten this to be slightly less awkward. Letdorf (talk) 09:20, 14 May 2008 (UTC).
Nothing wrong with alluding to hyperdimensional constructs! Feynman did use some ideas of hyperdimensionality when designing the (CM-1/CM-2) interconnection, before the fat tree architecture was developed by Leiserson. Feynman, well-versed in quantum mechanics, would get my vote for figuring out how to "hide" most of the computer in a wormhole. rhyre (talk) 18:51, 17 March 2010 (UTC)

Why it didn't sell.

Perhaps someone could contribute a paragraph on how the Connection Machine was utterly useless for any real problems that real customers needed solved? 24.6.157.14 (talk) 00:28, 29 May 2008 (UTC)

First and foremost, it was difficult to program for anything other than embarrassingly parallel test cases. Second, it could be argued that one site, LANL, ran one real problem (and that it was worth it); LANL bought two CMs. This is why a site list starts to be useful. Then you need to learn what application ran on each machine; that application might have been classified (and might still be classified). Third, the models didn't have instruction set compatibility. You must also understand that thinking through a user application typically ties up some important scientist, and this user cycle (think) time isn't taken lightly; in part this is why the CMs sited at PARC and UC Berkeley weren't used. Fourth, a lot of hype surrounded the machine, and only a minimum of science. A CM program for 2-D fluid flow around pipe cross-sections was written around the time the Cray-2 was delivered, on which a "3-D pipes into the middle of a stream" simulation was written (the CM-1 had insufficient memory for 3-D, which was embarrassing). Bad application timing; 3-D was more interesting. Fifth, the first languages were versions of Lisp and C. They were late with a Fortran compiler (having originally flaunted that, only to come back, tail between legs, very apologetic). Sixth, hardware floating point was only added later. Software floating point on the CM-1 was so slow (Danny didn't learn the lesson of the Goodyear MPP) that you could tell a programmer's perspective by whether they counted CPUs or FPUs. 143.232.210.38 (talk) 00:01, 8 June 2010 (UTC)


It’s hard to know where to begin with such a biased question by 24.6.157.14, and the apparently dubiously informed response below it by 143.232.210.38. I suppose the answer to the question depends on your metric of “utterly useless”. But let me start by saying that there appears to be no evidence whatsoever of extremely large computational problems that are not intrinsically parallel, or at least parallelizable.

The world is full of “embarrassingly parallel” phenomena, as some of the TMC people used to claim. Some examples of this are: weather prediction, materials simulation, protein folding, high-speed global market trading, neural simulations, fluid flow, seismic inversion, quantum electrodynamics, combinatorial chemistry, high-throughput genomics, rendering videos, code cracking, simulating internet traffic, airplane wing simulation, modeling your brain, modeling the air you’re breathing, modeling your country’s electrical grid feeding the power into your computer, modeling the plate tectonics of the continent where your building is located, modeling the stars in your galaxy.

As a purely technical side note, I personally ran multiple programs on CM-1, CM-2, and CM-5 machines, just by recompiling the code and running it. Of course one can always write code that is incompatible across different versions of computers, or in languages not available on all versions. Also, since it was possible, though wasteful, to run an independent copy of UNIX in each processor of a CM-5 (something I did a few times), the limitations of a CM-5 in that situation were, as a matter of logic, closely coupled with the limitations of UNIX. Additionally, I certainly wasn’t apologetic about TMC people implementing the parallel languages Paris, *Lisp, and C*, plus several experimental languages, before implementing CM Fortran, and no one I knew at TMC was apologetic, either. And if I had a tail, it certainly wouldn’t be between my legs about people not having CM Fortran before the CM-2 was developed.

Originally, one of the interesting concepts behind the Connection Machine was to start by picking a set of artificial intelligence problems to solve. Danny Hillis was, after all, one of Marvin Minsky’s graduate students. After picking the set of problems to solve, they and others at the MIT AI Lab tried to understand the kinds of operations desirable for solving those problems. Then the machine was designed to, among other things, implement those operations well.

This last step has a homologue in Lisp machines, which were designed to run Lisp efficiently and to provide an environment for Lisp programming. Biological brains also appear to have many architectural features which at least partially optimize specific computational functions; see, for example, the architecture of the occipital lobe or the architecture of the retina.

Since, for years, Thinking Machines Corporation had the rather hokey motto “We want to build a machine that will be proud of us,” and also because it was named “Thinking Machines,” and because it was a spin-off from the Artificial Intelligence Lab founded by gurus of artificial intelligence, it should be obvious that initially they were attempting to build machines to implement various artificial intelligence schemes.

Soon, however, many others wanted to try other kinds of problems on Connection Machines. And there is a pretty simple reason why: back then, many researchers and some industrialists understood that the kinds of computers around were not sufficient to solve their problems. Many attempted to solve their problems on Connection Machines, if only out of frustration with the limitations of the scalar or minimally parallel computers of the time. Occasionally, some found the answers they sought on parallel machines. Some even found whole new ways of understanding their problems.

Dow Jones put years of the Wall Street Journal on a Connection Machine as a full content searchable database. Aramco did modeling of oil reservoirs. Schlumberger did seismic inversion. Oregon State University did shallow water modeling. United Technologies did helicopter wake modeling, molecular dynamics, and elevator scheduling simulations (using genetic algorithms). ICFD Japan did some graphics for the Mt. Unzen eruption, internal combustion engines, and earthquake simulations, and some fluid-flow work for a local car company (diagnosing errors in the code the company had used to design passenger compartments). Wuppertal University did thermodynamics, industrial robot modeling, and quantum electrodynamics.

Karplus’s group at Harvard ported the multi-thousand-line kernel of their molecular dynamics code, CHARMM, to the Connection Machine. Then, as now, fully ab initio protein folding is, as a practical matter, beyond supercomputers, but researchers try anyway. At this moment I don’t recall precisely what AMEX, NCSA, HSU, JPL, NSW, IPG, RWCP, NCAR, etc… did with their Connection Machines, but it is a fair assumption that a wide variety of problems was attacked.

Could all the companies and universities and governments that bought the machines have been mistaken in doing so? If you answer “Yes, they were all mistaken,” then ask yourself why so many supercomputers today are parallel machines. As I write this, all the fastest supercomputers in the world are what Thinking Machines people would have called massively parallel. In this sense, the MPP, the DAP, the Connection Machine, and their ilk were early versions of what is currently recognized as the standard way to build a supercomputer. [All this might change with quantum computers, but massively parallel quantum computers will still be faster than scalar quantum computers or small clusters of them.] Also, note that parallel design has become the de facto standard in high-end video card design; such video cards are effectively highly parallel computers attached to PCs. In short, “massively” parallel machines such as the Connection Machines were early explorations of what has become *the* recognized way to design extremely fast computers for essentially every application. Timeparticle (talk) 23:06, 22 November 2014 (UTC)

Last one or two chapters of Hillis's book

The last chapter or two were Hillis's ideas about how this architecture could help to reformulate computer science to look more like mathematical physics. Instead of writing code, we'd be writing equations like the wave equation or the diffusion equation, describing phenomena in space and time, and the processors would populate the area of interest like a finite-element model and run the calculation.

I don't know if it deserves mention in the article, but it was very poetic in its way. I wish I knew where I put that book; I'd like to read it again some day. -- WillWare (talk) 14:14, 2 December 2009 (UTC)
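
For anyone curious what that style of programming might look like, here is a minimal sketch (my own, not from the book) of an explicit time step for the 2-D diffusion equation, du/dt = alpha * laplacian(u), written in Python with numpy whole-array operations standing in for the per-cell processors:

 import numpy as np
 
 def diffusion_step(u, alpha=0.1):
     """One explicit step of du/dt = alpha * laplacian(u).
     Every interior cell updates in lockstep from its four neighbors,
     as if each cell lived on its own processor; boundary cells stay fixed."""
     lap = np.zeros_like(u)
     lap[1:-1, 1:-1] = (u[2:, 1:-1] + u[:-2, 1:-1] +
                        u[1:-1, 2:] + u[1:-1, :-2] - 4.0 * u[1:-1, 1:-1])
     return u + alpha * lap
 
 u = np.zeros((32, 32))
 u[16, 16] = 100.0                # a hot spot in the middle of the domain
 for _ in range(200):
     u = diffusion_step(u)
 print(u[14:19, 14:19].round(2))  # the heat has diffused outward

The program is essentially the equation plus a grid of cells, which is roughly the reformulation Hillis was gesturing at.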

How fast is this computer compared to an ordinary computer today?

Like an ordinary computer with an Intel i5 CPU. 119.85.245.211 (talk) 23:50, 10 April 2013 (UTC)

Citation needed on Maya Lin

I was curious about the note that Maya Lin did the external design, especially since there wasn't a citation in the article. Searching on Google, I mostly see citations back to this Wikipedia article. I did find an A-to-Z book that mentions Maya Lin's contribution in passing, but I think a more authoritative reference would be useful. In case it's a good place to start: http://books.google.com/books?id=pNmm_Axdor8C&pg=PA113&lpg=PA113&dq=%22maya+lin%22+%22connection+machine%22&source=bl&ots=xSnU8eXFfz&sig=olII6rQUrDO7wUR7RIN6jb23hZo&hl=en&sa=X&ei=PZrlUo_YM8HwoASstIGQCA&ved=0CDcQ6AEwAg#v=onepage&q=%22maya%20lin%22%20%22connection%20machine%22&f=false

Npdoty (talk) 23:35, 26 January 2014 (UTC)[reply]

External links modified

Hello fellow Wikipedians,

I have just modified one external link on Connection Machine. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 18 January 2022).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 11:28, 6 December 2017 (UTC)