Message boards :
Number crunching :
Xeon Phi (aka Knights Corner, MIC)
Ex: "Socialist" Send message Joined: 12 Mar 12 Posts: 3433 Credit: 2,616,158 RAC: 2 |
Xeon Phi (aka MIC) is finally coming to fruition. By the end of this year we should start having a real idea where this is going. I am looking forward to seeing a CPU solution to parallel processing. One of the interesting points of the Xeon Phi, as opposed to GPUs, is that the Phi can be used as a dedicated cruncher or as a system co-processor (I'm assuming you'll need a Xeon CPU to pull this off). It will be running its own Linux, which I've heard you will even be able to SSH into. (I'm hoping for a web GUI ;-)) I'm waiting hopefully to see if it will be released in an x8 package (in addition to the x16 pictured below), for use in existing server systems that lack PCIe x16 slots. More Info. #resist |
.clair. Send message Joined: 4 Nov 04 Posts: 1300 Credit: 55,390,408 RAC: 69 |
I will wait for real-world comparisons against the likes of the 680 and 7970 before I sell an arm and a leg to pay for Intel's Tesla. Nice piece of kit though. |
Cosmic_Ocean Send message Joined: 23 Dec 00 Posts: 3027 Credit: 13,516,867 RAC: 13 |
Well, with the way PCI-e connectors work, if you didn't care about any warranties, you could take an x16 card and actually cut the connector to make it fit into an x8 slot. Conversely, you can cut the end of an x8 slot so you can put an x16 card in there. Either way will work. Throughput will be half of the theoretical max for x16, but as we've seen around here on the forums, GPUs don't slow down much at all when put on x1. Linux laptop: record uptime: 1511d 20h 19m (ended due to the power brick giving up) |
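To put the x16/x8/x1 discussion in rough numbers, here is a quick back-of-the-envelope sketch. The ~500 MB/s per-lane figure assumes a PCIe 2.0 link (PCIe 1.x is roughly half that); treat it as a ballpark, not a spec quote.

```python
# Back-of-the-envelope PCIe bandwidth by link width.
# Assumes ~500 MB/s effective per lane (roughly PCIe 2.0);
# PCIe 1.x would be closer to 250 MB/s per lane.

def pcie_bandwidth_mb_s(lanes, per_lane_mb_s=500):
    """Theoretical aggregate bandwidth for a PCIe link of the given width."""
    return lanes * per_lane_mb_s

for lanes in (16, 8, 1):
    print(f"x{lanes}: ~{pcie_bandwidth_mb_s(lanes)} MB/s")
```

So cutting an x16 card down to an x8 slot halves the theoretical throughput, and x1 leaves about 1/16 of it; whether that matters depends on how chatty the app is over the bus, which is why GPU crunching barely slows down on x1.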
HAL9000 Send message Joined: 11 Sep 99 Posts: 6534 Credit: 196,805,888 RAC: 57 |
Well with the way PCI-e connectors work, if you didn't care about any warranties, you could take an x16 card and actually cut the connector to make it fit into an x8 slot. Conversely, you can cut the end of an x8 slot so you can put an x16 card in there. Either way will work. Throughput will be half of the theoretical max for x16, but as we've seen around here on the forums, GPUs don't slow down much at all when put on x1. I would probably feel safer screwing a heavy-duty paper clip into my soldering station iron and carving out a notch in the connector, rather than trying to Dremel off the connector edge. I've been thinking of trying that on my MoDT system that only has an x1 connector; since that system is starting to get wonky, I wouldn't feel bad if I killed it. Once these come out and some make their way into the hands of developers for app testing, it will be nice to see how they compare in performance/cost. SETI@home classic workunits: 93,865 CPU time: 863,447 hours Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url] |
fataldog187 Send message Joined: 4 Nov 02 Posts: 42 Credit: 1,271,261 RAC: 0 |
Well with the way PCI-e connectors work, if you didn't care about any warranties, you could take an x16 card and actually cut the connector to make it fit into an x8 slot. Conversely, you can cut the end of an x8 slot so you can put an x16 card in there. Either way will work. Throughput will be half of the theoretical max for x16, but as we've seen around here on the forums, GPUs don't slow down much at all when put on x1. Why wouldn't you use one of those PCIe ribbon riser/extension things instead? |
HAL9000 Send message Joined: 11 Sep 99 Posts: 6534 Credit: 196,805,888 RAC: 57 |
Well with the way PCI-e connectors work, if you didn't care about any warranties, you could take an x16 card and actually cut the connector to make it fit into an x8 slot. Conversely, you can cut the end of an x8 slot so you can put an x16 card in there. Either way will work. Throughput will be half of the theoretical max for x16, but as we've seen around here on the forums, GPUs don't slow down much at all when put on x1. After looking at my board, I wouldn't be able to fit an x16 card even if I modded the connector. So one of those ribbon solutions would have to be the answer for that system, if I really wanted to go for it. I don't know if I really want to spend the $4 though. lol Really I have been thinking about replacing the system with a newer board that has an x16 slot, such as a Super Micro X9SCV-Q, but I am hoping for a newer release using a 70-series chipset, and also that one of those Xeon Phi boards will work in it. |
Ex: "Socialist" Send message Joined: 12 Mar 12 Posts: 3433 Credit: 2,616,158 RAC: 2 |
I will wait for real world comparisons against the likes of 680 and 7970 before i sell an arm and a leg to pay for Intel`s Tesla. Sounds like these bad boys are going to need some serious resources to run properly: Intel said its Xeon Phi boards will have at least 8GB of GDDR5 memory, which is a third more than current generation Nvidia Tesla cards. Hazra told The INQUIRER that local memory will be what determines the overall performance of Xeon Phi. That tells me that perhaps, cost-wise, GPUs may continue to be the better option... For now ;-) And as far as hacking my slots or buying adapters, nope, not in my little server; I love her and respect her too much, not gonna happen. If it's not a stock fit it won't be used. |
HAL9000 Send message Joined: 11 Sep 99 Posts: 6534 Credit: 196,805,888 RAC: 57 |
I will wait for real world comparisons against the likes of 680 and 7970 before i sell an arm and a leg to pay for Intel`s Tesla. Let's hope they are priced better than the current NVIDIA Tesla boards, as the M2090 runs in the neighborhood of $2500. |
-BeNt- Send message Joined: 17 Oct 99 Posts: 1234 Credit: 10,116,112 RAC: 0 |
I doubt they will be priced better than the Tesla cards, if at all. Depends on the speeds, I would imagine. If they are on par with or close to a Tesla performance-wise, they will be in the same neighborhood. If they are faster, get ready for $4-5K per. Traveling through space at ~67,000mph! |
Ex: "Socialist" Send message Joined: 12 Mar 12 Posts: 3433 Credit: 2,616,158 RAC: 2 |
I doubt they will be priced better if at all than the Tesla cards. Depends on the speeds I would imagine. If they are on par or close to a Tesla performance wise they will be in the same neighborhood. If they are faster, get ready for $4-5K per. Ouch... This is starting to sound more and more like a dream than a reality. Even supposing $2K, that makes that compute card more expensive than my entire little Xeon server. |
tbret Send message Joined: 28 May 99 Posts: 3380 Credit: 296,162,071 RAC: 40 |
I doubt they will be priced better if at all than the Tesla cards. Depends on the speeds I would imagine. If they are on par or close to a Tesla performance wise they will be in the same neighborhood. If they are faster, get ready for $4-5K per. That's been my major objection to the Tesla cards for crunching. I read somewhere that nVIDIA is having to take the cost/benefit ratio into consideration with regard to their Tesla offerings vs their GeForce offerings. I doubt that we'll see inexpensive MICs as PCIe cards, maybe ever. It will be interesting to see how it evolves, though. Maybe something like PCIe (or its replacement) external connections to stand-alone MICs you stack next to your workstation? Maybe something really wild I can't even imagine will come out of this. eSATA seemed pretty esoteric to me after dealing with external drives on a parallel port, so I'm easily impressed, whatever happens. |
HAL9000 Send message Joined: 11 Sep 99 Posts: 6534 Credit: 196,805,888 RAC: 57 |
I doubt they will be priced better if at all than the Tesla cards. Depends on the speeds I would imagine. If they are on par or close to a Tesla performance wise they will be in the same neighborhood. If they are faster, get ready for $4-5K per. You can already get external PCIe enclosures, and I have read that there will be some Thunderbolt enclosures soon, unless they have already come out. What I want now is some way to connect machines together so that a machine could be presented to the system as a coprocessor. |
Horacio Send message Joined: 14 Jan 00 Posts: 536 Credit: 75,967,266 RAC: 0 |
Ouch... This is starting to sound more and more like a dream than a reality. I guess the gamers' segment will always beat the high-performance computing segment in number of customers... And also, if a GPU becomes too expensive, daddy will not buy it... If a business requires a MIC, the boss has to buy it... What I want now is some way to connect machines together so that a machine could be presented to the system as a coprocessor. Isn't that what BOINC does? :D |
HAL9000 Send message Joined: 11 Sep 99 Posts: 6534 Credit: 196,805,888 RAC: 57 |
Ouch... This is starting to sound more and more like a dream than a reality. In the current universe, where I live, BOINC cannot tell the machine it is running on to use the one next to it as a coprocessor. |
Horacio Send message Joined: 14 Jan 00 Posts: 536 Credit: 75,967,266 RAC: 0 |
In the current universe, where I live, BOINC can not tell the machine it is running on to use the one next to is as a coprocessor. You need the BOINC server apps to do that... ;b (All our hosts are a kind of coprocessors of the projects' servers... anyway, I was not trying to be really serious... it was just a kind of joke...) |
shizaru Send message Joined: 14 Jun 04 Posts: 1130 Credit: 1,967,904 RAC: 0 |
Ouch... This is starting to sound more and more like a dream than a reality. I'm guessing your enthusiasm temporarily impaired your reading comprehension! ;) From the Enquirer article: Intel said its Xeon Phi boards will have at least 8GB of GDDR5 memory, which is a third more than current generation Nvidia Tesla cards. And from the AnandTech article that first got your attention: ...Xeon Phi co-processors won’t be available until the end of the year – if not next year – but regardless the timing is such that Intel will be going up against NVIDIA’s GK110-based Tesla K20, which is similarly expected by the end of the year. So definitely not a GeForce series competitor... Press releases are always full of fanfare and usually quite boring, but I thoroughly enjoyed the Xeon's: Intel Xeon Processors E5 Achieve Fastest Adoption, Announcing Xeon Phi Co-Processors. Here's what I liked: The "SuperMUC" supercomputer at LRZ in Germany, which ranked fourth on the list, delivers 2.9 PetaFLOPs of performance, making it the most powerful in Europe, as well as the largest installation based on the new Intel Xeon processors E5 family. Did Intel just say that they are going from 3 PFLOPs to 1000 in just six years!? Exponential indeed... Even if they do it in eight! Providing a two week weather forecast with the same level of accuracy as in a 48-hour forecast, or mapping the human genome within 12 hours at less than $1000 as compared to the current two weeks at $60,000, are two examples of the many challenges that HPC will address with more compute capacity. The insatiable need for performance has driven the tremendous growth of HPC processor shipments over the last five years, and Intel predicts this growth will reach more than 20 percent CAGR in the next 5 years. Intel forecasts2 [sic] that the most powerful supercomputer in 2013 will feature the amount of CPUs greater than 1 percent of its own server CPU shipments in 2011.
That's 28x faster (pretty much consistent with the exascale claim above) and, more importantly, 60x cheaper. Fun times :) Sure this stuff is out of our league, but it's reassuring to know the competition is fierce. Plus it'll trickle down to us setiheads one way or another. (You know, it never crossed my mind that the "48hr weather precision" could be a computing bottleneck! For the past few years I just assumed it was a modeling/chaos problem.) |
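The 28x/60x figures follow directly from the genome-mapping numbers in the press release quoted above; a quick sanity check:

```python
# Sanity-checking the genome-mapping figures quoted above:
# the then-current two weeks at $60,000 vs. the projected
# 12 hours at under $1,000.

current_hours = 14 * 24   # two weeks
target_hours = 12
speedup = current_hours / target_hours
cost_ratio = 60_000 / 1_000

print(f"~{speedup:.0f}x faster, ~{cost_ratio:.0f}x cheaper")
```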
Fred J. Verster Send message Joined: 21 Apr 04 Posts: 3252 Credit: 31,903,643 RAC: 0 |
Ouch... This is starting to sound more and more like a dream than a reality. No, it's just that the sheer amount of data/time requires powerful hardware, maybe 10 TFLOPs, maybe even more. And modelling/chaos problems don't have to be calculated within 14 days; weather forecasts do. I'll follow the story; it is the future, and I also find it interesting. I wonder what price tag this baby will have and what its power use is. (I'd calculated that you can compare this to 256 GTX 580 GPUs!) |
Ex: "Socialist" Send message Joined: 12 Mar 12 Posts: 3433 Credit: 2,616,158 RAC: 2 |
external connections to stand-alone MICs you stack next to your workstation? You can do that with Tesla cards; I know it's possible to literally build or buy a Tesla compute "box" that wires into your rig. Again, we're talking multiple thousands of US dollars. What I want now is some way to connect machines together so that a machine could be presented to the system as a coprocessor. Oh how sweet it would be... Shouldn't we be able to do this natively with *nix-based systems??? Now I'm curious. |
tullio Send message Joined: 9 Apr 04 Posts: 8797 Credit: 2,930,782 RAC: 1 |
There was a software tool called Parallel Virtual Machine (PVM), from the University of Tennessee, which allowed this. Once I connected a Bull/MIPS minicomputer and a Sun SPARCstation using this tool. Tullio |
HAL9000 Send message Joined: 11 Sep 99 Posts: 6534 Credit: 196,805,888 RAC: 57 |
external connections to stand-alone MICs you stack next to your workstation? If you mean a Beowulf cluster configuration, then yes, *nix systems can do that without too much effort. However, BOINC uses shared memory to talk between the client and the apps, which this cluster configuration can't give you across nodes. If you want shared memory access across nodes, at the moment you have to spend tens of thousands of dollars per node. Perhaps someone could create an app that would work in this configuration: BOINC could run a 'controller' app, communicating with it via shared memory segments, and the 'controller' app could talk to a science app that runs on the node. As BOINC doesn't have any way to set any kind of controls on each instance of an app it runs, there would probably need to be a layer in between: the 'controller' app would talk to it, and then it would have the job of working out which instance talks to which node and such. Another approach, which has been brought up before, is a super-client version of BOINC, where you have one client that runs a master client app and then all of the nodes run a client app that only talks to the master client app. Either way would require quite a lot of coding, I would imagine. |
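BOINC has no such controller today, so purely as an illustration of the idea, here is a minimal sketch: a hypothetical relay_once() that snapshots a local shared-memory segment (standing in for the BOINC client-to-app channel) and forwards it to a science app on another node over TCP. Every name here is invented for the example; none of it is BOINC API.

```python
# Hypothetical sketch of the 'controller' layer described above:
# read a local shared-memory segment (a stand-in for BOINC's
# client<->app channel) and forward its contents to a remote node
# over TCP. Invented names; not part of BOINC.
import socket
from multiprocessing import shared_memory

def relay_once(shm_name: str, host: str, port: int) -> int:
    """Snapshot the named shared-memory segment and push it to the
    given node. Returns the number of bytes forwarded."""
    shm = shared_memory.SharedMemory(name=shm_name)
    try:
        payload = bytes(shm.buf)  # copy out the segment contents
        with socket.create_connection((host, port)) as sock:
            sock.sendall(payload)
        return len(payload)
    finally:
        shm.close()
```

A real implementation would also need the reverse path (results written back into shared memory) plus the dispatch layer the post describes, i.e. deciding which app instance maps to which node.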
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.