
gzader Blog

I REALLY hate Vista. Goodbye MS.

Let me start this off by saying MS gave me Vista x64 Ultimate for free. Yep. From one of the many conferences I attended, I would guess. It's not the first time they've given me Vista for free; I have a free Business edition as well. I had more copies of Vista, but I gave them away. I intended to stay with XP for a very long time. I have a lot of free MS stuff that's just sitting there.

So going into it, I'm happy I don't have to spend $75-$150 on Vista. Let me tell you, I am VERY happy I haven't spent any money on this. I don't think I could stay sane if I had.

So let's put it this way. I quit. I'm done. MS, it's over. It's not me, it's you. Really. I tried to change for you, but I just can't do it anymore.

I've found someone new. Ubuntu is new, it's exciting. I think this could be something really special. It's going to take some time but I think maybe we were made for each other.

It'll take some time, and sadly, I have to keep you around for a while, if only for the kids. (Do you even remember them? Fallout, FEAR, BioShock, Crysis?) You don't even pay attention to them anymore. It's just so much work to get you to spend a little time with them. You used to care. It used to mean something to you. What happened?

It's like you just stopped caring. You got so wrapped up with being you that you didn't care about anyone anymore.

I don't know what I'll do with the kids. I'll find a way to get by. I did great before you came along and I suspect I'll do even better once I get used to you being gone.

Worst mistake ever...

Okay, so you go through life, and you make mistakes. Some mistakes are small, others are larger. If you're lucky, your worst mistake ever doesn't affect anyone else. If you're not... people get hurt.

My worst mistake ever just happened.

I went to Captain John's Crab House (and Seafood Buffet) in Virginia Beach. Let me start off by saying that three miles away, and for two dollars more, there is a much better seafood buffet called Captain George's. I had not been to Captain John's in years, 15 or so. Back then they had lobster on the buffet. So it was at the very least time for a review.

But it's not the same place; at the very least, they've moved. It's likely not the same owners, and if it is, something has gone very wrong.

Clue #1: The plates they served the snow crab on come from a different restaurant (Boston Lobster House). As I ate my meal, I longed to be at the Boston Lobster House, or anywhere else for that matter. (Oh, to be back in math class, learning how to do fractions for the first time while having a long, painful dental procedure done by an intern on their first day. A far, far better place.)

Clue #2: The "hair" found in the macaroni and cheese. Well, to be fair, there was no real cheese in the macaroni. It was pretty much just wet noodles. The hair was of a type that Clarence Thomas might note.

Clue #3: The urinal in the men's room doesn't flush. It's a minor thing, having toilets that flush. But when you want that transient bus-station ambiance, well...

To be fair, the crab was really good. The she-crab soup was good. Everything else was... well... let's just say my daughter was very happy that they didn't do anything to the pudding. It's sad when you mess up pudding, since it comes out of a can ready to serve. Thankfully they left it as is, and it's likely one of the best things on the menu.

So you're thinking, "Ha! Seafood buffet! You should know better than to order a buffet when you can get it fresh." Normally I'd agree with you, but this place was known for its buffet. It used to be good; it should STILL be good.

It's not.

Just say no, spend the extra $2, and go a few miles up Laskin Road to Captain George's. Let Capt. John go down with the ship. It's a far better fate.

Clue #4: I took my daughter to McDonald's afterwards so she could get a nice meal. Sad.

UPDATED: More Goofy Hobbies

Once again, I'm in the Inq, on the letters page today.

:)

 

I -really- need to post some build pics soon.

 

UPDATE:

On the Inq again...

http://www.theinquirer.net/default.aspx?article=40095

 

AND I think I'll be in InformationWeek soon, and... I think I got mentioned on the radio last week, but I missed when they would have said it, so... no count there...

 

Goofy hobbies

Ahhh, dumb fun. I got quoted on the Inquirer. Want to find it? It's within 48 hours of this date.

See, I have this odd hobby. I like getting my name said on the radio or put in print. If you listened to 'my' station here, you'd hear my name about once a month. If you read the same magazines as I do, you'd see my name about three times a year.

Now I've added websites. Note, it can't be just a simple comment form, it's got to be in the article to count.

C'mon, like that's the weirdest hobby you've ever heard of!

Future predictions and rumors.

These are some random thoughts I had over the weekend, finally took a minute to write a few of them down.

AMD Barcelona may be in production right now, or should be within the next 30 days. If you figure the turnaround time on chips, AMD needs to be in production now to have chips for delivery in June/July. AMD could also announce that they are ready to take orders in June/July, in which case they could delay slightly. However, judging by pent-up demand (yours truly is delaying several server and desktop purchases for the release of these new chips), it's very possible that they are building some inventory volume right now in order to meet demand.

So why isn't AMD talking about it? They've got a lot of old tech they are still trying to EOL (end of life), not just because there is less demand for it, but also because AMD is capacity constrained. By removing chips that are less profitable and/or less desirable to the market, they free up production for the more desirable chips. Of course, special requests (Dell's el-cheapo lines) will keep some of those less desirable processors in the market and in production.


AMD 4x4 chipset. AMD will build one sooner or later. First, the whole AMD / nVidia question was about having an across-the-board chipset brand like Centrino or Viiv (rhymes with dive). With 4x4 being its workstation-c.l.a.s.s. (you can't say that word otherwise... silly GameSpot!!) product, it makes sense to have its own chipset there. But it also makes sense to make that chipset better. If you look at the design of the 4x4, one processor has to loop through the memory controller of the other in order to get to the outside world. That's not terrible, but it does add latency and some potential peripheral contention. What's the solution? Take the chipset and give it a crossbar design like the native quad-core chips have.

This would allow the chipset to talk to both sockets directly without the current loop-through. Further, if you want to get really fancy, you can have each processor talk to a secondary chipset directly as well. This would allow socket 1 to talk to PCIe x16-A and socket 2 to talk to PCIe x16-B directly without having to share bus space. Granted, with HT3.0 it's less of an issue, but the latency that builds up is always a concern.
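
To make the loop-through point concrete, here's a tiny toy model in Python (my own illustration, not anything out of an AMD spec) that just counts hops to the chipset in each layout:

    # Toy hop-count model (illustration only, not from any AMD document).
    # Current 4x4: socket 1 reaches the chipset only through socket 0's memory controller.
    current  = {"socket0": 1, "socket1": 2}   # HT hops from each socket to the chipset
    crossbar = {"socket0": 1, "socket1": 1}   # hypothetical crossbar: both link directly

    for name, hops in (("current daisy chain", current), ("crossbar", crossbar)):
        print(name, "-> worst case", max(hops.values()), "hop(s) to the chipset")

Every extra hop is extra latency and one more place for traffic to pile up, which is the whole argument for the crossbar.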

Now add to that AMD's 'alien processor' slot. It lets a third-party processor drop into an AMD motherboard. Fusion would be a good first candidate, but it's going to need a lot of bandwidth to talk to the outside world; that's one more reason to have direct connectivity between the chipsets and processors.


Microsoft is a thorn in the side of PC game developers. Right now, the bulk of the PC gamer community runs on DX9. DX10 is way better, but there's not a lot of hardware for it in the marketplace. So DX9 rules. Game developers now have to think about three targets when writing software: DX9 on XP, DX9 on Vista, and DX10 on Vista. It's more work, and it makes developing for and supporting the PC more expensive. Add in that nVidia and ATI would both love it if XP could run DX10, and you have a mess.

MS will feel pressure from the hardware vendors to get DX10 on XP. Game developers would LOVE to deal only with DX10 and DX9 on XP and offer some token support to Vista until it actually has some market presence. Will MS backpedal and release DX10 for XP? Maybe. Vista's adoption, or lack thereof, will drive that.


AMD limited edition 45nm server chips? AMD, unlike intel, isn't in the race by itself. It has tech partnerships with others, as well as outside fabs. Some of those outside fabs are running 45nm for some production. Depending on your view of AMD, it's either just now getting into 45nm pre-production, or it already had the ball rolling while 65nm was still in pre-production. If they're in production now, and the general transition time is one year for the first yield-ready parts, then we might see the first chips hitting in the Oct-Dec range. If they're just getting everything rolling now, then we're looking at a March-July range in 2008. AMD doesn't need 45nm to compete yet. Its 90nm parts were, for the most part, the power/performance equal of most of intel's 65nm line. The new revision processors will likely play a similar game with intel's first 45nm line. AMD wants to win the power/performance race clearly, so a push for 45nm tech makes sense.


Keep watching.

DRM, Fair Use, HDCP and Vista, Part 1


This started off as a series of posts over in the Over Clocking Union. I've put it all together here because I think it would be good to have it all in one place.

This is just a quick primer to give you a little background on the where and why of some of the related DRM technologies and why some people do and don't want it.

Let's get some basics out first: 2k and 4k.
Movie production has moved pretty fully into the digital domain. The original film (when film is used) is scanned in at what's known as 2k or 4k. These are shorthand for the base resolution. 2k and 1920x1080 are pretty close resolutions; 4k, as you might imagine, is higher end. I'm keeping this general because the detailed specs don't really matter.

An average movie is done at 2k; a high-end, quality-matters production gets done at 4k. Spider-Man 2, as an example, was (if I remember correctly) scanned in at 4k, but due to cost, the effects were done at 2k.

HD-DVD and BluRay allow movies (HD movies for short) to be displayed at 1920x1080, which, as stated above, is pretty close to 2k. The primary difference is that the HD movie has been compressed by one of several potential codecs, which reduces quality slightly, but at a level that should be tolerable or even unnoticeable to the viewer.
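
If you want a feel for how close "pretty close" really is, here's a quick back-of-the-envelope pixel count. I'm assuming the common 2048x1080 definition of 2k here; treat it as illustration rather than gospel:

    # Rough pixel-count comparison between a 2k scan and a 1080p HD disc.
    res_2k   = 2048 * 1080   # assumed 2k scan resolution
    res_1080 = 1920 * 1080   # HD-DVD / BluRay output resolution
    print(f"2k: {res_2k:,} pixels, 1080p: {res_1080:,} pixels")
    print(f"1080p is only about {100 * (1 - res_1080 / res_2k):.0f}% smaller")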

If you were to release an HD movie using standard DVD encryption, then movie pirates would have a near-perfect-quality copy of the movie. This near-perfect copy could then be pressed and sold around the world and would rob the studio of profits. The more money a movie makes, the more likely it will have a sequel, or at least allow the people that produced the movie to go out and make another one.

So copy protection is important at some level to discourage copying. What that level should be is, of course, open to much debate.

Copy protection in its basic form is a method simply used to discourage a user from making a casual copy of an item. As an example, DVD copy protection was broken years ago. As such, many DVDs end up on torrent sites, and if you want a movie, you don't have to pay for it, but you run the risk of violating the law and being fined or doing jail time.

Most movies are below $30US at release and the price falls from that point on. After several years many movies can be had for less than $5. Even though DVD protection has been broken, movies are still sold every day and a large revenue stream is created from them.

The average user (that means NOT you if you are coming here) cannot copy a DVD without a level of effort. Therefore if they want to see a movie, they must rent or purchase it.

Audio CDs do not have copy protection, so it is much easier for them to be copied. Audio CD sales have slid. Some argue it's a lack of quality material being released; others argue it's the lack of copy protection. Regardless, the lack of protection has made it much easier for people to copy audio CDs, and from casual observation on my part, less technical users are able to do it easily, and they do share purchased content. One purchase, as an example, might be shared with three or four family members. Whether the original content would have been purchased by each person if they could not casually copy it is open to debate. It would seem there would at least be an increased chance that more content could have been sold.

Generally, legal content purchasers are older and have more disposable income. Content copiers tend to be younger, with less income; however, more of it is considered disposable, and they are a highly targeted demographic.

Note, this can also be broken down by nation and region, but that gets very political very quickly. China, as an example, has had an issue with piracy. Average income in China vs. average income in the US is an interesting comparison. There are many wealthy Chinese, but there are huge numbers of poor, which skews the statistic dramatically. As such, it seems understandable that a movie that sells initially for $30 US might sell for $1 in China. I'm going to use China as an example, but this should in no way imply any negative statement; it's simply an easy example.

With this in mind, the content industries (I'll say Hollywood from here forward, as it's a good example) want to discourage casual copying and would prefer to stop all copying.

A simple economic example:
Movie X gets made. The cost for it is $100 Million. In theatres the movie makes $100 Million so they have broken even. They have not lost or made money.

The movie was also copied from a theatrical showing. This copy is of lower quality than the film. Regardless, within hours it is available for sale in China. Several versions are available: one with subtitles and one with a native-language voice-over. The second one represents added value and sells for more, in order to pay the workers in China who did the voice-over. Hollywood in this case has made no money from these transactions and actually considers this a loss of potential revenue.

A period of time passes and the DVD version of the movie is released. This creates the first real profit for the movie, and now the studio's investment is beginning to earn money. At the same time, this DVD is copied and released in China. Again, there is a subtitled version and a native-language version. The DVD represents a significant increase in quality over the original camcorder copy, and sales will continue.

There are ways to combat this. This is where DRM comes in, but first, the alternative method: sell the movie in China on DVD, with Hollywood's own native-language version, at the same time as the film's first release. This would allow for legitimate sales. Hollywood is starting to do this. It has its own issues, but it does help create a climate where piracy declines.


DRM:

So the other way is to make copy protection that cannot be 'cracked.' In the history of man, very few encryption systems have not been broken. All 'digital' encryption systems are flawed in that brute force can, over time, yield the original content. The time taken to do this, however, can be so extreme that the content is no longer valuable by the time it's recovered. (If a copyright on a work lasts 100 years, but it takes 101 years to break the encryption, then the movie would have been free at that point anyway.)
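
Just to put "extreme" in perspective, here's a rough back-of-the-envelope calculation. The 128-bit key size and the billion-guesses-per-second attacker are my own assumptions, purely for illustration:

    # Brute-force time for a 128-bit key at an (optimistic) billion guesses per second.
    keyspace = 2 ** 128                   # possible 128-bit keys
    guesses_per_second = 1e9              # assumed attacker speed
    seconds_per_year = 60 * 60 * 24 * 365
    years = keyspace / guesses_per_second / seconds_per_year
    print(f"~{years:.2e} years to search the whole keyspace")   # on the order of 10^22 years

Which is why nobody actually brute-forces these things; they go after the keys and the players instead, as the Xing story below shows.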

The other flaw is that content must be decrypted, and the method for decrypting can be discovered. DVDs were cracked because the decryption system was easily visible in the Xing decoding software. Once the methodology was discovered, multiple other decryption keys were derived (keys not present in the Xing software). This meant it was no longer feasible to fix things by encrypting new discs so that only the Xing key would stop working.

As an HD movie is considered to be of very high quality, there is an even greater concern over the potential for copying. This has given rise to DRM.

DRM means either Digital Rights Management or Digital Restriction Management depending on who you talk to. Your rights in the case of DRM are restricted to only playing certain content in certain devices in certain ways.

In order to enforce DRM, the content must be protected, typically via encryption, to prevent you from having unrestricted access to it.

DRM may specify that unless a playback device is certified end to end (media goes in, movie is displayed), content may only be viewable in a restricted form. That restriction can mean reduced quality, in both picture resolution and sound.

In order for unknown components to be certified, they must meet certain standards so that high-quality end-to-end playback can be achieved. In the case of HD movies, the restriction is that in no way should the data be accessible in an unencrypted form until its final display; in this case, on your HD viewing device (a 1080p monitor, as an example).

This may mean that the chips on the motherboard cannot have exposed pins or points of electrical contact that might allow a user to extract a digital signal. (This is an actual specification.) DVI, as an example, is not a valid connection, as the unencrypted data is available for recording, and the recorded data can be easily reconstructed back into an unencrypted video file. HDMI without HDCP has the same problem, for the same reasons.

HDCP basically (as I understand it) continues to pass an encrypted signal all the way to the display device. The device then decrypts the signal in such a way that it is no longer reconstructable, or the effort of doing so far exceeds the value of doing it.


The issue then becomes what happens when the certified chain is broken. In some cases, this means that certain unencrypted output points are disabled. In other cases it may mean that the quality of those outputs is severely degraded. If your 1080p display cannot accept HDCP input, then your 1080p display may not have any advantage over a similarly sized 640x480 display.

Consumer reaction to this will most likely create a large level of dissatisfaction. Increased consumer dissatisfaction can lead to a technology failing. The failure of a technology such as HD movies means a significant delay before users with higher-end display systems can take advantage of those displays, and that results in a general economic slowdown in the industry, which in turn stifles innovation.

The question then comes down to how much DRM is enough?

----
Next up is Vista and DRM. I don't really think MS is trying to kill PC gaming; I just think they are shooting themselves in the foot, and the end result could be that they do kill off PC gaming.

DRM, Fair Use, HDCP and Vista, Part 2


This started off as a series of posts over in the Over Clocking Union. I've put it all together here because I think it would be good to have it all in one place. This part is a little rant-ish and needs some editing. The second half addresses a few issues and questions that were raised from posts on the first part.



--- ranty part----

I encourage you to read:
http://www.cs.auckland.ac.nz/~pgut001/pubs/vista_cost.txt

I have not checked all the facts myself, but I believe it's pretty much on the level. The DRM (digital restriction management, NOT digital rights management, as you don't have rights under DRM) claims are all true. The statements about not letting a user tap an unencrypted pin on the circuit board are true.

You might have noticed that, for some reason, the DX10 generation has had a little trouble. There is NO DX10 driver from nVidia, and they are not being very free with the beta. Also, a lot of stuff that nV said would work on their new card actually doesn't yet. Like SLI 1080i output.

AMD/ATI has no DX10 part at all and the release is significantly delayed.

The earliest demos showing DX10 gaming have been (in my estimation) based on PRE-DRM parts and without the OS fully running in 'DRM' mode.

Vista has already been shown to use 20% more power, and I cannot imagine that's from the 3D aspects of the chip just being on. There's just not enough '3D' going on to make the chip draw that much more power.




So, why is MS destroying the PC gaming platform?
1. They can. If it really dies off, there's that whole Xbox thing.
2. The PC is what allowed the DVD to be ripped off and stolen, and they likely got some backlash from that.
3. They want to be "Hollywood's friend" so that they can get more content going through their VC-1 codec. (MS will make up to $1.00 per device that uses that codec. Sorry, I read that a year or three ago and no longer have the source; likely TV Technology Magazine.)
4. Because they can. What, will you switch to a PS3 for your gaming? They know a good chunk will pick up the Xbox, as the games are similar. There is likely an internal report saying that any gamers lost have a 50% chance of ending up on Xbox. And not all gamers will jump at once, but some will, with more after the fact.


Here's the issue, though. Vista with DRM is unsuitable for critical business use. The linked article shows it is unsafe to use in the medical community without dramatic disclosure of DRM-triggered events.

It's unsuitable for use by government, as an outside source could 'disable' all machines quickly via DRM deactivation.

It's unsuitable for 'basic' critical business server use because your business could be turned off at the whim of the OS without warning. It's one thing to have a server fail because the CPU kicked the bucket; it's another thing to fail because the OS says you're no longer certified to run it.



The thing we'll have to watch for is to see just how much slower Vista is on basic tasks vs XP. If it's a lot slower, how do you justify moving to it? The security model looks better, and there is significant value to that. DX10 is clearly better than DX9.

But no biz cares about DX10. They care about user retraining and the cost of running the PC on a day to day basis.

What the heck is MS thinking?

--- end ranty part --- the follow up to it is below.

My first post got a little ranty, and some of it was a little misunderstood (or, more likely, mis-written), so let me touch up a few items.

First of all. MS does care about gaming and MS does NOT care about gaming.

Remember a few years back when the Xbox came out and MS sort of just plain stopped making games for the PC? Oh sure, MS is a small company, so they needed to focus (just like you only have a few cells in your body, so when you walk you need to focus and can't chew gum at the same time).

MS realized that they were losing a core group that helps drive PC innovation, so they released the Xbox 360 and made a branding change for PC gaming. The 360, I think, runs a PowerPC chip, so it's a fair bit more work to swap from the 360 to the PC and back again. They want a standard gaming controller, and they want the PC to adopt it. Personally, I don't want a standard gaming controller, which is why I have two joysticks and a steering wheel, plus a mouse and keyboard. I don't want to give up my mouse in a shooter. I don't see much on the console side that gives me that.

But remember, PC gaming sales are pretty small compared to plain OS sales. It's a market, and MS will milk it for all it's worth, but let's face it, there's more retail space devoted to consoles than there is to PC games. That's due to sales and sales alone. PC gaming, with some wonderful exceptions, has devolved in some areas into shovelware console ports.

And all that is beside the point. The real point is that it's a market that can either be exploited for cash or abused and killed. I do not believe MS -wants- to kill PC gaming. I believe MS is killing it out of stupidity.

I do hold out some hope here. I've heard of some DX wrappers for OpenGL, which means that gaming can move forward with or without MS. But that will come down to lawsuits. We'll see there.


The real point, though, is that the included DRM is screwing up everyone. ATI is late to the game. The ORIGINAL release date has long since passed. The latest date is pushed back a little more. That's not all Vista, of course. Between job cuts, the merger, the transition to 80nm, and a new OS, it's a bad combo for releasing a new graphics card. nVidia may have rushed its G80 series to market thinking it had to meet ATI's timeline, or maybe they just wanted some Christmas dollars. But if they had more time to get things done, I'd bet for the most part they'd prefer to take it. Who wants to be rushed? In this business you are, sometimes to ill effect.

Broken things. You might get totally lucky with Vista; your setup may not conflict with it at all. You might not be doing anything that would stretch it. But when you do start to, there are a lot of things you can run into. When your screen goes black because you don't have X, Y, or Z in place, it's a problem.

As an example of a DRM we all know and hate, remember that Star-whatever DRM application that would slowly wreck your CD/DVD drive? It was DRM that got in the way and caused your machine to degrade over time. It was a serious issue that happened to a few people (thousands? hundreds? tens of thousands?), but it was annoying enough that it caused an uproar, and companies dropped it rather than face lost sales.

MS faced industry (Hollywood, in this case) pressure over something it couldn't control. DVDs got cracked on the PC, and the PC was forever after used to pirate movies. MS wants to be in the entertainment business. They make set-top boxes and video compression systems; they have a cable channel, a media portal, a music player, all sorts of NON-PC-related things.

People buy more TVs than PCs. What do you have more of? You, being a reader here, might break even, might even have more PCs than TVs, but the average person has more TVs than PCs, and it's by a lot. If you can't afford a PC, you likely have a TV; if you can't afford food, you likely still have a TV. It's a bigger market, and it makes the PC and PC gaming market look like squat.

For MS to become a player in that market, they have to be a friend to that market. To be a friend to that market, they can't have their platform being the system people are using to break it.

But DRM is a cat-and-mouse game (look at the Sony PSP and firmware situation) that is beyond MS's ability to control. They will try, of course; that's the whole of humanity. We try to manage the unmanageable, sometimes to great effect. But for everyone who is trying to lock this content down, there's someone else trying to open it up.

So, do you lock a system down so severely that it affects the usability of that system? MS appears to have said yes. The 20% power figure (I believe it comes from Gartner, but I might be wrong on that; this is a free-form opinion piece and not a news article, so you get this as it comes out of my head with little to no editing after the fact) and the 15% slower figure are examples that the new OS is heavier. If the only major change is the presentation layer, then it doesn't make sense. There is no reason for the OS to work that much harder just to give me a mostly static 3D display. But if that mostly static 3D display has to run from encrypted memory pages and encrypt and decrypt on the fly, that would make a lot more sense.


So, is it all a bunch of crud or is something really going on here?

Windows went RTM, and mostly that means there is a locked version of Vista for the larger builders to work from. They can find a working baseline config for their systems, order a bunch of parts, and stick Vista on them. In this case it's NOT like a game going GOLD. There MAY be an issue of pressing CDs, but the turnaround on a CD is actually pretty fast and can happen in a matter of hours if you are really willing to pay that much for it. (My company presses a new limited audio CD run every month; it's cheap and fast, even for 5,000-disc orders.)

The thing MS has to do is make sure that the HPs and Dells out there can make a PC and get it sold with their OS, with as few tech support calls on both sides as possible. However, MS is under the gun in this case. Vista was long delayed, and in order to meet goals they chopped out a lot of stuff (WinFS among lots of other things), and it would not be surprising to find MS rushed a product out the door knowing they could patch it later. No plan survives contact with the enemy, and users hit software way differently than programmers do. Not to mention the diversity of platforms it has to work on. Crud always comes up. It's just the nature of programming.

DX9 is a known platform. Making a Vista PC with a DX9 card is easier than making one work with a DX10 card. nV likely has a working DX10 driver that does a lot of what it's supposed to do. The Crysis guys have likely played with it. MS has likely used it as well. But it's not out in the general public. nV is pulling a 3D Realms: it'll be out there when it's out there. Most Vista users seem to be running Vista in VGA mode on the G80. THAT is an odd thing.

1080i in SLI doesn't work. I agree, I don't care about that either, as my only 1080i TV isn't used for gaming and I don't have SLI at home anyway. But that MIGHT mean something. IF video data has to be moved encrypted at any point where it could be viewed by something that should not view it (hardware/software/Santa Claus), then the SLI bus might have a real issue getting data across it. It might not.

So what do we know? Here are a few items:
DRM is in Vista.
DRM does take a toll on performance; how much is debatable.
DRM has caused product launches to fail (the original DIVX DVD, DAT).
HD-DVD and BluRay have HEAVY DRM.
Vista uses more power and CPU time than XP.
Vista needs more memory than XP.
nVidia seems to be having a difficult launch with Vista and DX10.
ATI is late on a card.
MS is late on Vista.
MS has a LOT of irons in the fire.

What's it all mean?
That's the question, and the concern. If you really REALLY screw up a platform, you can kill it.

If the DRM and other 'junk' in Vista mean you need a higher-end rig to play the same game, and more and more people move to Vista, then one of several things happens:

Games get 'lighter' to allow more users to play games at a tolerable level.
The PC gaming market shrinks because fewer people are willing to pay to play.

If games get lighter, we'll all live. But we want games to be more 'real' from physics to AI to graphics.

If it gets more expensive, then you may find there are even fewer PC gamers, which means there is less monetary motive to make a game, because the consumer base becomes even smaller. If that's the case, you'll see even fewer PC gaming choices, and that means more avid gamers are likely to turn to consoles for their gaming.

The future of computing

These were originally posted over in the OCU. They are some thoughts I've had brewing. It's a little wandering and hasn't been edited much, so... be warned. PS. Sorry about the formatting, it's a GameSpot thing...

AM2 is what you have now.

AM2+ includes: HyperTransport 3.0.

I don't know pin counts or anything else, but HT3.0 is for sure. Interestingly enough, with the drop of DDR3, HT3.0 might be the only difference.

HT3.0 runs at 2.6GHz and offers over 40GB/second of bandwidth. Just perfect for those multi-core adventures. This represents nearly a doubling of the bandwidth of HT1.0 (the current standard).
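
The 40GB/second figure falls out of the link math. Here's a quick sanity check; I'm assuming a full-width 32-bit link and counting both directions, which is how that headline number is usually quoted:

    # Back-of-the-envelope check on the HT3.0 bandwidth figure.
    clock_hz = 2.6e9                     # HT3.0 link clock
    transfers_per_sec = clock_hz * 2     # double data rate
    bytes_per_transfer = 4               # assumed full 32-bit link width
    directions = 2                       # HT links carry traffic both ways at once
    gb_per_sec = transfers_per_sec * bytes_per_transfer * directions / 1e9
    print(f"{gb_per_sec:.1f} GB/s aggregate")   # 41.6 GB/s, i.e. "over 40GB/second"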

Why is that important? Keep watching...


Next step in my thinking...

When intel's dual cores came out it was said that a dual core was 90% faster than a single core alone. Now that intel's quad cores are coming out, it's said that they are 70% faster than a dual core alone.

So, a 10% shortfall when we add a second core, and now a 30% shortfall going from two cores to four. This means that when the octo-core comes out (and it will; it's already on the drawing board even if it's not being talked about yet, and I'm sure AMD is sweating over its own too), under the current trend it will only be 30% faster than a single quad core. That's a 70% shortfall from perfect scaling.
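
Here's the arithmetic behind that projection, just chaining the quoted figures one more step (the octo-core number is an extrapolation of the trend, not a measurement):

    # Extrapolating the quoted per-doubling scaling figures.
    single = 1.0
    dual   = single * 1.90   # "90% faster than a single core"
    quad   = dual   * 1.70   # "70% faster than a dual core"
    octo   = quad   * 1.30   # projected: only "30% faster than a quad"
    for name, perf in (("dual", dual), ("quad", quad), ("octo", octo)):
        print(f"{name}: {perf:.2f}x a single core")
    # dual: 1.90x, quad: 3.23x, octo: 4.20x -- versus the ideal 2x, 4x, and 8x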

Why the performance loss? The cores are getting starved. There's not enough bandwidth to memory to keep the cores fed. If you have enough cache on a chip that your app or function can fit inside a core with little need to move to the outside system, it's going to run at full speed and it's going to run dang fast. BUT as more and more cores all try and talk on that tiny little FSB they have greater and greater contention and finally begin to spend more and more time running idle while waiting for data across the FSB.

The above assumes that the loss is linear and not exponential. I believe the loss is non-linear, but it's being offset by raising the FSB speed. Intel's been raising the bus speed already and will continue to do so. More GHz on the bus means more heat and more power used. We've seen that result already; eventually you hit a wall or you have to go more and more exotic with your cooling. Worse, intel's already shown that it doesn't care about the entire system's power use, just the CPU. It can keep power down at the chip level, yet still raise power use across the rest of the system. It's an easy marketing message to sell.
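
The "more GHz means more heat" bit is just the usual first-order dynamic power relation, P = C x V^2 x f. A tiny sketch (the capacitance and voltage numbers are made-up placeholders; only the scaling matters):

    # First-order dynamic power: P = C * V^2 * f.
    def dynamic_power(freq_hz, voltage=1.3, capacitance=1e-9):
        # voltage and capacitance are placeholder values, for illustration only
        return capacitance * voltage ** 2 * freq_hz

    # Example bus bump, roughly an 800MHz -> 1066MHz FSB move:
    print(dynamic_power(1.066e9) / dynamic_power(0.8e9))   # ~1.33x the power

And in practice, higher clocks often need a voltage bump too, which the squared term punishes even harder.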

So intel continues to raise the bus speed, but another problem crops up: DDR2-800 memory is hard to make. A recent article says only 5% of chips are DDR2-800 capable. Therefore, even fewer will be DDR2-1066 ready, and fewer still will hit DDR2-1333. The cost becomes less and less acceptable.

So something has to change. Dual-channel DDR gave a nice bump in speed, so quad channel will have to come along. Yes, you'll have to buy memory in matched sets of four.
But in the end, that's not going to be enough. They're going to need to do something smarter, and that means a more radical change to the FSB, likely a replacement with something more dynamic.
And still there's more....

Intel went back to the Pentium 3 / Pentium M to make Core 2. So the new architecture here is really old architecture re-released with modern production values. It worked great. Had intel never made the P4 detour, AMD and intel would both be much farther along than they are today.
But intel's every-two-year cadence isn't revolutionary at all. Every two years we get a die shrink, and having all the cores built together rather than slapped together is great. However, at 20nm you start getting into some odd laws of physics. Portions of the core may no longer be shrinkable; other portions can go lower.
Someone just showed a 1nm transistor. But at that point you're hard against the laws of physics. You reach the end of current tech and are forced to go to quantum computing. There's nowhere left to go. That's fine; we're talking 10 years out there.

But back to the main issue: the data bus. HT or FSB, it's what you live and die on.

Look at nVidia. Their data bus just went wide: 384-bit cards. ATI and nVidia have parallel down cold. So the GPU lives and dies by the rate at which data can be fed to it. Go too low and the GPU waits.

intel is already hitting this in the current generation and they have not touched the FSB in a long time. The last change was dual channel. THAT was a long time ago.
On intel, you have the chip talking to the memory controller across a dedicated outboard bus. As I recall it's effectively one-way-at-a-time, which means you have to divide reads and writes across (soon) four cores. If your bus is running at 1GHz and your cores are all reading and writing on each instruction, then you really have each core running at about 128MHz (128 x 2 (for read and write) x 4 (for cores) is roughly 1GHz). This isn't a perfect example, but it helps illustrate the point.
THAT sounds pretty crappy, and it is, but it's not a real-world case. Lots of stuff happens in the cache, some instructions take more than one cycle to execute, and some data is written back to the cache to be used again. Not everything has to go to the outside world. But sooner or later stuff does, and now you run into the bottleneck issue.

So even if you move the FSB rate to 2GHz, you can see it only gives you the effect of about 256MHz per core. So you need to get very wide and run very fast as you add cores, and each core you add causes more and more pain.
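
Here's that toy division spelled out, with the same simplifying assumptions as above (nothing hits cache, every core reads and writes on every cycle), so treat it as an illustration of the trend rather than a real benchmark:

    # Toy FSB-division math: worst case, every core reads and writes each cycle.
    def effective_mhz_per_core(bus_mhz, cores, rw_streams=2):
        return bus_mhz / (cores * rw_streams)

    print(effective_mhz_per_core(1024, 4))   # ~128 MHz per core on a ~1GHz bus
    print(effective_mhz_per_core(2048, 4))   # ~256 MHz per core even at ~2GHz
    print(effective_mhz_per_core(2048, 8))   # and it halves again with eight cores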

You can get as small and as fast as you want but until they solve this pain point you are still running into a Pentium 4 class design flaw.

Another case is AMD's 16-way systems (eight dual-core CPUs). These can be made to run with all cores at near 100%, but only in benchmark-type apps; in the real world, these start to starve pretty quickly. AMD without question has the better bus interface, and even there the cores starve.

Something has to be done.

I've seen statements on pricing that say dual AMD FX dual-cores will run in the $800 to $900 range for a pair of matched CPUs.

That's pretty interesting, because it puts heavy price pressure on intel. If you look at the latest CPU performance graph over at Tom's, and if the Core 2 Quad is a 70% increase over the Core 2 Duo, then you're looking at potentially equal performance at equal price. The real difference is that you'll be able to upgrade to dual quads on the AMD side, but you'll likely be limited to only a faster single quad on intel's side.

It's WAY too early to say what that will do in the market and if any of what is being said by anyone is actually true, but it's one more interesting part of the puzzle.

It is every bit as "quad" as the upcoming Core 2 pseudo-quad.

The HT system allows the processors to talk to each other directly, in the same way that intel's two Core 2 Duos in one socket do. The only difference is the distance.

In addition, because each AMD CPU has its own memory controller (I'm not sure if it's a controller per core or per chip; I can't remember and don't want to say it wrong), they have direct access to their own memory plus the other's memory.

This is why on an AMD multi-CPU board you'll see multiple banks of RAM. Some banks are for one processor, some banks are for the other. It greatly reduces memory contention across the board. So even though the cores will have less performance than the Core 2 cores do, the lack of bandwidth restrictions may still mean greater performance overall.

Of course, that comes down to what is being run. The slowest intel or AMD chip can beat the pants off the fastest AMD or intel chip if you run the right benchmark. A single benchmark is meaningless. It all comes down to real-world application, and there is a big difference between a gamer's use of a chip, a database's use of a chip, a web server's use of a chip, and a statistician's use of a chip.


Now, back to AMD/ATI and intel:

AMD wants onboard graphics. They have chips that do onboard graphics, and they are better (without question) than intel's onboard graphics. AND, in theory, if they work it right, they can become co-processors to the AMD CPU.

ALSO, AMD has HyperTransport (soon 3.0). This means connecting FPU 'cores' (aka the graphics card) to CPU cores becomes a walk in the park.

Intel is already thinking this way; they showed off their prototype 80-'core' CPU, which was a bunch of FPU units hooked up to a few full cores. AMD, in theory, could have the same sort of thing in regular production in the coming year through early 2008.

Think about the concept here. An onboard GPU would raise the price of a board by $5 or so. No big deal, but it raises the graphics performance of the board well beyond what $200 extra in CPU power would. Now think about a more powerful GPU being dropped onto the board. Suddenly things start to look different.

Now, what if you expand what the GPU does? Borrow some GPU ideas for the AMD chip and some CPU ideas for the GPU, and now you have a really interesting mix that could provide some very parallel performance.

But wait, there's more.

AMD has a new slot; basically, it's an HT3.0 slot that lets you drop in anything, and whatever you drop in has full access to the system just like the CPU does. Take one of these G/CPU-type chips, put it on a card, and you have a massive co-processor.

Use the slot to bridge two computers and you have a double-scale computer that shares all tasks and resources.


The whole point of these last few posts is that THINGS are changing fast, and the direction AMD is taking may not be what we've all been thinking. There may be a different plan here. A different idea on how computing could be done. It might be something that's been evolving over time or it might be something that they stumbled on.

In the end, what a 'CPU' is and how processing is done may look very different in the near term. It's worth paying attention to. It's worth looking past the marketing to see what really is going on. Intel and AMD have plans far beyond the 'add a core and make it faster and smaller' routine.

Intel is being intel; it seems to be sticking more and more into each chip: huge caches of RAM to overcome a bad bus design, and all the processing power stuck into one chip.

AMD is building bigger chips too, but they don't have to use as much cache RAM. This means smaller dies, more chips per wafer, and higher yields, because there are fewer gates to fail on a per-chip basis. BUT they are building systems with the idea that processing can happen outside the chip (e.g., the HT3.0 slot).

It's going to be very interesting indeed.

Latest Build: The Xooma X2O Machine.

Another day, another build! This time it's the Xooma X2O case. My first "theme" case.

The customer had already selected the case they wanted, and they wanted it to have a Xooma green look to it. (You can find Xooma via Google if you're curious; the race car video is the best way to see their look.) I had a pretty limited budget, and I was locked into the case and the power supply that came with it. By the way, has anyone ever heard of TurboLink? Yeah, me neither. But that's what we got, and it seems to be an okay power supply. It's a little light on the 12v rail, but what are you going to do? The Aspire case came packaged well and, like so many cheap cases, with very limited instructions. I'll be repacking the case the same way for shipping to the customer. By the way, a good tip: most motherboards come with a soft foam piece under them. I save these and use them during my builds to prevent the case from being scratched; you can see several in the finished case pic below.

This machine is NOT going to be overclocked. It's going to run XP Home and Office 2003, and that's pretty much it. Still, I hope they'll do some gaming someday, so I put a GeForce 7600GS in it. It's a good card, and it was very affordable at the time of purchase. Plus it uses less electricity than the previous generation, and that means less heat. As I picked quiet low-RPM fans, I really needed to watch my heat buildup. The 7600GS should match this design pretty well.

I like active cooling on hard drives so the first step was to get the case down to bare metal (after testing all the fans on another rig to make sure they worked). The Thermaltake fans are pretty quiet and the color on them is great. Mounting was a little annoying. This case mostly has folded metal edges but the drive bays will get you. No question about it. A few scrapes later and the two front fans are in. The top fan comes along pretty quickly after that.

I've never used green LED fans, but I have to say these look really good. Next up, I put in the risers for the motherboard and then install the CPU and memory. This is my first time using Patriot memory, so today's learning experience is exploring a new company. The memory looks good, and it's easy to grab because of the heat spreaders. The 2-3-2-5 timings on the memory should give this machine a little extra speed. Nothing earthshaking, but a little here and there helps. With 2GB of memory, this machine is likely set for life.
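
For a rough idea of what tighter timings are worth: at DDR400 the memory clock is 200MHz, so each cycle is 5ns, and the leading CAS number translates roughly into first-word latency (this ignores the other timings and command overhead, so it's a ballpark only):

    # Rough first-word latency from the CAS figure (DDR400 = 200MHz memory clock).
    memory_clock_mhz = 200
    cycle_ns = 1000 / memory_clock_mhz     # 5 ns per clock
    for cas in (2, 2.5, 3):
        print(f"CAS {cas}: ~{cas * cycle_ns:.1f} ns to first data")

So CAS 2 versus the more common CAS 3 saves about 5ns per access. Small, but like I said, a little here and there helps.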

The case has just enough room by the drive bays to hide a lot of wiring. That's a good thing, because each fan comes with A LOT of wiring. This will all be run and tied down soon. For now, it's just a matter of getting the cables to the right location.

So now the optical drives and hard drive go in. This case has colored ROM covers. This means the drives have to sit all the way to the back of the mount, AND it means we need screws on both sides of the drive. This is really a big pain, because now the case front doesn't fit so well. I could remove the black faceplates on the NEC 3550A drives, but that would look a little scary to the customer, so instead I do what happens all too often with a cheap case: I force it. It works, but it's not how I like to do things.

With everything in place, it's time for the first real test boot. Everything comes up, and it's looking pretty good. I shut it down and clean up the cables. All in all it looks pretty good. The OS goes on without issue, as does Office 2003 Basic, and there you have it. A quick, generally uneventful build. It's funny, there just wasn't much to this build. It's in, it looks good, and it should meet the customer's expectations.

Below are some images of the final result. I am pretty happy with it. It's got a good look to it. It should look good in the dark and in full light as well. It's quiet, and I haven't even turned on Cool'n'Quiet yet, which would make it quieter still. So, in the end, I'm happy.









The Photo Workstation

One of the key things with doing any build is that you should learn something while doing it. For this build I needed a non-gaming machine that could handle tons of storage. It's being used for digital photography, so the keys are storage, dvd-writing, storage, card reading, storage, picture processing, and storage. And did I mention storage?

Empty drive case with 120mm fan showing.
I wanted a case that could handle a lot of hard drives and that would allow me to place fans as needed to keep them cool. I selected Spire's SwordFin SP-9007B. It's got six 3.5" internal hard drive bays, plus two external, plus four 5.25" external bays. Six hard drives should be enough, eight should always be enough, but long term, knowing my inventory on hand, it could very well come to have 16 drives in it. Which means I'll have to find a way to mount another four drives. But that's not for today.

Today we're only going for a single terabyte across four drives. Ideally this would be RAID 5, but for now it's a JBOD (just a bunch of disks). Down the road this will end up moving to something like four 400-gig drives, but for today I think we're okay.
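
The trade-off for that eventual move is easy to sketch. The drive sizes below are just illustrative round numbers; the point is that RAID 5 spends one drive's worth of space on parity in exchange for surviving a single drive failure:

    # Usable capacity: JBOD keeps every byte, RAID 5 gives up one drive for parity.
    drives_gb = [250, 250, 250, 250]          # illustrative sizes
    jbod_gb  = sum(drives_gb)
    raid5_gb = (len(drives_gb) - 1) * min(drives_gb)
    print(f"JBOD: {jbod_gb} GB usable")       # 1000 GB, no redundancy
    print(f"RAID 5: {raid5_gb} GB usable")    # 750 GB, survives one dead drive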

Normally on a build, I'll assemble everything on the motherboard first and test it all out before the mobo goes into the case. I did that here, and did so well before I had a camera and thought to take pictures. On this build, the power supply and case arrived a week after everything else, so I borrowed a power supply from my Mythbox (watch for that rebuild to come soon) and hooked everything up. Of course, there's no power switch for a bare motherboard, but if you know which two pins short together to turn the machine on and off, a handy screwdriver works just as well. Note, you don't have to worry about 'bounce' doing this, as the real power switch is a mechanical switch anyway and 'bounce' is part of the deal. 'Bounce' happens whenever you make an electrical connection: it's not a clean flip from off to on; there is a brief moment where the connection is made and broken many times until it settles down. Because of this, the power supply / motherboard is smart enough not to react to those little millisecond contact blips while the switch is coming together. Otherwise your machine would turn on and off a dozen times. What the system does instead is ask: has this connection been stable for some fraction of a second? If so, then turn on or off.
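
That "has it been stable long enough" check is simple enough to sketch in software. Here's a minimal, purely illustrative debounce routine; it's a toy, not how any particular motherboard actually implements it:

    import time

    def debounced_read(read_pin, stable_for=0.05, poll_interval=0.001):
        """Return the pin state once it has held steady for `stable_for` seconds."""
        last = read_pin()
        stable_since = time.monotonic()
        while time.monotonic() - stable_since < stable_for:
            current = read_pin()
            if current != last:              # contact bounced: restart the stability timer
                last = current
                stable_since = time.monotonic()
            time.sleep(poll_interval)
        return last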

So, I wanted to learn something on this build. In this case, I'm going for two main things: first, has MSI made it as a company, and are they someone I want to use as a source for parts? And second, what is the new AMD Stable Business Platform really like?

Let's call it BP for short. The BP is AMD's (and chipset partner nVidia's) answer to getting AMD onto the corporate desktop. You get 15 months of hardware stability. The drivers will be updated at regular intervals, and the hardware won't be abandoned for at least 15 months. This means you can build 10 machines one month and 10 next year, and know that all 20 will be alike. You won't have to worry about this driver for this revision and that driver for that one. Dell and others use this to sell into corporate cultures where total cost of ownership is more important than initial price. In that world, TCO goes way up if you have to support dozens of different configurations.

In my case, I don't have to worry about it. It's one machine, but it's nice to get an idea of what's going on on the business side of things. (In my real life, this combo might be rolled out onto 20 customer service machines, 10 shipping workstations, and 10 corporate office machines in my company. I also got it for about half the price of retail, which is the best reason of all to try it out!)

It's custom 'cause you customize it yourself.
Let's answer the MSI question first. Are they ready for prime time? A picture can answer this for you. See this custom port cover plate? It's custom because you have to remove the metal covers for the ports you do or don't have with a pair of tin snips. Even ECS, a budget brand, doesn't do this. So, right off the bat, that's a massive strike against them.

Conventions are for squids!
Next up are the RAM slots. You know the drill: you match colors. If you have two blue sockets and two black ones, you put both your sticks in blue or both your sticks in black. When I did this with the MSI board, it booted just fine, ran just fine, and was happily running at DDR333 instead of DDR400 with totally wrong memory timings; no matter what I tried, that's what happened. Moving the one stick from the second cyan slot into the first purple slot solved everything. The manual confirmed it as well. So much for going with conventions.

They didn't tin the copper.
Another thing that bugged me, but really isn't an issue, is the motherboard itself. Here you can see that they didn't 'tin' the mounting holes. I don't think this is really an issue, but it looks bad. Again, comparing MSI to ECS, I haven't seen ECS do this. If you look at the picture, you can see that the copper is already starting to get discolored. I don't know if tarnish is an issue for the board or not; certainly, if you were concerned about making a good grounding connection here, there might be an issue over time. I'm actually more concerned that the red protective layer over the traces will start to come off. Of course, it doesn't have far to go; the dark area around the hole doesn't have any copper on it. It's only the light areas that actually have a gold or copper trace. On this motherboard, it's copper.

Not the best heat sink for a chipset.
And the last thing: you have to love the chipset heat sink. The notch is there because it's partly under one of the expansion slots, so you'll get less than ideal heat dissipation here. Worse, look at the CPU heat sink next to it. The CPU's fan won't blow any air across this, so it's a passive heat sink with marginal airflow at best. So, although the board supports overclocking, the HT is going to be challenged right off the bat with this design choice. The only plus side here is that the fins are arranged for best airflow when the board is mounted vertically.

It's not all bad though; this board does one thing I truly love. There is ALMOST a standard for connecting things like hard drive LEDs, the internal speaker, and various switches. Most of them are two-pin connections, and if it doesn't work one way, you can usually swap it the other way. (Hint: white or black is usually negative, the colored line is positive.) But what MSI does is provide TWO different header blocks for connecting some of the odd ones. Some cases use three-pin connectors for some connections and other cases use two, and this can be a big pain. If your case has a two-pin connector, it goes on the primary block; if it has a three-pin connector, it goes on the alt block. THAT little option alone nearly brings MSI back to the 'buy it again' level. If it wasn't for the port cover issue, they'd be at the buy-it-again level without question.

Is this board ready to update to AM2?
One other interesting thing about the MSI board is where the CPU connects. It's a little hard to see, but if you look, you can see that the board is laid out for an AM2 socket. They've reserved space for the four-screw CPU socket mount, right down to marking where the holes will be. I suspect you'll see this board in an AM2 configuration very soon.

On to the case! It's plastic. I knew this going in, but the hinge on the case is pretty iffy. It's actually a double hinge. The nice thing is you can fold the door fully out of the way. The downside is I don't know how long the front door will stay with the case. The side panels and locks are a big pain in the rear. I actually had to take one side panel off to get the lock on the main side panel to unlock. Normally I say, don't ever use the keys on a computer lock, and this reinforces that belief. Don't use the keys on this case. Ever!

There's a lot of room around the power supply.
This is a big case, with lots of room in it. Some of it is a little wasted, like all the space above the power supply, but that just means there is room for modding and such. As an example, all that 'wasted' space above the power supply is just perfect for renting out to a newly married couple who need a cheap place to live. However, as power supplies get bigger, that extra space may very well be needed, and thanks to the large power supply mounting plate, you should be able to customize it to work with the new 1000-watt and redundant power supplies coming on the market. There is actually enough space above the power supply here to fit a CD-ROM drive, so if you need more room for hard drives, this would work. You could use it for a water-cooling solution, but something about putting any liquid over a power supply seems like a bad idea.

120mm fan mounting box.
It's a nice case, and Spire will likely become a popular second-tier designer. The case itself was under $60. It's got good mounting for fans. The grills on the case are noisy, but those will get cut off anyway. It's all about sound, right? The back fan's mounting box will let you mount the fan on the outside of the case as easily as on the inside, so that grill really will get cut away. All the connectors are labeled, and the lack of documentation really doesn't hurt if you've done this before. If you haven't... you can likely still figure it out if you were able to get through elementary school in less than 8 years.

Papa's got a brand new bag!
Even the accessory bag is nice: re-sealable. They plan for you to keep it. You'll have to, too, because those custom drive rails are the only way to mount a hard drive. Speaking of mounting drives, the 5.25" slots allow you to mount a drive without having to screw it down, although you can if you use a washer. The trick here is not to over-tighten the screws; if you leave them a little loose, they slide right in. As for the accessory bag, well... you do have a ton of room above that power supply.

Mounting the fans was interesting; initially I mounted the back 120mm fan backwards. I didn't catch it until everything was booted and I was running temp tests on the hard drives. It turns out that may be a better solution. The system has a lot of venting on it, and mounting the fan "wrong" caused extra air to blow across the hard drives and even more air to blow out through the power supply fan. I'm not sure, but I may reinstall the fan this way. It should push more air away from the CPU as well. If that's the case, it may work better to reverse the front hard drive 120mm fan as well. This will need a little follow-up testing.

The power supply sits on the rails.
So we have the fans in and the power supply mounted. Note that this power supply sits right on the rails; if you have an odd-sized power supply, you may have to do a little playing with that mounting plate to get it in right, but the design is flexible, so you shouldn't have much of an issue. In truth, resting on the rails made the PS very easy to install.




Now that the drives are in, it's no longer a pretty wiring job, but that's okay. I still have one more drive to mount, but I can't do that for a week, so there's no point in locking everything down yet. Note that you've got a ton of room here. You can use plain SATA cables; you don't need the 90-degree angle type. There is lots of room here. My only real complaint is that the hard drives are a little close together. The fan helps, but it only really gets the middle drives; the topmost and bottommost drives don't get as much airflow as they should. That backwards-mounted fan in the back, however, seems to have covered that issue, so that may end up being the longer-term solution. Also, there are some 130mm and 140mm fans that will fit in the same mounting as a 120mm fan. If the width isn't too bad, that could be a solution as well. And there is enough room on the other side of the hard drives that fans could be mounted there if needed. This really is a big case.

I mean really! This IS a big case!
Just look at the space around the motherboard. Granted, it's not a full-sized ATX board, but you can see there's just a LOT of room in this thing. You can see why I'm not worried about finding ways to mount other hard drives down the road!

Okay, everything is mounted and it's time to power on. Note there is a connector near the front fan with a warning label to make sure it's connected properly before powering on for the first time. I've connected everything correctly, so let's turn it on.

Nothing.

UPS plays soccer with my package.
I've got flashing lights on the network port, so I seem to have power. Maybe it's that ultra-critical connector in front? No, everything's right. Time to start removing power connectors. Still dead. Hmmmm... let's look at the power supply box and see if I missed something. Oh, there it is! UPS seems to have used this box for a nice game of soccer. A quick power supply swap confirms it: the PS is dead, likely a fuse on the 12v line. Dang.

NewEgg calls UPS and a new power supply / soccer ball is on the way; I'll just have to use my cheapo for now. The cheapo is okay, but it seems to run a little hot on this motherboard. Nothing is out of spec, but some of the readings are on the high side. I'll have to swap power supplies when the new one comes in on Monday; maybe I can get that other hard drive at the same time and be done with it.

So what about the ultra-critical power connection on the front? It seems there is a special button on the front of the case, and when you press this button, a red light comes on; when you press it again, it goes out. That's it. Really, that is all it does. Dang.

Computer cases can make handy tables.
PS. What do you do with your leftover parts? Put them to good use!
