64-bit chess engines

#1: January 10th 07, 07:57 PM, posted to rec.games.chess.computer

So, I see that Deep Shredder 9 and 10 are available as 64-bit UCI engines [and
ChessBase may be packaging a 64-bit version in the ChessBase protocol as well].
Has anybody tried these, and has there been any notable performance [i.e. ELO]
increase?

Thanks in advance.

--
Thomas T. Veldhouse
Key Fingerprint: D281 77A5 63EE 82C5 5E68 00E4 7868 0ADC 4EFB 39F0


#2: January 11th 07, 06:51 PM, posted to rec.games.chess.computer

Thomas T. Veldhouse wrote:
So, I see that Deep Shredder 9 and 10 are available as 64-bit UCI engines [and
ChessBase may be packaging a 64-bit version in the ChessBase protocol as well].
Has anybody tried these, and has there been any notable performance [i.e. ELO]
increase?

Thanks in advance.

64-bitness isn't anything special in and of itself. But in combination
with the right amount of cache and enough stacks, you should be able to
get a significant search boost at a given processor speed.

You may also need to write some custom in-line assembler because the
current compilers may not be "chessie" enough to take advantage.

But ultimately, something like Shredder may not take full advantage.
We may very well be in the middle of the "last ply" problem: there are
no real improvements available (either in hardware or software) that
let the top programs search enough nodes to reach another ply (Hydra
and Deep Blue being the exceptions), but there is a lot of headroom
that can be utilized for smarter searches (which is where Rybka seems
to be excelling).

So, I think there is a new race underway. It used to be that he who thought
deepest beat he who thought best. But now that everyone is at the same
depth, and you just can't see deeper, it will again become a contest of
he who thinks best. It's a fun time for engine-to-engine matches.
#3: January 11th 07, 07:18 PM, posted to rec.games.chess.computer


"Thomas T. Veldhouse" schreef in bericht
...
So, I see that Deep Shredder 9 and 10 are available as 64-bit UCI engines [and
ChessBase may be packaging a 64-bit version in the ChessBase protocol as well].
Has anybody tried these, and has there been any notable performance [i.e. ELO]
increase?

Thanks in advance.


ChessBase has not yet released a 64-bit version of either of their own
engines, Fritz and Junior.
Engines like HIARCS and Shredder have been released in separate 64-bit versions.
Several of the marketed engines have 64-bit versions, and these also run under
ChessBase software.
For instance, Rybka gets about a 50 to 60% increase in search power from the
64-bit version.
In engine games of 10 minutes or so, this means an increase in search depth of
maybe 2 or 3 ply, to a total of 12-14 ply, in a regular middlegame.
This is not the same as a 60% increase in depth, logically, because the search
tree keeps branching as you go deeper.
In ELO, this meant an increase of about 30 to 50 points.


#4: January 11th 07, 08:55 PM, posted to rec.games.chess.computer

Johnny T wrote:
64-bitness isn't anything special in and of itself. But in combination
with the right amount of cache and enough stacks, you should be able to
get a significant search boost at a given processor speed.


So, isn't bitboard manipulation significantly faster on 64-bit processors,
due to the fact that a bitboard is a 64-bit word? Certainly that should
increase performance by a factor of two for such manipulations.
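
To make that concrete, here is a minimal C sketch of the kind of
manipulation I mean (the function names and the a1 = bit 0 square mapping
are just illustrative, not from any particular engine):

#include <stdint.h>

/* One bit per square: a1 = bit 0 ... h8 = bit 63. */
typedef uint64_t Bitboard;

/* White pawn single pushes: shift every pawn one rank up, keep only
   empty squares. On a 64-bit CPU the shift and the AND are one
   instruction each; a 32-bit CPU needs two instructions (low and high
   word) for each of them. */
Bitboard pawn_single_pushes(Bitboard white_pawns, Bitboard empty) {
    return (white_pawns << 8) & empty;
}

/* Population count: how many pieces are on the bitboard. */
int popcount(Bitboard b) {
    int n = 0;
    while (b) {
        b &= b - 1;  /* clear the lowest set bit */
        n++;
    }
    return n;
}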

You may also need to write some custom in-line assembler because the
current compilers may not be "chessie" enough to take advantage.

But ultimately, something like Shredder may not take full advantage.
We may very well be in the middle of the "last ply" problem: there are
no real improvements available (either in hardware or software) that
let the top programs search enough nodes to reach another ply (Hydra
and Deep Blue being the exceptions), but there is a lot of headroom
that can be utilized for smarter searches (which is where Rybka seems
to be excelling).

So, I think there is a new race underway. It used to be that he who thought
deepest beat he who thought best. But now that everyone is at the same
depth, and you just can't see deeper, it will again become a contest of
he who thinks best. It's a fun time for engine-to-engine matches.


So do you have experience with the 64-bit engines available to indicate
whether they perform better? Or not?

--
Thomas T. Veldhouse
Key Fingerprint: D281 77A5 63EE 82C5 5E68 00E4 7868 0ADC 4EFB 39F0


#5: January 11th 07, 11:22 PM, posted to rec.games.chess.computer

Thomas T. Veldhouse wrote:


So, isn't bitboard manipulation significantly faster on 64-bit processors,
due to the fact that a bitboard is a 64-bit word? Certainly that should
increase performance by a factor of two for such manipulations.


Often, no, because the compilers don't do anything special enough for
you here. It is more a question of cache size, speed to the stacks,
quantity of stacks, and of course the speed of the processor.

Yes, given a good "chessie" architecture you may see a speed advantage,
but the next ply is WAY more than 2x away. So 2x gives you,
essentially, nothing. It can improve speed chess, but essentially you
are just lowering and lowering the amount of time that it takes to get
to a given ply.

Unfortunately it is not going to improve the quality of either normal
time controls or overnight analysis, because you are still so far away
that 2x isn't going to make much of a difference. Nor will 4x, as with
the Transmeta processors, or the other Unix processors.

Hydra and Deep Blue work/worked well because they were 2+ orders of
MAGNITUDE deeper, not 2x deeper. And today's computers and engines are
essentially all at the same level, about 8 moves, 16 ply (I think those
are the right numbers; we may actually be slightly deeper than that,
but it doesn't matter, because the limit is there whatever the actual
number is). That works out to about world-class move quality, with no
real "mistakes". Every added ply gained more rating points, and deeper
(faster) was better than smarter. But each ply is EXPONENTIALLY farther
away. Computers are not getting exponentially faster anytime soon, and
for a while there were bucketloads of programming cleverness that wrung
every cycle out of the processor in the quest for depth. That was
hugely important while computers were slow, and it is the current state
of the art.

These discoveries defined chess engines for a while, and ELO was to be
gained by pushing out the horizon, more so than by being smart. As a
matter of fact, if smarts dropped you a ply, you would lose more ELO than
you would gain. It may not have had to be that way, but the programmers
were probably better programmers than chess players, and could provide
better code more easily than smarts.

And then the stagnation started, around Fritz 9. Computer engines
are world class. Faster processors have helped make all the engines as
good as they can be. There aren't any more tricks to get to that next
horizon level. It may take 10 years to get there, and it looks like
Moore's law is actually maxing out as well.

But, we are starting to gain headroom towards that next ply. And it is
further to the next ply than all the previous plies before it PUT
TOGETHER. There is a lot of headroom available, and it is growing.

Then along comes a really good engine that leaves its source code to be
found (better than Crafty); I think it was Fruit, but I don't remember
now. But it screamed. Commercial-quality screaming. This let a
programmer who had not dedicated his life to the tricks, but who is
ALSO an extremely good chess player, add chess knowledge. And it
works: smarts add ELO, lots of ELO. It hadn't worked for years, but it
really works now, because the rest of the environment has fallen into place.

And now there is a new target to program against, something beyond
Kasparov and Kramnik, sort of world class plus. And that is Rybka. It
is just the first; I have no doubt it will not be the last, nor do I
think it will necessarily win this next race. But it has helped define
the race. I think there is going to be room for a lot of smarts in the
new computers, and it will be the quality of those smarts that takes
programs from world-class-no-mistakes to world-class-plus chess players.

So do you have experience with the 64-bit engines available to indicate
whether they perform better? Or not?


Yes. When the 64-bit architectures and the Transmeta long-word
processors came about, I worked with some friends of mine on modeling
how a chess program might improve on them. And we came to all of the
above conclusions, well prior to Rybka. I was not privy to enough of
the accumulated chess knowledge to come close to overcoming the
commercial programs' advantages in programming, and it was clear that
long words alone were not going to deliver even a 2x speed-up (for a
variety of reasons), nor that, even given all the tricks and a full 2x
improvement, it would matter at the end of the day.

It was fun to model but, at the end of the day, not worth doing. We had
neither the deep programming experience nor the deep chess experience to
provide anything useful here. But that doesn't mean I don't understand
the space. I do. Very well.


#6: January 12th 07, 01:59 PM, posted to rec.games.chess.computer

Johnny T wrote:
Thomas T. Veldhouse wrote:


So, isn't bitboard manipulation significantly faster on 64-bit processors,
due to the fact that a bitboard is a 64-bit word? Certainly that should
increase performance by a factor of two for such manipulations.


Often, no, because the compilers don't do anything special enough for
you here. It is more a question of cache size, speed to the stacks,
quantity of stacks, and of course the speed of the processor.


It isn't the compiler [assuming it compiles 64-bit targets], it is the CPU.
Working with a 64-bit word on a 32-bit CPU requires two steps instead of one.
Any compiler that compiles for a 64-bit target will do this correctly.
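
A minimal sketch of those two steps (illustrative C, not any particular
compiler's actual output):

#include <stdint.h>

/* On a 64-bit CPU this AND is a single instruction. */
uint64_t and64(uint64_t a, uint64_t b) {
    return a & b;
}

/* What a 32-bit target effectively does instead: the same AND split
   into two 32-bit operations, one per half of the word. */
void and64_split(uint32_t a_lo, uint32_t a_hi,
                 uint32_t b_lo, uint32_t b_hi,
                 uint32_t *out_lo, uint32_t *out_hi) {
    *out_lo = a_lo & b_lo;  /* step one */
    *out_hi = a_hi & b_hi;  /* step two */
}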

Yes, given a good "chessie" architecture you may see a speed advantage,
but the next ply is WAY more than 2x away. So 2x gives you,
essentially, nothing. It can improve speed chess, but essentially you
are just lowering and lowering the amount of time that it takes to get
to a given ply.


I wasn't indicating a 2x performance gain overall; I was indicating a 2x
gain on bitboard manipulations. There is a LOT more involved in chess
programming than simple bitboard manipulations. What I want to know is how
the entire package performs; hence my question.

Unfortunately it is not going to improve the quality of either normal
time controls or overnight analysis, because you are still so far away
that 2x isn't going to make much of a difference. Nor will 4x, as with
the Transmeta processors, or the other Unix processors.


I have since done some reading, and there is indication of increased
performance simply because the application can access more than 2 GB of RAM
[my box has 4 GB, so I am curious to test this]. I think the increased hash
table capacity and faster bitboard processing ought to result in a
significant and measurable performance increase. I am hoping for a real-world
measurement of this.
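
As a rough sketch of the hash-table side of this (the entry layout and
sizes are illustrative guesses, not Shredder's actual format), the number
of transposition-table entries scales directly with the memory you can
address:

#include <stdint.h>
#include <stdio.h>

/* An illustrative transposition-table entry; real engines differ. */
typedef struct {
    uint64_t key;    /* Zobrist hash of the position */
    int16_t  score;
    int8_t   depth;
    uint8_t  flags;  /* exact / lower bound / upper bound */
    uint32_t move;
} TTEntry;           /* 16 bytes with typical alignment */

int main(void) {
    /* A 32-bit process tops out near 2 GB; a 64-bit process on a 4 GB
       box could dedicate, say, 3 GB to the hash table. */
    uint64_t bytes_32 = 2ULL << 30;
    uint64_t bytes_64 = 3ULL << 30;
    printf("32-bit: %llu entries\n",
           (unsigned long long)(bytes_32 / sizeof(TTEntry)));
    printf("64-bit: %llu entries\n",
           (unsigned long long)(bytes_64 / sizeof(TTEntry)));
    return 0;
}

More entries means fewer overwritten hash hits in long analysis sessions,
which is where the larger address space should show up, independent of the
word-size question.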

Hydra and Deep Blue work/worked well because they were 2+ orders of
MAGNITUDE deeper, not 2x deeper. And today's computers and engines are
essentially all at the same level, about 8 moves, 16 ply (I think those
are the right numbers; we may actually be slightly deeper than that,
but it doesn't matter, because the limit is there whatever the actual
number is). That works out to about world-class move quality, with no
real "mistakes". Every added ply gained more rating points, and deeper
(faster) was better than smarter. But each ply is EXPONENTIALLY farther
away. Computers are not getting exponentially faster anytime soon, and
for a while there were bucketloads of programming cleverness that wrung
every cycle out of the processor in the quest for depth. That was
hugely important while computers were slow, and it is the current state
of the art.


As I said, I didn't say anything at all about 2x deeper ... I didn't say
anything at all about ply. I said 2x increased performance in bitboard
manipulations, simply because the CPU can now work with a 64-bit word
natively in one operation instead of two [a 32-bit processor has to work
with two 32-bit words instead].

snip

You still haven't answered my question. I am looking for real performance
measurements ... what are people seeing? Even engine versus engine match
results on Playchess would be an interesting statistic for me if I knew the
hardware configurations behind them.

Thanks!

--
Thomas T. Veldhouse
Key Fingerprint: D281 77A5 63EE 82C5 5E68 00E4 7868 0ADC 4EFB 39F0


#7: January 12th 07, 03:22 PM, posted to rec.games.chess.computer

In article ,
Johnny T wrote:
Yes, given a good "chessie" architecture you may see a speed advantage,
but the next ply is WAY more than 2x away. So 2x gives you,
essentially, nothing. [...]


Can you explain what you actually meant here, please? Your
*particular* program may just have crept over [eg] a 16-ply boundary
for a *particular* position on a *particular* computer; *mine* may
be just below or well above, depending on the computer [inc, eg, the
available hash storage, as well as the CPU speed/type and cache], on
the program [which may have different pruning strategies as well as
more or fewer "smarts"], even for the same position. And your program
and computer combination will surely see further in simple or forcing
positions than in complex or subtle positions. So a factor 2x in speed
may not get you to the next ply *uniformly*, but it could mean that
instead of [eg] 40% 15-ply, 50% 16-ply, 10% 17-ply, you see 10% 15-ply,
50% 16-ply and 40% 17-ply. That would be an extra ply in 60% of the
positions your program meets.
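
For what it's worth, the arithmetic of that example (a throwaway C sketch
using the percentages above):

#include <stdio.h>

int main(void) {
    /* Depth distributions before and after a 2x speed-up, using the
       illustrative percentages from the paragraph above. */
    double depth[3]  = { 15.0, 16.0, 17.0 };
    double before[3] = { 0.40, 0.50, 0.10 };
    double after[3]  = { 0.10, 0.50, 0.40 };
    double e_before = 0.0, e_after = 0.0;
    for (int i = 0; i < 3; i++) {
        e_before += depth[i] * before[i];
        e_after  += depth[i] * after[i];
    }
    /* Prints 15.7 -> 16.3: an extra ply in 60% of positions,
       +0.6 ply on average. */
    printf("expected depth: %.1f -> %.1f ply\n", e_before, e_after);
    return 0;
}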

How much that buys you in Elo points is another matter.

--
Andy Walker, School of MathSci., Univ. of Nott'm, UK.

#8: January 12th 07, 08:44 PM, posted to rec.games.chess.computer

It is just not as close as you think.

A quick review. Imagine a given chess engine running on a 1 MHz
machine, a Commodore 64 for instance. If you played, let's say, 1000
games on it, you would find that the computer's move choice would both
change at predictable intervals, and that the quality of the move,
measured against human opponents, would get better over time.

Those predictable points are the ply levels. At every ply the horizon
for the computer goes farther and farther out, and as it goes farther
out the quality of the move improves, merely by eliminating mistakes.

----
As an aside: in the very beginning, it was an open issue whether making
better moves would lead to a stronger program than making fewer
mistakes would. Chess was largely found to be a game of mistakes (as
many games are), and it was easier for computer programmers to make
fewer mistakes than to make better chess moves.
----

That was fine, but each ply was exponentially further away. And so the
problem is ultimately not solvable in this direction. But the world
wanted to see what would happen.

Well, two things happened: we created very clever representations and
implementations of the game that dramatically sped up its processing
(by orders of magnitude), and Moore's law increased the power of our
machines by three orders of magnitude, from 1 MHz to 1000 MHz.

But just as we think that most if not all magnitude-class programming
optimizations have been discovered and implemented, it is also looking a
lot less likely that a given machine will get out to 10000 MHz, much less
1000000 MHz.

And just as it is starting to slow down, since ply cost is exponential,
a single order-of-magnitude improvement from here may not buy a single
ply of depth. Which means that even if my computer is 10 times better,
at normal time controls the chess gets no better at all.

But in the beginning, performance was everything. Getting the
additional ply was possible. And those fewer mistakes meant the computers
were getting stronger, the analysis was better, and we were passing IMs,
and then GMs, and then the super-GMs, and finally world championship
caliber, over a match.

And then....

We had achieved the last ply, and we look out, and the gulf between here
and the next one is beyond nearly every trick in the book (except
customized massively parallel hardware: Hydra and Deep Blue). And even
then, our current laptops are just a little smarter than Hydra or Deep
Blue in their understanding, and only 1 ply less deep, maybe... Still
world class.

Here we sit, nowhere to go predictably but sideways. Will the programs
change much? Will they be objectively better? Does it matter? THESE
ARE IMPORTANT QUESTIONS, and really very smart, very clever people have
been thinking about them for a very long time; if 64-bit were the be-all
and end-all, or if it really mattered at all, we would already have been
there. The fact is, it doesn't get us ANYWHERE near close enough to
the next ply to even matter. As a matter of fact, there is good reason
to believe it provides nearly nothing, since deeper doesn't matter; we
passed that threshold a while ago. And what used to take a full
machine to be world class now takes half a machine. And it doesn't
matter: the second half doesn't change the answer of the first half.

So, what happens now? Well, what happened is Fritz 10, and Rybka. And
Rybka is probably the most important. A good young programmer, who is
also a very good young chess player, got hold of modern chess code (not
Crafty, but Fruit?!), and he got to mold it with his more profound
chess knowledge. This meant that instead of just making the fewest
mistakes at a given ply, his program could make MUCH better positive
contributions to each move. And the reason this was even possible was
the large amount of excess computing power available between this ply
and the next. This combination, between world-class depth and
world-class depth+1, with ever more powerful chess knowledge, means
that at this point we have a much different way to program the computer
and play the game. And at this point computers will dramatically
impact how the game is played.

So, in sum: no, we are not anywhere near close enough to the next ply
for 64-bit or long addressing to matter with current engines.

But for newer engines that utilize the overhead for smarts instead of
depth, twice as much room may still be quite useful. We are in the
adolescent stage of this type of optimization. (We thought we had
been here before, in the Rebel and early HIARCS days, but a few
breakthroughs later we won a couple more ply and the "smart" programs
were being beaten. And we also learned, or believed, that the smarts
were only worth so many ply.) There were other interesting political
issues. Breakthroughs were no longer being shared. It was assumed that
the smart programs would get faster at the same rate as everyone else
because the techniques would be revealed, but many of the chess
optimization techniques became VERY proprietary and secret. The release
of Fruit(?!) was a dramatic change to that. There is more to this story,
I believe, than has been fully told. But these politics helped lead to
the stagnation.

But we now have different kinds of smarts, much more complicated, more
profound in effect. And... a new race. But the new race has just
started. 64-bit may be very useful to Rybka (where it might have little
effect on, say, Junior). But expect to see a new escalation in chess
knowledge and engine-to-engine matches. It's exciting.
#9: January 15th 07, 05:59 PM, posted to rec.games.chess.computer

In article ,
Johnny T wrote:
A quick review. Imagine a given chess engine running on a 1 MHz
machine, a Commodore 64 for instance. If you played, let's say, 1000
games on it, you would find that the computer's move choice would both
change at predictable intervals, and that the quality of the move,
measured against human opponents, would get better over time.


??? Is this on the assumption that your machine is slowly
increasing in speed? Otherwise it's hard to make sense of what you
are saying. But if so, then ...

Those predictable points are the ply levels. [...]


... you seem to have a wrong mental model of how all good
programs play. If you speed up your engine by a factor of 2, it
will, other things being equal, analyse twice as many nodes when
playing at a given rate. That won't give you an extra ply in all
positions; but it *will* give you an extra ply in those positions
where the decision not to analyse deeper was by a factor less than
two. [As a rough guide, that should be between 30% and 60% of
positions in the case of classical chess.]

And just as it is starting to slow down, since ply cost is exponential,
a single order-of-magnitude improvement from here may not buy a single
ply of depth. Which means that even if my computer is 10 times better,
at normal time controls the chess gets no better at all. [...]


If your computer is 10x better, then convention hath it that
you should get between 1.5 and 2 ply deeper. How much that pays off
in terms of "skill" is perhaps another matter. And you may well be
right that Moore's Law [which has been giving us exponential increases
in speed as well, so an extra ply every 3 years or so] is breaking down;
but the doomsters have been saying that for decades ....
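
The convention is just the exponential model turned around: if nodes
grow as b^depth for some effective branching factor b, then a speed-up
factor s buys log(s)/log(b) extra ply. A sketch, with assumed values of b:

#include <math.h>
#include <stdio.h>

/* Extra ply bought by speed-up s, assuming nodes ~ b^depth. */
double extra_ply(double b, double s) {
    return log(s) / log(b);
}

int main(void) {
    double bs[] = { 2.0, 3.0, 4.0, 6.0 };  /* assumed branching factors */
    for (int i = 0; i < 4; i++)
        printf("b = %.0f: 10x buys %.2f extra ply\n",
               bs[i], extra_ply(bs[i], 10.0));
    /* b = 3 gives about 2.1 ply and b = 4 about 1.7 ply -- the
       "1.5 to 2 ply" range; only with b above 10 would a 10x
       speed-up buy less than one ply. */
    return 0;
}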

We had achieved the last ply, and we look out, and the gulf between here
and the next one is beyond nearly every trick in the book. [...]


Still just an extra factor of between three and five ....

So, what happens now? Well, what happened is Fritz 10, and Rybka. And
Rybka is probably the most important. A good young programmer, who is
also a very good young chess player, got hold of modern chess code (not
Crafty, but Fruit?!), and he got to mold it with his more profound
chess knowledge. [...]


Well, that's a different matter. It has probably been true for
well over a decade that there was more to be gained from intelligent
static evaluation than from an extra ply or two. [And even more from
an intelligent notion of what it is that makes some moves irrelevant
and others vital, enabling more reliable pruning.]
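
To illustrate the pruning half of that remark, here is a toy alpha-beta
search over a hard-coded binary tree (nothing chess-specific; the leaf
scores are arbitrary). Once one reply is already good enough to refute a
move, the remaining subtrees are never visited, and better move ordering,
the "intelligent notion" above, increases how much gets skipped:

#include <stdio.h>

static const int leaf[8] = { 3, 5, 6, 9, 1, 2, 0, -1 };
static int visited = 0;

/* Alpha-beta over a complete binary tree of depth 3, indexed
   heap-style: root is node 1, leaves are nodes 8..15. */
int alphabeta(int node, int depth, int alpha, int beta, int maximizing) {
    if (depth == 0) { visited++; return leaf[node - 8]; }
    if (maximizing) {
        int best = -1000;
        for (int child = 2 * node; child <= 2 * node + 1; child++) {
            int v = alphabeta(child, depth - 1, alpha, beta, 0);
            if (v > best) best = v;
            if (best > alpha) alpha = best;
            if (alpha >= beta) break;  /* beta cut-off: prune the rest */
        }
        return best;
    } else {
        int best = 1000;
        for (int child = 2 * node; child <= 2 * node + 1; child++) {
            int v = alphabeta(child, depth - 1, alpha, beta, 1);
            if (v < best) best = v;
            if (best < beta) beta = best;
            if (alpha >= beta) break;  /* alpha cut-off */
        }
        return best;
    }
}

int main(void) {
    int score = alphabeta(1, 3, -1000, 1000, 1);
    /* Prints: root score 5, 5 of 8 leaves visited. */
    printf("root score %d, %d of 8 leaves visited\n", score, visited);
    return 0;
}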

--
Andy Walker, School of MathSci., Univ. of Nott'm, UK.

#10: January 16th 07, 04:16 AM, posted to rec.games.chess.computer

Dr A. N. Walker wrote:
In article ,
Johnny T wrote:
A quick review. Imagine a given chess engine running on a 1 MHz
machine, a Commodore 64 for instance. If you played, let's say, 1000
games on it, you would find that the computer's move choice would both
change at predictable intervals, and that the quality of the move,
measured against human opponents, would get better over time.


??? Is this on the assumption that your machine is slowly
increasing in speed? Otherwise it's hard to make sense of what you
are saying. But if so, then ...

Those predictable points are the ply levels. [...]


... you seem to have a wrong mental model of how all good
programs play. If you speed up your engine by a factor of 2, it
will, other things being equal, analyse twice as many nodes when
playing at a given rate. That won't give you an extra ply in all
positions; but it *will* give you an extra ply in those positions
where the decision not to analyse deeper was by a factor less than
two. [As a rough guide, that should be between 30% and 60% of
positions in the case of classical chess.]


Actually, I am quite sure that you have the wrong mental model. 2x will
not change the move in 50% of the cases, nor will 10x give you 1.5 to 2
ply. This is because the cost of each ply is exponential (let that word
sink in for a minute).

We are not getting there by hardware, and it is likely there are no new
huge programming breakthroughs on depth.

I will give you that in some really small number of cases, some extra
speed, today, may result in a change of move. (It only matters if the
move changes...) But that might change the result of one game, which is
going to be difficult to see over the variance. And soon I will not even
give you that.

It just takes too much to get to an answer that is different from the
current answer. The problem is just too deep: beyond one order of
magnitude away.

You are just wrong about that 10x and the factor of 3-5. It is no longer
true, because the depth problem is NOT linear. It is not even
geometric (as you imply); it is exponential. It is beyond our reach.
Fortunately (?) this also coincided with world-class play. Even more
fortunate was the Rybka development. This accident could easily have
not happened. They/he got depth for "free" and has been able to add
smarts. Very interesting.

And skill has been more about not making tactical errors, and gross
strategic errors, which has depended on depth (ply) to "see". Now we
are moving to a stronger player at equivalent depth.



If your computer is 10x better, then convention hath it that
you should get between 1.5 and 2 ply deeper. How much that pays off
in terms of "skill" is perhaps another matter. And you may well be
right that Moore's Law [which has been giving us exponential increases
in speed as well, so an extra ply every 3 years or so] is breaking down;
but the doomsters have been saying that for decades ....

We had achieved the last ply, and we look out, and the gulf between here
and the next one is beyond nearly every trick in the book. [...]


Still just an extra factor of between three and five ....

So, what happens now? Well, what happened is Fritz 10, and Rybka. And
Rybka is probably the most important. A good young programmer, who is
also a very good young chess player, got hold of modern chess code (not
Crafty, but Fruit?!), and he got to mold it with his more profound
chess knowledge. [...]


Well, that's a different matter. It has probably been true for
well over a decade that there was more to be gained from intelligent
static evaluation than from an extra ply or two. [And even more from
an intelligent notion of what it is that makes some moves irrelevant
and others vital, enabling more reliable pruning.]


The decade may be right, though I am sure it is closer to 5 years. And
it is not about pruning (which has to do with depth); it has to do with
the "quality" of a move, which is a matter of move scoring... It is not
about finding better depth and better pruning, it is about creating
better tableaux.

Prior to this, having the deepest horizon that didn't result in tactical
failure was where all the strength was. It also happened to top out
around world-class strength, which is merely fortunate, and a bit odd.
Checkers achieved it way earlier; Go may never achieve it. Chess is
right on the edge. And chess might get significantly better on the trip
to the last ply. It will be really interesting if the last ply doesn't
result in different answers, or at least not many.